NOTE: Make sure ENVIRONMENT is set to local in .env:

```
ENVIRONMENT=local
```
Ingest dockets/documents/comments from S3 into SQL:

```
docker-compose exec sql-client python IngestDocket.py <DOCKET_ID>
```

Example:

```
docker-compose exec sql-client python IngestDocket.py DOS-2022-0004
```
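Docket IDs such as DOS-2022-0004 follow an agency-year-number pattern. As an illustration only (this helper is hypothetical and not part of IngestDocket.py), splitting the common form looks like:

```python
import re

# Hypothetical helper, not part of IngestDocket.py: split the common
# AGENCY-YEAR-NUMBER docket ID form (some agencies use extra segments,
# which this deliberately simple pattern does not cover).
DOCKET_ID_RE = re.compile(r"^([A-Z]+)-(\d{4})-(\d+)$")

def split_docket_id(docket_id):
    """Return (agency, year, number), or None if the ID doesn't match."""
    m = DOCKET_ID_RE.match(docket_id)
    return (m.group(1), int(m.group(2)), m.group(3)) if m else None
```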
Verify the insertion worked (optional):

```
docker-compose exec sql-client psql -h db -U postgres -d postgres -c "SELECT * FROM dockets;"
```
Ingest comments into OpenSearch:

```
docker-compose exec ingest python /app/ingest.py
```

(That runs the logic from your ingest_all_comments() function using S3 bucket paths.)
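For intuition, OpenSearch bulk ingestion interleaves an action line with each document source. A hypothetical sketch of how comments might be shaped for the _bulk API (the index name comments and the field names are assumptions, not the repo's actual schema):

```python
# Hypothetical sketch; the index name and field names are assumptions.
def to_bulk_actions(comments, index="comments"):
    """Interleave action metadata and document sources for the _bulk API."""
    actions = []
    for c in comments:
        actions.append({"index": {"_index": index, "_id": c["id"]}})
        actions.append({"docket_id": c["docket_id"], "text": c["text"]})
    return actions
```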
Test the query again from the queries container or the front end:

```
docker-compose exec queries python query.py "National"
```
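A search like the one above typically boils down to a single match query; a hypothetical sketch of the request body (the field name text is an assumption about query.py, not confirmed):

```python
# Hypothetical sketch of the query body a search like the above might send;
# the default field name "text" is an assumption about query.py.
def build_match_query(term, field="text"):
    """Build an OpenSearch match-query body for a single search term."""
    return {"query": {"match": {field: term}}}
```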
If you're running into unexpected errors, stale data, or inconsistent results, starting with a clean slate can help resolve hidden issues.
1. Drop all SQL tables:

```
docker-compose exec sql-client python DropTables.py
```

2. (Optional) Verify no tables exist:

```
docker-compose exec sql-client psql -h db -U postgres -d postgres -c "\dt"
```

You should see:

```
Did not find any relations.
```
3. Delete OpenSearch indices (such as comments and comments_extracted_text):

```
docker-compose exec ingest python /app/delete_index.py
```

Then type yes when prompted.
4. Recreate fresh tables in SQL:

```
docker-compose exec sql-client python CreateTables.py
```

5. Re-ingest your data (SQL & OpenSearch):

```
# For SQL:
docker-compose exec sql-client python IngestDocket.py <DOCKET_ID>

# For OpenSearch (comments):
docker-compose exec ingest python /app/ingest.py
```

By resetting your data infrastructure this way, you eliminate hidden state that might be causing issues.
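Steps 1-5 above can be bundled into one wrapper. A sketch (this script is not part of the repo; it assumes the same service and script names used above, and answers delete_index.py's prompt by writing yes to its stdin):

```python
import subprocess

# Hypothetical wrapper for reset steps 1-5 above; not part of the repo.
# Pass a different runner (e.g. one that just records commands) to dry-run.
def reset_pipeline(docket_id, runner=subprocess.run):
    exec_ = ["docker-compose", "exec"]
    steps = [
        exec_ + ["sql-client", "python", "DropTables.py"],
        # -T disables the TTY so the "yes" answer can be sent on stdin
        exec_ + ["-T", "ingest", "python", "/app/delete_index.py"],
        exec_ + ["sql-client", "python", "CreateTables.py"],
        exec_ + ["sql-client", "python", "IngestDocket.py", docket_id],
        exec_ + ["ingest", "python", "/app/ingest.py"],
    ]
    for cmd in steps:
        answer = "yes\n" if cmd[-1].endswith("delete_index.py") else None
        runner(cmd, check=True, input=answer, text=True)
```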
If you want a completely clean slate, including images and volumes, run:

```
docker compose down -v --remove-orphans
docker system prune -af --volumes
```

Then rebuild and restart the containers:

```
docker compose build --no-cache
docker compose up -d
```

Remember to recreate the tables afterwards:

```
docker-compose exec sql-client python CreateTables.py
```