- Replace the S3 bigquery_tables metadata lookup with a paginated GraphQL API
  call that fetches table and column descriptions from Base dos Dados
- Add gera_schemas.py for schema compilation and S3 inventory
- Add schemas.json and file_tree.md as generated reference artifacts
- Add websocket proxy in Caddyfile for ttyd on port 7681
- Ignore generated context/ artifacts in .gitignore
- Add openai to requirements.txt
- start.sh: remove prepara_db.py step; load S3 creds via DuckDB init file
- Caddyfile: switch to basic_auth with {env.BASIC_AUTH_HASH}, so the password can be rotated without rebuilding the image
- Dockerfile: drop Python/pip layers (no longer needed at runtime)
- haloy.yml: set server to 89.167.95.136, add BASIC_AUTH_HASH to env
- Remove requirements.txt (only needed for the local prepara_db.py, not the container)
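The DuckDB init file mentioned in the start.sh bullet could look like the fragment below. This is a sketch assuming DuckDB's `CREATE SECRET` syntax (available since 0.10); the bucket endpoint, region, and credential values are placeholders, with the real values expected to come from the environment at container start.

```sql
-- Loaded at startup, e.g.: duckdb -init /etc/duckdb/s3_init.sql
INSTALL httpfs;
LOAD httpfs;

-- Placeholder credentials; start.sh would template in the real values.
CREATE SECRET s3_creds (
    TYPE S3,
    KEY_ID 'AKIA_PLACEHOLDER',
    SECRET 'SECRET_PLACEHOLDER',
    REGION 'us-east-1'
);
```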
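The two Caddyfile bullets (ttyd websocket proxy and env-based basic_auth) could combine into something like the fragment below. A sketch under assumptions: the site address, user name, and path prefix are placeholders; Caddy v2's `reverse_proxy` handles the WebSocket upgrade that ttyd needs without extra directives.

```
example.com {
    basic_auth {
        # Hash produced by `caddy hash-password`; rotating the password
        # only means changing BASIC_AUTH_HASH, no image rebuild.
        admin {env.BASIC_AUTH_HASH}
    }

    # ttyd terminal over WebSocket on port 7681.
    handle_path /tty* {
        reverse_proxy localhost:7681
    }

    reverse_proxy localhost:8080
}
```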
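The paginated GraphQL fetch above can be sketched as follows. This is a minimal sketch, not the actual implementation: the endpoint URL, the `allTable` query, and the field names are assumptions (Relay-style cursor pagination), not the confirmed Base dos Dados schema. The pagination loop takes an injectable `run_query` callable so it can be exercised without network access.

```python
import json
import urllib.request
from typing import Callable, Iterator, Optional

# Assumed endpoint; not verified against the real deployment.
ENDPOINT = "https://backend.basedosdados.org/graphql"

# Hypothetical query: 'allTable' and its fields are placeholders for
# whatever the real schema exposes.
QUERY = """
query Tables($after: String) {
  allTable(first: 100, after: $after) {
    pageInfo { hasNextPage endCursor }
    edges { node { name description } }
  }
}
"""

def run_query_http(query: str, variables: dict) -> dict:
    """POST one GraphQL request and return the 'allTable' payload."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps({"query": query, "variables": variables}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["allTable"]

def paginate(run_query: Callable[[str, dict], dict]) -> Iterator[dict]:
    """Follow cursor pagination until hasNextPage is false, yielding nodes."""
    cursor: Optional[str] = None
    while True:
        page = run_query(QUERY, {"after": cursor})
        for edge in page["edges"]:
            yield edge["node"]
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            break
        cursor = info["endCursor"]
```

A real run would be `list(paginate(run_query_http))`; swapping in a stub for `run_query` is enough to test the cursor handling in isolation.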