This Streamlit application is an AI copilot for NYC Department of Buildings (DOB) code questions and permit application reviews. It uses NVIDIA NIM language models via LangChain to answer questions about the 2022 NYC Construction Codes and any additional reference sites you provide. Optionally, you can upload PDF permit/application documents; the app will chunk and embed them with FAISS so responses can cite and reason over your own documents alongside official code references.
- NYC focus: Tailored to the 2022 NYC Construction Codes with links and citations
- Q&A chat: Ask free‑form questions; answers stream in real time
- Document review (RAG): Upload building permit applications as PDFs; they are vectorized with FAISS for retrieval‑augmented answers
- Reference curation: Add/remove trusted reference websites in the sidebar
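Reference curation presumably works by folding the curated site list into the model's system prompt. A minimal sketch of that idea (the function name, prompt wording, and default list below are illustrative assumptions, not the app's actual code):

```python
DEFAULT_REFERENCES = [
    # Illustrative default; the app ships with its own curated list
    "https://www.nyc.gov/site/buildings/codes/2022-construction-codes.page",
]

def build_system_prompt(references: list[str]) -> str:
    """Assemble a system prompt that asks the model to cite the given sites."""
    sites = "\n".join(f"- {url}" for url in references)
    return (
        "You are an assistant for NYC DOB code questions. "
        "Cite these trusted references where relevant:\n" + sites
    )
```

Adding a site in the sidebar would then simply append to the list before the prompt is rebuilt for the next turn.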
- Ask a question directly in the chat, or click an example prompt in the sidebar.
- Optionally upload one or more PDF files, click "Process Documents" to embed them, then ask questions referencing their content.
- Sample file included: use `building_permit_application_sample.pdf` to test the document review functionality.
- Add additional reference websites in the sidebar; the app will include them in the system prompt for better citations.
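Conceptually, "Process Documents" splits each PDF's extracted text into overlapping chunks before embedding them into FAISS. A minimal sketch of that chunking step (the parameter values are illustrative, not the app's actual settings):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into overlapping fixed-size chunks for embedding.

    Overlap preserves context that would otherwise be cut at chunk boundaries.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping `overlap` chars shared
    return chunks
```

In the app itself this role is played by LangChain's text splitters; each resulting chunk is embedded and stored in the FAISS index for retrieval.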
- Frontend: Streamlit
- Model: NVIDIA NIM `meta/llama-3.1-70b-instruct` via `langchain_nvidia_ai_endpoints`
- RAG: `langchain`, `faiss-cpu`, `pypdf`
- Clone and enter the repo

  ```bash
  git clone <your-fork-or-repo-url>
  cd Building_Permit_NIM_github
  ```

- Create a virtual environment and install dependencies

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate  # Windows: .venv\Scripts\activate
  pip3 install -r requirement.txt
  ```

- Configure environment variables (create a `.env` file)

  ```bash
  # .env
  NVIDIA_API_KEY=YOUR_NVIDIA_API_KEY
  ```

- Run the app

  ```bash
  streamlit run app.py
  ```

  Then open the URL shown (default http://localhost:8501).
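The app reads `NVIDIA_API_KEY` from the environment (populated from `.env`). A small helper like the one below can fail fast with a clear message when the key is missing; this helper is an illustration, not part of `app.py`:

```python
import os

def require_api_key(name: str = "NVIDIA_API_KEY") -> str:
    """Return the API key from the environment, or raise a clear error."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return key
```

Failing at startup with an explicit message is friendlier than letting the first model call error out mid-chat.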
- Default LLM: `meta/llama-3.1-70b-instruct` via the hosted NVIDIA NIM cloud endpoint. You can point to a local compatible endpoint by adjusting the `ChatNVIDIA` initialization in `app.py`:

  ```python
  llm = ChatNVIDIA(base_url="http://0.0.0.0:8005/v1", model="llama-3.1-70b-instruct", streaming=True)
  ```

- Ports: Streamlit runs on 8501 (exposed in the Dockerfile and manifests).
- Requirements file: local installs use `requirement.txt`.
Build and run with Docker:

```bash
docker build -t building-app:latest .
# pass env via file or -e; expose Streamlit port
docker run --rm -p 8501:8501 --env-file .env building-app:latest
```

- Push an image that your cluster can pull (or adjust the image in `building-app-deployment.yaml`).
- Provide the `NVIDIA_API_KEY` by updating the placeholder value in `building-app-deployment.yaml`.
- Apply manifests:

```bash
kubectl apply -f building-app-deployment.yaml
# Or using the oc command
oc apply -f building-app-deployment.yaml
# OpenShift route (for external access)
oc apply -f building-app-route.yaml
```

Access the app via the Service/Route URL provided by your cluster.
Security note: Avoid committing API keys to source control. Use environment variables or Kubernetes Secrets.
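Following that note, a Kubernetes Secret can carry the key instead of a plaintext value in the Deployment. A sketch with illustrative names (adjust `building-app-deployment.yaml` to reference it):

```yaml
# Secret holding the key (the name "nvidia-api-key" is illustrative)
apiVersion: v1
kind: Secret
metadata:
  name: nvidia-api-key
type: Opaque
stringData:
  NVIDIA_API_KEY: YOUR_NVIDIA_API_KEY
---
# In building-app-deployment.yaml, under the container spec, reference it:
# env:
#   - name: NVIDIA_API_KEY
#     valueFrom:
#       secretKeyRef:
#         name: nvidia-api-key
#         key: NVIDIA_API_KEY
```

Create it with `kubectl apply -f` (or `oc apply -f`) before applying the Deployment so the pod can resolve the reference at startup.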
```text
app.py                         # Streamlit app
Dockerfile                     # Container build
requirement.txt                # Python dependencies
building-app-deployment.yaml   # Deployment + Service
building-app-route.yaml        # OpenShift Route
README.md                      # You are here
```
