Privacy-first document Q&A with local RAG
Internet access is needed only to install dependencies and pull Ollama models; after that, SafeQueryAI operates entirely locally.
Default local endpoints:
- Frontend: http://localhost:5173
- Backend API: http://localhost:5000/api
- Swagger UI: http://localhost:5000/swagger
- Ollama: http://localhost:11434

SafeQueryAI supports document question-answering on uploaded PDF and CSV files using session-based processing.
Supported file types are .pdf and .csv. The configured upload limit is 20 MB per file, with a 25 MB request ceiling.
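A client can pre-check uploads against these documented limits before sending a request. This is an illustrative sketch: the constant names and the `validate_upload` helper are not part of SafeQueryAI, only the extensions and size limits come from the documentation.

```python
import os

# Limits from the documentation; these names are illustrative,
# not actual SafeQueryAI configuration keys.
ALLOWED_EXTENSIONS = {".pdf", ".csv"}
MAX_FILE_BYTES = 20 * 1024 * 1024     # 20 MB per file
MAX_REQUEST_BYTES = 25 * 1024 * 1024  # 25 MB request ceiling

def validate_upload(filename, size_bytes):
    """Raise ValueError for files the documented limits would refuse."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("unsupported file type: %s" % (ext or "(none)"))
    if size_bytes > MAX_FILE_BYTES:
        raise ValueError("file exceeds 20 MB limit: %d bytes" % size_bytes)

validate_upload("report.pdf", 5 * 1024 * 1024)  # accepted
```

Checking client-side avoids a round trip, but the server still enforces the same limits on its side.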
Sessions expire after 60 minutes of inactivity by default.
Clearing a session removes its temporary files and in-memory index data immediately.
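The inactivity-expiry and immediate-clear behaviour described above can be sketched as a minimal in-memory store. This is an assumption-laden illustration, not SafeQueryAI's actual implementation; only the 60-minute default comes from the documentation.

```python
import time

# 60-minute inactivity timeout, per the documented default.
SESSION_TTL_SECONDS = 60 * 60

class SessionStore:
    """Minimal in-memory session store with inactivity expiry.

    Illustrative only: the real store also tracks each session's
    temporary files and index data.
    """

    def __init__(self):
        self._last_seen = {}  # session_id -> last-activity timestamp

    def touch(self, session_id, now=None):
        """Record activity, resetting the inactivity clock."""
        self._last_seen[session_id] = time.time() if now is None else now

    def is_expired(self, session_id, now=None):
        """True if the session is unknown or inactive past the TTL."""
        now = time.time() if now is None else now
        last = self._last_seen.get(session_id)
        return last is None or now - last > SESSION_TTL_SECONDS

    def clear(self, session_id):
        """Drop the session immediately (the real clear also deletes
        temporary files and in-memory index data)."""
        self._last_seen.pop(session_id, None)
```

Passing `now` explicitly keeps the expiry logic deterministic and easy to test.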
Document processing occurs locally, through the backend and the local LLM runtime. No cloud upload flow is implemented in the current architecture.
SafeQueryAI does not persist uploaded documents: it uses temporary storage and in-memory session data, both of which are removed on clear or expiry.
No external telemetry integration is documented in the current implementation.
Backend startup validates that the configured Ollama URL is a loopback address.
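A loopback check of this kind can be written in a few lines. This is a sketch of the documented startup check, not the backend's actual code; the real logic may differ in detail.

```python
import ipaddress
from urllib.parse import urlsplit

def is_loopback_url(url):
    """Return True if the URL's host is a loopback address
    (localhost, 127.0.0.0/8, or ::1)."""
    host = urlsplit(url).hostname
    if host is None:
        return False
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False  # non-numeric hostname other than localhost

is_loopback_url("http://localhost:11434")  # the default Ollama URL passes
```

Rejecting non-loopback URLs at startup ensures the backend never talks to a remote LLM endpoint by misconfiguration.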
See API Reference.
Streaming answers are supported: use POST /api/sessions/{sessionId}/questions/stream with text/event-stream.
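A client consuming that endpoint needs to split the `text/event-stream` body into events. The parser below is a generic server-sent-events sketch; the endpoint path comes from the documentation, but the exact payload shape SafeQueryAI emits is an assumption.

```python
def parse_sse_events(stream_text):
    """Collect `data:` payloads from a text/event-stream body.

    Generic SSE framing: consecutive `data:` lines belong to one
    event, and a blank line terminates the event.
    """
    events, buffer = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:  # blank line ends the current event
            events.append("\n".join(buffer))
            buffer = []
    if buffer:  # flush a final, unterminated event
        events.append("\n".join(buffer))
    return events
```

In practice the raw body would come from an HTTP client reading the response incrementally; this helper only handles the framing.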
Backend and frontend test suites are documented in Testing.
Not in the current release.
The current documentation describes a local, privacy-first architecture with session-based processing only.
Not in the current release scope.
See Roadmap for planned reliability, retrieval, and operational improvements.