Privacy-first document Q&A with local RAG
SafeQueryAI implements document question-answering using a local Retrieval-Augmented Generation (RAG) pipeline and session-based processing.
```
Browser (React + TypeScript)
  |
  | HTTP
  v
ASP.NET Core API
  |
  +--> SessionService (session lifecycle)
  +--> FileStorageService (temporary storage)
  +--> TextExtractionService (PDF/CSV extraction)
  +--> DocumentIndexingService (chunk + embed)
  +--> VectorStoreService (in-memory retrieval)
  +--> QuestionAnsweringService (RAG orchestration)
  |
  v
Ollama local LLM runtime (loopback URL only)
```
Embeddings are generated with a local embedding model (`nomic-embed-text`), and answers are produced by a local chat model (`llama3.2`) using the retrieved context.

Backend structure:

| Component | Responsibility |
|---|---|
| Controllers | HTTP endpoints for files, questions, sessions, health |
| Services | Business logic for indexing, storage, RAG, expiry |
| Contracts | Request/response DTOs |
| Models | Domain entities (SessionInfo, DocumentChunk, etc.) |
| Interfaces | Service abstractions for dependency injection |
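The indexing and retrieval services above can be sketched in miniature. This is a TypeScript illustration of the idea, not the actual C# implementation; the chunk size, overlap, and function names are assumptions:

```typescript
// Split extracted text into overlapping chunks (sizes are illustrative).
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// In-memory vector store: rank chunks by similarity to the query embedding.
interface IndexedChunk { text: string; embedding: number[]; }

function topK(store: IndexedChunk[], query: number[], k: number): IndexedChunk[] {
  return [...store]
    .sort((x, y) =>
      cosineSimilarity(y.embedding, query) - cosineSimilarity(x.embedding, query))
    .slice(0, k);
}
```

The top-k chunks would then be concatenated into the prompt that `QuestionAnsweringService` sends to the local model.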

Frontend components:

| Component | Purpose |
|---|---|
| App.tsx | Main application component |
| QuestionForm | User input for questions |
| FileUploadPanel | Document upload interface |
| AnswerPanel | Streaming answer display |
| SessionInfo | Session details and management |
| UploadedFileList | List of files in session |
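The AnswerPanel renders the answer incrementally as SSE frames arrive. A minimal parser for the `data:` lines of an SSE buffer might look like this (the frame layout follows the SSE wire format; how SafeQueryAI actually names its events is an assumption):

```typescript
// Parse a buffer of Server-Sent Events text into data payloads.
// Frames are separated by a blank line; each data line starts with "data:".
function parseSseChunk(buffer: string): string[] {
  const payloads: string[] = [];
  for (const frame of buffer.split("\n\n")) {
    const dataLines = frame
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice("data:".length).trimStart());
    if (dataLines.length > 0) payloads.push(dataLines.join("\n"));
  }
  return payloads;
}
```

In the component, each decoded chunk from the stream's `ReadableStream` reader would be fed through a parser like this and the payloads appended to the displayed answer text.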

Current system characteristics:

| Characteristic | Current Implementation |
|---|---|
| Storage model | Temporary storage + in-memory session state |
| Session timeout | 60 minutes of inactivity |
| Supported file types | PDF, CSV |
| Upload size policy | 20 MB configured limit, 25 MB request ceiling |
| LLM runtime | Local Ollama only |
| API style | REST + SSE stream endpoint |
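The 60-minute inactivity timeout amounts to a staleness check plus a periodic sweep. A sketch of the idea, with field names and the `Map`-based store assumed for illustration:

```typescript
const SESSION_TIMEOUT_MS = 60 * 60 * 1000; // 60 minutes of inactivity

interface Session { id: string; lastActivity: number; } // epoch milliseconds

// A session expires once no request has touched it for the timeout window.
function isExpired(session: Session, now: number): boolean {
  return now - session.lastActivity > SESSION_TIMEOUT_MS;
}

// Sweep: drop expired sessions; the real service would also delete
// their temporary files.
function sweepSessions(sessions: Map<string, Session>, now: number): string[] {
  const removed: string[] = [];
  for (const [id, session] of sessions) {
    if (isExpired(session, now)) {
      sessions.delete(id);
      removed.push(id);
    }
  }
  return removed;
}
```

Because all session state is in memory, expiry (or a process restart) leaves no persisted document content behind, which is the point of the privacy-first design.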