# Privacy-first document Q&A with local RAG
This guide covers a clean local setup for SafeQueryAI. You will need:
| Tool | Minimum Version | Purpose |
|---|---|---|
| .NET SDK | 8.0 | Backend API |
| Node.js | 18 | Frontend build and dev server |
| Ollama | Current stable | Local LLM runtime |
Once everything is running, the services are available at:

| Service | URL |
|---|---|
| Frontend | http://localhost:5173 |
| Backend API | http://localhost:5000/api |
| Swagger UI | http://localhost:5000/swagger |
| Ollama | http://localhost:11434 |
Clone the repository:

```shell
git clone https://github.com/JYOshiro/SafeQueryAI.git
cd SafeQueryAI
```
Start the Ollama runtime:

```shell
ollama serve
```

In a second terminal, pull the embedding and chat models:

```shell
ollama pull nomic-embed-text
ollama pull llama3.2
```
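Before moving on, it can help to confirm both models were actually pulled. A minimal sketch that scans captured `ollama list` output for the two model names this guide uses (the exact column layout of `ollama list` is not relied on, only that model names appear in it):

```shell
#!/bin/sh
# has_model NAME OUTPUT: succeed if NAME appears in the captured
# `ollama list` output passed as the second argument.
has_model() {
  printf '%s\n' "$2" | grep -q "$1"
}

# Capture the model list once; tolerate ollama not being installed yet.
models=$(ollama list 2>/dev/null || true)

for m in nomic-embed-text llama3.2; do
  if has_model "$m" "$models"; then
    echo "ok: $m"
  else
    echo "missing: $m (run: ollama pull $m)"
  fi
done
```

If either model reports `missing`, re-run the corresponding `ollama pull` command above.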
Run the backend:

```shell
cd backend
dotnet restore
dotnet run
```

The backend should start on http://localhost:5000.
In another terminal, run the frontend:

```shell
cd frontend
npm install
npm run dev
```

The frontend should start on http://localhost:5173.
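With all three processes running, a quick probe of each URL from the table above confirms the stack is up. A sketch using a small helper; it only checks that an HTTP response arrives, not what it contains:

```shell
#!/bin/sh
# Probe a URL and report UP/DOWN based on whether any HTTP response
# arrives within the timeout.
check_url() {
  if curl -s -o /dev/null --max-time 3 "$1"; then
    echo "UP $1"
  else
    echo "DOWN $1"
  fi
}

check_url http://localhost:5173            # Frontend
check_url http://localhost:5000/swagger    # Swagger UI
check_url http://localhost:11434           # Ollama
```

A `DOWN` for Ollama usually means `ollama serve` is not running; a `DOWN` for the backend or frontend usually means the corresponding terminal exited with an error worth reading.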
Open http://localhost:5173 in your browser and upload a .pdf or .csv file (up to 20 MB).

| Setting | Default |
|---|---|
| Session timeout | 60 minutes of inactivity |
| Supported file types | .pdf, .csv |
| Configured max upload size | 20 MB |
| Request hard limit | 25 MB |
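Because the configured limit is 20 MB (with a 25 MB hard limit on the request itself), checking file size locally saves a rejected round-trip. A minimal sketch; the `/api/documents` upload route in the comment is a hypothetical example, not a route confirmed by this guide:

```shell
#!/bin/sh
# Refuse files over the 20 MB configured upload limit before uploading.
MAX_BYTES=$((20 * 1024 * 1024))

check_size() {
  size=$(wc -c < "$1")
  if [ "$size" -gt "$MAX_BYTES" ]; then
    echo "too large: $1 ($size bytes > $MAX_BYTES)"
    return 1
  fi
  echo "ok: $1 ($size bytes)"
}

# Hypothetical usage (the /api/documents route is an assumption):
# check_size report.pdf && \
#   curl -F "file=@report.pdf" http://localhost:5000/api/documents
```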
Troubleshooting:

- Make sure Ollama is running (`ollama serve`) and both models appear in `ollama list`; `Ollama:BaseUrl` in the backend config defaults to http://localhost:11434.
- The backend serves http://localhost:5000 and the frontend http://localhost:5173; the dev proxy in frontend/vite.config.ts points to http://localhost:5000.
- Uploads are limited to .pdf or .csv files of at most 20 MB.
- If your machine uses a different ASP.NET local URL, update the local run settings and align references across this documentation set.