Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
Updated Nov 6, 2023 - Python
🧠 A pure-Python RAG framework | FAISS + BM25 hybrid retrieval | supports Ollama / SiliconFlow | beginner-friendly for learning
Method for Long Context RLMs using verifiable Lambda Calculus
Open source RAG tool for AI document search - connect GitHub, Notion, Google Drive and ask questions with cited answers. Self-hosted with Ollama/OpenAI/Claude.
PDFs you can talk to.
This repository is an experiment with an agent that searches documents and iteratively asks follow-up questions in response to the main question. It automatically determines the best answer from the current documents, or recognizes when no answer exists.
Open-source RAG engine for ingesting, indexing, and querying unstructured documents
Frank Bot — RAG-powered AI assistant for any business. Built on ChromaDB + Claude. Drop in your docs, ask Frank anything.
Production-ready RAG framework for Python — multi-tenant chatbots with streaming, tool calling, agent mode (LangGraph), vector search (FAISS), and persistent MongoDB memory. Built on LangChain.
Enterprise-grade RAG and document search system for extracting reliable insights from real-world data.
AI-powered troubleshooting for ground support equipment. Deterministic RAG pipeline that ingests OEM maintenance manuals, answers with cited sources, and refuses when the documentation doesn't support a claim. Runs fully on-premises, with no cloud APIs.
🐋 DeepSeek-R1: Retrieval-Augmented Generation for Document Q&A 📄
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
A basic web interface for your personal Q&A bot with documents, based on KnowledgeGPT
ContextAgent is a production-ready AI assistant backend with RAG, LangChain, and FastAPI. It ingests documents, uses OpenAI embeddings, and stores vectors in ChromaDB 🐙
A full-stack RAG application that enables intelligent document Q&A. Upload PDFs, DOCX, or TXT files and ask questions powered by LangChain, ChromaDB, and Claude/GPT. Features smart chunking, semantic search, conversation memory, and source citations. Built with FastAPI & React + TypeScript.
An LLM-powered Slack bot built with Langchain.
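Several of the projects above pair BM25 keyword scoring with vector search for hybrid retrieval. As a rough illustration of the lexical half, here is a minimal Okapi BM25 scorer in plain Python; the function name, tokenization (lowercase whitespace split), and the `k1`/`b` defaults are illustrative choices, not taken from any listed repository.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document in `docs` against `query` with Okapi BM25.

    Returns one score per document; higher means more relevant.
    """
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n  # average document length
    # Document frequency: how many documents contain each term.
    df = Counter()
    for doc in tokenized:
        for term in set(doc):
            df[term] += 1
    scores = []
    for doc in tokenized:
        tf = Counter(doc)  # term frequencies within this document
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            # Smoothed IDF, as used in Lucene/Elasticsearch-style BM25.
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            # Term-frequency saturation with length normalization.
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(score)
    return scores
```

In a hybrid setup, these lexical scores are typically normalized and blended (or rank-fused) with cosine similarities from a vector index such as FAISS, so that exact keyword matches and semantic matches both contribute to the final ranking.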