
Simple LLM Service

This repository contains a small end-to-end example LLM application. The project demonstrates a minimal production-like setup with a FastAPI backend serving a LangChain-based LLM service, a React frontend, PostgreSQL for persistent storage, and Redis for caching/rate-limiting.

Overview

  • Purpose: Example app showing how to wire a simple LLM service into a web app with persistent storage and caching.
  • Backend: backend_service — FastAPI app exposing API endpoints and the LangChain-powered LLM service.
  • Frontend: frontend_service — React app that interacts with the backend.
  • Data stores: PostgreSQL for storing application data (users, query logs) and Redis for caching and rate limiting.
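The rate-limiting role Redis plays in this stack can be illustrated with a minimal in-process sketch. This is not the repository's implementation: the class, limit, and window below are hypothetical, and a real deployment would use Redis commands (e.g. `INCR` plus `EXPIRE`) so that counters are shared across backend instances:

```python
import time


class FixedWindowRateLimiter:
    """Minimal fixed-window rate limiter, standing in for Redis INCR/EXPIRE."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # (key, window_index) -> request count

    def allow(self, key, now=None):
        """Return True if the request fits inside the current window."""
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))
        count = self.counters.get(bucket, 0)
        if count >= self.limit:
            return False
        self.counters[bucket] = count + 1
        return True


limiter = FixedWindowRateLimiter(limit=2, window_seconds=60)
print(limiter.allow("user-1", now=0))   # True: first request in the window
print(limiter.allow("user-1", now=1))   # True: second request
print(limiter.allow("user-1", now=2))   # False: over the limit in this window
print(limiter.allow("user-1", now=61))  # True: a new window has started
```

With Redis, the `(key, window_index)` bucket would typically become a key like `rate:user-1:0` incremented atomically, with a TTL equal to the window length.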

Tech Stack

  • LLM / Orchestration: LangChain
  • API: FastAPI + Uvicorn
  • Frontend: React
  • DB / Cache: PostgreSQL, Redis
  • Containerization: Docker & Docker Compose

Repository Layout

  • backend_service/ — backend implementation, configs, and its own README.md.
  • frontend_service/ — React frontend and its README.md.
  • docker-compose.yml — orchestrates backend, frontend, Postgres, Redis, and other services.
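As a sketch of how docker-compose.yml might wire these services together (service names, ports, and image tags here are assumptions for illustration, not the repository's actual file):

```yaml
# Hypothetical sketch; see the repository's docker-compose.yml for the real config.
services:
  backend:
    build: ./backend_service
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    depends_on:
      - postgres
      - redis
  frontend:
    build: ./frontend_service
    depends_on:
      - backend
  postgres:
    image: postgres:16
  redis:
    image: redis:7
```

The `depends_on` entries control start order only; the backend should still retry its Postgres and Redis connections at startup.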

OpenAI Model Configuration

This application uses OpenAI models via LangChain. Before running the service, provide:

  1. OPENAI_API_KEY — your OpenAI API key, supplied as an environment variable.
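A minimal way the backend might read this key at startup (the variable name comes from above; the helper function itself is hypothetical):

```python
import os


def get_openai_api_key():
    """Read the OpenAI API key from the environment, failing fast if missing."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before starting the service"
        )
    return key
```

You can set it in your shell (`export OPENAI_API_KEY=...`) or pass it through Docker Compose via an `environment` entry or a `.env` file.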

Quick Start

To spin up the entire stack end to end:

docker-compose up --build

Accessing the Application

After starting the project, you can access the following:

Frontend (UI)

The main user interface of the application.

Backend API (Swagger Page)

FastAPI's auto-generated Swagger UI, where you can browse and try out the API endpoints.

License

MIT License

Support

For issues and questions, please create an issue in the repository.
