
Oracle API | AI Orchestrator Backend

A specialized Ruby on Rails 8 REST API for local LLM orchestration, technical guidance, and real-time inference streaming.

Project Architecture

Engineered with Rails 8, Ollama, and Sidekiq for high-performance AI processing.


📖 Oracle API - Backend

Oracle API is a technical orchestrator that bridges user requests with local Large Language Models. Built on Rails 8, it utilizes a service-oriented architecture to manage AI queries via Ollama, ensuring a responsive UI through asynchronous job processing and real-time state synchronization.

🔗 Video Presentation: Oracle API (Ruby on Rails + Ollama)

🔗 Frontend Repository: Oracle Chat (Next.js)

🛠 Built With

Tech Stack

Core & Jobs
  • Ruby on Rails 8 (REST API)
  • PostgreSQL (Database)
  • Sidekiq (Background Jobs)
  • Redis (Job Queue & State Sync)

AI & Security
  • Ollama (Local LLM Inference)
  • JWT (Stateless Authentication)
  • Kredis (Idempotency Layer)
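
To illustrate what the JWT layer provides, here is a stdlib-only sketch of HS256 token signing and verification. The real app presumably relies on the `jwt` gem; these helper names are hypothetical and exist only to show the mechanics behind stateless authentication:

```ruby
require "openssl"
require "json"

# URL-safe Base64 without padding, built on Array#pack so the sketch
# does not depend on the standalone base64 gem.
def b64url(data)
  [data].pack("m0").tr("+/", "-_").delete("=")
end

def b64url_decode(str)
  s = str.tr("-_", "+/")
  s += "=" * ((4 - s.length % 4) % 4)
  s.unpack1("m")
end

# Issues an HS256-signed token in the usual header.payload.signature shape.
def issue_token(payload, secret)
  header = { alg: "HS256", typ: "JWT" }
  signing_input = "#{b64url(header.to_json)}.#{b64url(payload.to_json)}"
  signature = OpenSSL::HMAC.digest("SHA256", secret, signing_input)
  "#{signing_input}.#{b64url(signature)}"
end

# Returns the decoded payload hash, or nil if the signature does not match.
def verify_token(token, secret)
  signing_input, _, signature = token.rpartition(".")
  expected = b64url(OpenSSL::HMAC.digest("SHA256", secret, signing_input))
  return nil unless OpenSSL.secure_compare(expected, signature)
  JSON.parse(b64url_decode(signing_input.split(".")[1]))
end
```

Because the server only needs the shared secret to verify a token, no session state is stored, which is what makes the scheme stateless.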

Key Features

  • Asynchronous AI Inference: Offloads heavy AI requests to Sidekiq workers so they never block the web request/response cycle.
  • Sidekiq-Rails Sync: Real-time synchronization via Redis to update the conversation state once the LLM finishes inference.
  • Service-Oriented Design: Dedicated Service Objects to encapsulate the complexity of Ollama API interactions.
  • JWT Authentication: Secure and stateless user access control.
  • Request Idempotency: Powered by Kredis to prevent duplicate generations during network instability.
  • Smart Seeding: Includes predefined System Prompts and test users to accelerate development environment setup.
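
The service-object pattern above can be sketched as follows. The class, method, and parameter names are illustrative assumptions rather than the repository's actual code, though `/api/generate` with `model`, `prompt`, and `stream` fields is Ollama's documented generation endpoint:

```ruby
require "json"

# Hypothetical service object encapsulating an Ollama generation call.
class OllamaGenerationService
  ENDPOINT = "/api/generate"

  # `transport` is any callable (path, body) -> response body string,
  # injected so the service can be exercised without a live Ollama server.
  def initialize(model: "llama3", transport:)
    @model = model
    @transport = transport
  end

  # Sends a non-streaming generation request and returns the model's text.
  def call(prompt)
    payload = { model: @model, prompt: prompt, stream: false }
    body = @transport.call(ENDPOINT, payload.to_json)
    JSON.parse(body).fetch("response")
  end
end
```

In production the transport would wrap `Net::HTTP` pointed at Ollama's default `http://localhost:11434`; injecting it keeps the Sidekiq job thin and the service testable in isolation.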

(back to top)

💻 Getting Started

Follow these steps to set up the backend orchestrator locally.

Prerequisites

  • Ruby 3.3.0+
  • PostgreSQL
  • Redis
  • Ollama (Installed and running)

Setup

Clone the repository:

git clone https://github.com/Nkaleth/oracle_api.git
cd oracle_api

Install

Install dependencies:

bundle install

Database Setup

Configure your database.yml and run:

rails db:prepare
rails db:seed

AI Model Setup (Ollama)

  1. Ensure Ollama is running: ollama serve
  2. Download the preferred model:
ollama pull llama3 # or your specific model

Usage

Start the background workers and the server:

# Terminal 1: Background Jobs
bundle exec sidekiq

# Terminal 2: Rails Server
rails s

The API will be available at http://localhost:3000.

(back to top)

👥 Authors

👤 Nilton Segura

(back to top)

🔭 Future Features

  • Smart Pagination (Pagy): Activate the pre-installed Pagy gem to handle large message histories efficiently.
  • Fine-grained Authorization: Implement Pundit or CanCanCan for secure resource access control (RBAC).
  • Vector Search (RAG): Integrate a vector database to provide the AI with local documentation context.
  • Multi-Model Support: Dynamic model selection per conversation thread.
  • Rate Limiting: Throttling for AI inference endpoints using rack-attack.
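
For the rate-limiting item, a rack-attack initializer could look roughly like this; the throttled path and limits are assumptions about the API's routes, not settled decisions:

```ruby
# config/initializers/rack_attack.rb (sketch)
class Rack::Attack
  # Allow at most 5 inference requests per IP per minute; the path
  # "/api/v1/messages" is a hypothetical inference endpoint.
  throttle("ai_inference/ip", limit: 5, period: 60) do |req|
    req.ip if req.post? && req.path.start_with?("/api/v1/messages")
  end
end
```

Throttled clients receive a 429 response by default, which keeps a single user from monopolizing the local LLM.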

(back to top)

🤝 Contributing

Feel free to fork this project and submit pull requests. For major changes, please open an issue first to discuss what you would like to change.

(back to top)

⭐️ Show your support

If you find this backend architecture useful, please give it a star! 🌟

(back to top)

🙏 Acknowledgements

  • The Rails team for the Rails 8 revolution and the rock-solid Action Cable architecture.
  • The Sidekiq community for providing the gold standard in high-performance background processing.
  • The Ollama community for making local LLM deployment accessible and efficient for developers.

(back to top)

📝 License

This project is MIT licensed.

(back to top)
