steathy/WebAI
Disclaimer

This project is intended for research and educational purposes only.
Please refrain from any commercial use and act responsibly when deploying or modifying this tool.


WebAI-to-API (Enhanced Fork)

This project is an updated and improved fork of the original WebAI-to-API by Amm1rr. It acts as a bridge, converting the web-based Google Gemini interface into a standard OpenAI-compatible API format.

This specific fork has been heavily modified to support the newest generation of Gemini models and to ensure flawless integration with complex, modern AI frontends like Open WebUI.

✨ Key Improvements in this Version

Seamless Open WebUI Integration

  • Bypassed 422 Errors: Resolved the 422 Unprocessable Entity errors caused by strict payload validation. The API now safely ignores unsupported parameters (such as temperature or top_p) sent by advanced frontends.
  • Dynamic Discovery: Added a dynamic /v1/models endpoint so interfaces like Open WebUI can automatically discover and populate available models without manual configuration.
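The two fixes above can be sketched roughly as follows. This is an illustrative stand-in, not the project's actual source: the field set and helper names are assumptions, but the shapes match the OpenAI-compatible API.

```python
# Illustrative sketch of the two fixes: tolerant payload parsing (no 422s)
# and a /v1/models response built from a configurable model list.

KNOWN_FIELDS = {"model", "messages", "stream"}  # assumed set of supported fields

def parse_chat_request(payload: dict) -> dict:
    """Keep only the fields the backend understands; silently drop extras
    such as temperature or top_p so strict frontends never trigger a
    422 Unprocessable Entity."""
    return {k: v for k, v in payload.items() if k in KNOWN_FIELDS}

def models_response(model_ids: list[str]) -> dict:
    """Shape the OpenAI-compatible /v1/models payload so frontends like
    Open WebUI can auto-discover the available models."""
    return {
        "object": "list",
        "data": [
            {"id": m, "object": "model", "owned_by": "webai"}
            for m in model_ids
        ],
    }
```

With this pattern, a frontend sending `temperature` or `top_p` simply has those keys dropped instead of failing validation.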

Dynamic, Future-Proof Configuration

  • No More Hardcoding: Hardcoded model names have been completely removed from the Python source code (request.py and chat.py).
  • Global Config: Model definitions are now read globally from config.conf. When new models (like Gemini 3.1 or 4.0) are released, you only need to edit config.conf; no code changes or Docker image rebuilds are required.
  • Safe Startup: Fixed a bug where the application would silently overwrite mapped configuration files with blank defaults upon container restart.
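A minimal sketch of this config-driven approach, assuming the key naming shown in the Configuration Setup section (the helper names here are hypothetical, not the project's actual functions):

```python
import configparser
import os

CONFIG_PATH = "/app/config.conf"  # absolute path matching the mapped volume

def load_models(path: str = CONFIG_PATH) -> dict[str, str]:
    """Read every model_* key from the [AI] section, so adding a new
    model is a config edit, never a code change."""
    parser = configparser.ConfigParser()
    parser.read(path)
    if not parser.has_section("AI"):
        return {}
    return {k: v for k, v in parser.items("AI") if k.startswith("model_")}

def ensure_config(path: str = CONFIG_PATH) -> None:
    """Safe startup: write defaults only when no config file exists, so a
    container restart never clobbers a mapped file with blank values."""
    if os.path.exists(path):
        return
    parser = configparser.ConfigParser()
    parser["AI"] = {"default_ai": "gemini"}
    with open(path, "w") as fh:
        parser.write(fh)
```

The `os.path.exists` guard is the key design point: the original bug came from unconditionally writing defaults at startup.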

Optimized Docker Deployment

  • Plug-and-Play: The Dockerfile has been rewritten for true "plug-and-play" deployment on home servers, Unraid environments, and standard Docker engines.
  • Baked-in Variables: Essential environment variables (like PYTHONPATH=/app/src) and performance flags (like --workers 4) are now baked directly into the image, eliminating the need for complex manual orchestrator configurations.
  • Absolute Paths: File paths have been converted to absolute paths to prevent silent overwrites of mapped persistent volumes.
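The baked-in settings described above might look like the following Dockerfile fragment. The base image, entrypoint module, and requirements file are assumptions for illustration; the `PYTHONPATH`, port, and `--workers 4` flag are taken from the bullets above.

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
# Baked-in environment, so the orchestrator needs no extra configuration
ENV PYTHONPATH=/app/src
EXPOSE 6969
# "main:app" is an assumed module path; --workers 4 is baked into the default command
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "6969", "--workers", "4"]
```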

⚙️ Configuration Setup

Your config.conf file should be mapped to /app/config.conf inside the container. Define your models under the [AI] section like this:

[AI]
default_ai = gemini
default_model_gemini = gemini-3.0-pro
model_pro = gemini-3.0-pro
model_flash = gemini-3.0-flash
model_thinking = gemini-3.0-flash-thinking

🚀 Quick Start (Docker)

docker run -d \
  --name webai_server \
  -p 6969:6969 \
  -v /path/to/your/config.conf:/app/config.conf \
  bluerocky/webai:latest

Point your compatible frontend (NextChat, Open WebUI, etc.) to http://<your-server-ip>:6969/v1 and use any string for the API key.
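If you want to verify the bridge without a frontend, a client request can be built with the standard library alone. This is a sketch under the Quick Start assumptions (server on localhost:6969, a model name from your config); the commented-out lines show how to actually send it.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for the bridge.
    Any string works as the API key, since the bridge does not validate it."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer any-string-works",
        },
    )

# To actually send it (requires the container from the Quick Start running):
# req = build_chat_request("http://localhost:6969/v1", "gemini-3.0-pro", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```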
