This project is intended for research and educational purposes only.
Please refrain from any commercial use and act responsibly when deploying or modifying this tool.
This project is an updated and improved fork of the original WebAI-to-API by Amm1rr. It acts as a bridge, converting the web-based Google Gemini interface into a standard OpenAI-compatible API format.
This specific fork has been heavily modified to support the newest generation of Gemini models and to ensure flawless integration with complex, modern AI frontends like Open WebUI.
- Bypassed 422 Errors: Resolved the `422 Unprocessable Entity` errors caused by strict payload validation. The API now safely ignores unexpected metadata (like `temperature` or `top_p`) sent by advanced frontends.
- Dynamic Discovery: Added a dynamic `/v1/models` endpoint so interfaces like Open WebUI can automatically discover and populate available models without manual configuration.
- No More Hardcoding: Hardcoded model names have been completely removed from the Python source code (`request.py` and `chat.py`).
- Global Config: Model definitions are now read globally from `config.conf`. When new models (like Gemini 3.1 or 4.0) are released, you only need to update your text file; no code edits or Docker image rebuilds are required.
- Safe Startup: Fixed a bug where the application would silently overwrite mapped configuration files with blank defaults upon container restart.
- Plug-and-Play: The `Dockerfile` has been rewritten for true "plug-and-play" deployment on home servers, Unraid environments, and standard Docker engines.
- Baked-in Variables: Essential environment variables (like `PYTHONPATH=/app/src`) and performance flags (like `--workers 4`) are now baked directly into the image, eliminating the need for complex manual orchestrator configuration.
- Absolute Paths: File paths have been converted to absolute paths to prevent silent overwrites of mapped persistent volumes.
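To illustrate the 422 fix above, here is a minimal sketch of the whitelist idea (this is not the project's actual code; the `sanitize_payload` helper and the `ACCEPTED_FIELDS` set are hypothetical): extra OpenAI-style keys are simply dropped before validation instead of rejecting the whole request.

```python
# Hypothetical whitelist of payload fields the bridge understands.
ACCEPTED_FIELDS = {"model", "messages", "stream"}

def sanitize_payload(payload: dict) -> dict:
    """Keep only known keys so extras like 'temperature' never cause a 422."""
    return {k: v for k, v in payload.items() if k in ACCEPTED_FIELDS}

raw = {
    "model": "gemini-3.0-pro",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,  # silently ignored instead of triggering a 422
    "top_p": 0.9,        # silently ignored instead of triggering a 422
}
print(sanitize_payload(raw))
```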
Your `config.conf` file should be mapped to `/app/config.conf` inside the container. Define your models under the `[AI]` section like this:
```ini
[AI]
default_ai = gemini
default_model_gemini = gemini-3.0-pro
model_pro = gemini-3.0-pro
model_flash = gemini-3.0-flash
model_thinking = gemini-3.0-flash-thinking
```

Then start the container with your config mapped in:

```shell
docker run -d \
  --name webai_server \
  -p 6969:6969 \
  -v /path/to/your/config.conf:/app/config.conf \
  bluerocky/webai:latest
```

Point your compatible frontend (NextChat, Open WebUI, etc.) to `http://<your-server-ip>:6969/v1` and use any string for the API key.
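As a sketch of how dynamic discovery can work, the snippet below (illustrative only; `discover_models` is a hypothetical helper, not the project's source) reads the `model_*` keys from the `[AI]` section and shapes them into the OpenAI-style list that a `/v1/models` endpoint returns:

```python
import configparser

def discover_models(conf_text: str) -> dict:
    """Build an OpenAI-style /v1/models response from config.conf.

    Any key in [AI] starting with 'model_' is treated as an available model,
    so adding a new model is a one-line config change.
    """
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    models = [v for k, v in parser.items("AI") if k.startswith("model_")]
    return {
        "object": "list",
        "data": [{"id": m, "object": "model", "owned_by": "gemini"} for m in models],
    }

conf = """
[AI]
default_ai = gemini
default_model_gemini = gemini-3.0-pro
model_pro = gemini-3.0-pro
model_flash = gemini-3.0-flash
model_thinking = gemini-3.0-flash-thinking
"""
print(discover_models(conf))
```

With the example config above, the three `model_*` entries appear as model IDs, which is what lets Open WebUI populate its model dropdown without manual configuration.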