# gpt-4o-mini

gpt-4o-mini is a lightweight AI chat proxy project that combines:

- a Python command-line chat client in `Application/chatbot.py`
- a Node.js/Express proxy server in `Server/index.js`
- a Vercel deployment config in `Server/vercel.json`
The Python client sends user prompts and conversation history to the proxy server, which forwards requests to a remote GPT-style chat API endpoint. The proxy builds a normalized message payload, applies rate limiting, and returns the assistant reply to the CLI client.
## Project structure

```text
Application/
├─ Chatbot.exe       # Optional Windows binary version of the CLI client
└─ chatbot.py        # Python chat client
Server/
├─ index.js          # Express proxy server implementation
├─ package.json      # Node project metadata and dependencies
├─ package-lock.json # Lockfile for Node dependencies
└─ vercel.json       # Vercel deployment configuration
```
## `Application/chatbot.py`

The Python script provides a terminal-based chat interface.

Key behaviors:

- uses `requests` to POST chat messages to the server endpoint
- uses `colorama` to colorize prompt and response text
- maintains conversation history in memory
- supports commands:
  - `clear`: reset the stored chat history and start a new conversation
  - `exit`: quit the application
- normalizes escaped characters from the remote server response
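The two self-contained pieces of this behavior, escape normalization and command handling, can be sketched as below. This is an illustrative sketch only: the function names are hypothetical and the real `chatbot.py` may implement them differently.

```python
import json

def normalize_reply(raw: str) -> str:
    # The upstream reply may contain literal escape sequences such as "\n"
    # and "\""; decode them so the terminal shows clean text.
    # (ASCII-oriented sketch; the real client may handle this differently.)
    return raw.encode("utf-8").decode("unicode_escape")

def handle_input(user_input, chat_history):
    """Dispatch one line of user input. Returns (action, history)."""
    cmd = user_input.strip().lower()
    if cmd == "exit":
        return "quit", chat_history
    if cmd == "clear":
        return "cleared", []  # drop the stored conversation
    # Otherwise build the JSON body the proxy server expects.
    payload = {"chatHistory": chat_history, "newMessage": user_input}
    return json.dumps(payload), chat_history
```

In a real client loop, the `json.dumps` result would be POSTed to the server endpoint and the reply appended to `chat_history`.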
## `Server/index.js`

The Express server acts as a relay between the Python client and the remote GPT API.

Key features:

- `POST /api/chat` accepts a JSON body with `chatHistory` and `newMessage`
- constructs a chat payload from the history and the new user message
- sends the request upstream to `https://chatplus.com/api/chat`
- randomizes IP and user-agent headers for each call
- enforces a daily rate limit of 1000 requests using `express-rate-limit`
- parses the raw text response to extract the assistant reply
- returns a JSON response with `success: true` and `reply`
- provides a health endpoint at `GET /`
## `Server/vercel.json`

This file configures Vercel to deploy `Server/index.js` as a Node serverless function.

Routes:

- all paths (`/(.*)`) are routed to `index.js`
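A minimal configuration matching that routing would look something like the following. This is a sketch of the typical `@vercel/node` setup implied above; the repo's actual `vercel.json` may differ in detail.

```json
{
  "version": 2,
  "builds": [
    { "src": "index.js", "use": "@vercel/node" }
  ],
  "routes": [
    { "src": "/(.*)", "dest": "index.js" }
  ]
}
```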
## Setup

- Open a terminal in `Server/` and install the Node dependencies:

  ```bash
  cd Server
  npm install
  ```

- For local development, note that `Server/index.js` exports the Express app but does not currently start a listening server by itself. To run locally, you can either deploy it as a Vercel serverless app or add a small `app.listen(...)` wrapper.

  Example local wrapper (not included in this repo):

  ```js
  const app = require("./index");
  const port = process.env.PORT || 3000;

  app.listen(port, () => console.log(`Server listening on ${port}`));
  ```

- Install the Python dependencies:

  ```bash
  pip install requests colorama
  ```

- Run the chat client:

  ```bash
  python Application/chatbot.py
  ```

  If you are on Windows and prefer a prebuilt binary, `Application/Chatbot.exe` is included.
## Usage

If deployed to Vercel, the server is available as a remote endpoint. For local testing with a listener wrapper, start the server, note the URL, and update the Python client URL if needed.

Run:

```bash
python Application/chatbot.py
```

Then type a message and press Enter.

Supported commands:

- `clear`: clear the conversation history
- `exit`: close the client

Example session:

```text
>> Hello
>> What can you do?
>> clear
>> exit
```
## API details

The Python client sends a POST request to `https://free-gpt-4o-mini-api.vercel.app/api/chat` with this payload:

```json
{
  "chatHistory": [
    {"user": "...", "assistant": "..."},
    ...
  ],
  "newMessage": "..."
}
```

The proxy server maps the incoming chat history into the upstream API format:

- each user turn becomes a `role: "user"` message
- each assistant turn becomes a `role: "assistant"` message
- the new user prompt is appended as the final message
- message items include an `id`, `createdAt`, `content`, and `parts`
- `selectedChatModelId` is set to `gpt-4o-mini`
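That mapping can be sketched as follows. The server performs it in JavaScript; this Python version is for illustration only, and the field values (timestamp format, `parts` layout) are assumptions based on the description above rather than the exact server code.

```python
import time
import uuid

def build_upstream_payload(chat_history, new_message):
    """Flatten {user, assistant} turns into the upstream message list."""
    now = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())

    def make_message(role, text):
        return {
            "id": str(uuid.uuid4()),  # unique per message (assumed format)
            "createdAt": now,
            "role": role,
            "content": text,
            "parts": [{"type": "text", "text": text}],
        }

    messages = []
    for turn in chat_history:
        if turn.get("user"):
            messages.append(make_message("user", turn["user"]))
        if turn.get("assistant"):
            messages.append(make_message("assistant", turn["assistant"]))
    # The new user prompt is appended as the final message.
    messages.append(make_message("user", new_message))
    return {"messages": messages, "selectedChatModelId": "gpt-4o-mini"}
```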
The upstream API returns the response as text lines. The proxy extracts the lines starting with `0:` and strips escape sequences to return clean assistant text.
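The extraction step can be sketched like this. The real implementation is JavaScript in `Server/index.js`, and the sample upstream format in the test below is hypothetical; only the `0:` prefix behavior is taken from the description above.

```python
import codecs

def extract_reply(raw_response: str) -> str:
    """Keep only lines prefixed with '0:', strip surrounding quotes,
    and decode escape sequences into clean text."""
    chunks = []
    for line in raw_response.splitlines():
        if not line.startswith("0:"):
            continue  # skip metadata / event lines
        piece = line[2:].strip()
        if piece.startswith('"') and piece.endswith('"'):
            piece = piece[1:-1]  # drop the JSON-style quotes
        chunks.append(codecs.decode(piece, "unicode_escape"))
    return "".join(chunks)
```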
## Deployment

This project is already configured for deployment on Vercel, using `Server/index.js` as the entrypoint.

- `Server/vercel.json` controls build and route configuration
- `@vercel/node` is used for serverless deployment

To run locally, add a listen wrapper or a simple server start script, then start the server with `node server.js` or a similar file.

If the Python client must point to a local server, update the URL in `Application/chatbot.py`:

```python
url = "http://localhost:3000/api/chat"
```

## Notes

- This repository does not use an official OpenAI API key. The proxy forwards chat messages to a public endpoint at `https://chatplus.com/api/chat`.
- The server sets a daily rate limit of 1000 requests.
- The current server code is tailored for serverless deployment and may require a small wrapper for local `node index.js` execution.
- Response formatting includes escape-sequence normalization so that output displays as clean text in the terminal.
## Future improvements

If you want to enhance this project, consider:

- adding a local `app.listen(...)` startup entrypoint for Node
- making the client endpoint configurable via environment variables
- persisting chat history to disk or a database
- adding authentication or abuse protection to the proxy
- adding more robust parsing for upstream API responses
## License

This project does not include a license file. If you want to publish it, add a `LICENSE` or `LICENSE.md` file.
## Terms of use

This API is provided for educational purposes only. By using this API, you agree to the following:

- **No abuse** – You will not overload, exploit, or misuse this API in any way, including but not limited to automated scraping, denial-of-service attacks, or unauthorized data extraction.
- **No liability** – The creator of this API is not responsible for any damage, loss, legal consequences, data breaches, system failures, or any other issues resulting from the use or misuse of this API.
- **Use at your own risk** – This API is offered "as is" without any warranty or guarantee of availability, accuracy, or security.
- **Educational only** – This API is intended solely for learning, testing, and personal development. It is not approved for production or commercial use.
- **Compliance** – You are solely responsible for complying with all applicable laws and regulations when using this API.
Built with ❤ by @FurqanAhmadKhan
Star it if you find it useful
