FurqanAhmadKhan/gpt-4o-mini-api
gpt-4o-mini


Project Overview

gpt-4o-mini is a lightweight AI chat proxy project that combines:

  • a Python command-line chat client in Application/chatbot.py
  • a Node.js/Express proxy server in Server/index.js
  • a Vercel deployment config in Server/vercel.json

The Python client sends user prompts and conversation history to the proxy server, which forwards requests to a remote GPT-style chat API endpoint. The proxy builds a normalized message payload, applies rate limiting, and returns the assistant reply to the CLI client.


Project Structure

Application/
  ├─ Chatbot.exe        # Optional Windows binary version of the CLI client
  └─ chatbot.py         # Python chat client
Server/
  ├─ index.js           # Express proxy server implementation
  ├─ package.json       # Node project metadata and dependencies
  ├─ package-lock.json  # Lockfile for Node dependencies
  └─ vercel.json        # Vercel deployment configuration

Components

Python CLI Client (Application/chatbot.py)

The Python script provides a terminal-based chat interface.

Key behaviors:

  • uses requests to POST chat messages to the server endpoint
  • uses colorama to colorize prompt and response text
  • maintains conversation history in-memory
  • supports commands:
    • clear — reset the stored chat history and start a new conversation
    • exit — quit the application
  • normalizes escaped characters from the remote server response
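The behaviors above can be sketched as a minimal client loop. This is an illustration based on this README, not the actual Application/chatbot.py: the endpoint URL comes from the "How It Works" section, the function names are invented, and the colorama coloring is omitted for brevity.

```python
# Minimal sketch of the CLI client loop; the real client is Application/chatbot.py.
import requests

URL = "https://free-gpt-4o-mini-api.vercel.app/api/chat"  # deployed proxy endpoint

def send(history, message):
    """POST the conversation so far plus the new message; return the reply."""
    resp = requests.post(
        URL,
        json={"chatHistory": history, "newMessage": message},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]

def step(prompt, history, send_fn=None):
    """Handle one line of input; returns the reply, or None for 'clear'."""
    send_fn = send_fn or send
    if prompt == "clear":
        history.clear()  # reset the stored conversation
        return None
    reply = send_fn(history, prompt)
    history.append({"user": prompt, "assistant": reply})
    return reply

def main():
    """Interactive loop: call main() to start chatting; 'exit' quits."""
    history = []
    while True:
        prompt = input(">> ").strip()
        if prompt == "exit":
            break
        reply = step(prompt, history)
        if reply is not None:
            print(reply)
```

Keeping `step()` separate from the network call makes the command handling easy to exercise without a live server.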

Proxy Server (Server/index.js)

The Express server acts as a relay between the Python client and the remote GPT API.

Key features:

  • POST /api/chat accepts a JSON body with chatHistory and newMessage
  • constructs a chat payload from history and the new user message
  • sends the request upstream to https://chatplus.com/api/chat
  • randomizes IP and user-agent headers for each call
  • enforces a daily rate limit of 1000 requests using express-rate-limit
  • parses the raw text response to extract the assistant reply
  • returns a JSON response with success: true and reply
  • provides a health endpoint at GET /
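A successful call therefore returns a body shaped like the following (the reply text is illustrative):

```json
{
  "success": true,
  "reply": "Hello! How can I help you today?"
}
```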

Vercel Deployment (Server/vercel.json)

This file configures Vercel to deploy Server/index.js as a Node serverless function.

Routes:

  • all paths (/(.*)) are routed to index.js
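The file is not reproduced here, but a typical @vercel/node configuration matching the routing described above looks like this (contents assumed, not copied from the repo):

```json
{
  "version": 2,
  "builds": [{ "src": "index.js", "use": "@vercel/node" }],
  "routes": [{ "src": "/(.*)", "dest": "index.js" }]
}
```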

Installation

Server setup

  1. Open a terminal in Server/
  2. Install dependencies:
cd Server
npm install
  3. For local development, note that Server/index.js exports the Express app but does not currently start a listening server by itself. To run locally, you can either deploy it as a Vercel serverless app or add a small app.listen(...) wrapper.

Example local wrapper (not included in this repo):

const app = require("./index"); // the exported Express app
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Server listening on ${port}`));

Client setup

  1. Install Python dependencies:
pip install requests colorama
  2. Run the chat client:
python Application/chatbot.py

If you are on Windows and prefer a prebuilt binary, Application/Chatbot.exe is included.


Usage

Running the server

If deployed to Vercel, the server will be available as a remote endpoint.

For local testing with a listener wrapper, start the server and note the URL. Then update the Python client URL if needed.

Running the client

Run:

python Application/chatbot.py

Then type a message and press Enter.

Supported commands:

  • clear — clear conversation history
  • exit — close the client

Example session:

>> Hello
>> What can you do?
>> clear
>> exit

How It Works

Client request format

The Python client sends a POST request to https://free-gpt-4o-mini-api.vercel.app/api/chat with payload:

{
  "chatHistory": [
    {"user": "...", "assistant": "..."},
    ...
  ],
  "newMessage": "..."
}

Server request payload

The proxy server maps the incoming chat history into the upstream API format:

  • each user turn becomes a role: "user" message
  • each assistant turn becomes a role: "assistant" message
  • the new user prompt is appended as the final message
  • message items include an id, createdAt, content, and parts
  • selectedChatModelId is set to gpt-4o-mini
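As a sketch of this mapping (the real code is JavaScript in Server/index.js; the id format, the timestamp format, and the exact shape of parts are assumptions), the transformation looks like this in Python:

```python
# Python sketch of the history-to-messages mapping described above.
import uuid
from datetime import datetime, timezone

def make_message(role, text):
    """Build one upstream message item; id/createdAt formats are assumed."""
    return {
        "id": str(uuid.uuid4()),
        "createdAt": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "content": text,
        "parts": [{"type": "text", "text": text}],
    }

def build_payload(chat_history, new_message):
    """Flatten {user, assistant} turns into an ordered message list."""
    messages = []
    for turn in chat_history:
        messages.append(make_message("user", turn["user"]))
        messages.append(make_message("assistant", turn["assistant"]))
    # the new user prompt is appended as the final message
    messages.append(make_message("user", new_message))
    return {"messages": messages, "selectedChatModelId": "gpt-4o-mini"}
```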

Response processing

The upstream API returns text lines. The proxy extracts lines starting with 0: and strips escaped strings to return clean assistant text.
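A Python sketch of that extraction (the real parsing lives in Server/index.js; the assumption here is that each `0:` line carries a JSON-encoded string chunk, which is what makes the unescaping work):

```python
# Sketch of the line-oriented reply extraction described above.
import json

def extract_reply(raw_text):
    """Join the chunks found on lines beginning with '0:'."""
    parts = []
    for line in raw_text.splitlines():
        if line.startswith("0:"):
            # Each chunk is assumed to be a JSON string literal, e.g. 0:"Hello "
            # so json.loads undoes the escape sequences in one step.
            parts.append(json.loads(line[2:].strip()))
    return "".join(parts)
```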


Deployment Notes

Vercel

This project is already configured for deployment on Vercel using Server/index.js as the entrypoint.

  • Server/vercel.json controls build and route configuration
  • @vercel/node is used for serverless deployment

Local deployment

To run locally, add a listen wrapper or use a simple server start script, then start the server with node server.js or a similar file.

If the Python client must point to a local server, update the URL in Application/chatbot.py:

url = "http://localhost:3000/api/chat"

Notes & Caveats

  • This repository does not use an official OpenAI API key. The proxy forwards chat messages to a public endpoint at https://chatplus.com/api/chat.
  • The server sets a daily rate limit of 1000 requests.
  • The current server code is tailored for deployment as a serverless app and may require a small wrapper for local node index.js execution.
  • Response formatting includes escape sequence normalization so that output displays clean text in the terminal.

Recommended Improvements

If you want to enhance this project, consider:

  • adding a local app.listen(...) startup entrypoint for Node
  • making the client endpoint configurable via environment variables
  • persisting chat history to disk or a database
  • adding authentication or abuse protection to the proxy
  • adding more robust parsing for upstream API responses
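For example, the environment-variable suggestion could look like this in the Python client, where CHAT_API_URL is a hypothetical variable name:

```python
import os

# Hypothetical CHAT_API_URL override, falling back to the deployed endpoint
# named in this README.
DEFAULT_URL = "https://free-gpt-4o-mini-api.vercel.app/api/chat"
url = os.environ.get("CHAT_API_URL", DEFAULT_URL)
```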

License

This project does not include a license file. If you want to publish it, add a LICENSE or LICENSE.md file.

Disclaimer

This API is provided for educational purposes only.

By using this API, you agree to the following:

  • No abuse – You will not overload, exploit, or misuse this API in any way, including but not limited to automated scraping, denial‑of‑service attacks, or unauthorized data extraction.

  • No liability – The creator of this API is not responsible for any damage, loss, legal consequences, data breaches, system failures, or any other issues resulting from the use or misuse of this API.

  • Use at your own risk – This API is offered “as is” without any warranty or guarantee of availability, accuracy, or security.

  • Educational only – This API is intended solely for learning, testing, and personal development. It is not approved for production or commercial use.

  • Compliance – You are solely responsible for complying with all applicable laws and regulations when using this API.


Built with ❤ by @FurqanAhmadKhan

Star it if you find it useful

About

Reverse-engineered GPT-5-mini API proxy + Python chatbot. IP rotation, rate limiting, Vercel-ready.
