A reverse API proxy for accessing ChatGPT models through third-party services. This project provides a simple interface to interact with various GPT models without requiring official OpenAI API keys.
⚠️ DISCLAIMER: This project is strictly for educational and testing purposes only. It is not actively maintained and should not be used in production environments. The service may be discontinued at any time without notice.
Last Updated: March 28, 2026 at 5:00 PM (Saturday)
Tested: March 22, 2026 - See test_results.md for current model status
- Original Website
- Supported Models
- API Endpoint
- Installation
- Usage Examples
- cURL Commands for Each Model
- Implementation Files
- Response Format
- Legal Notice
- Limitations
Service URL: https://chatgpt-api.vercel.app/
API Endpoint: https://chatgpt-api.vercel.app/api/chat
Test Online: https://reqbin.com/trjnxxyn - Try the API directly in your browser without any setup
The following models are available through this proxy. Operational status is verified as of the last update.
| Model Name | Status | Description |
|---|---|---|
| gpt-4o | ✅ Operational | GPT-4 Optimized - Latest optimized version |
| o3 | ❌ Not Working | O-series model version 3 |
| o4-mini | ✅ Operational | Lightweight O-series model version 4 |
| gpt-4.1 | ✅ Operational | GPT-4.1 - Enhanced version |
| gpt-4.1-mini | ✅ Operational | GPT-4.1 Mini - Lightweight version |
| gpt-4-turbo | ❌ Not Working | GPT-4 Turbo - Faster processing |
| gpt-4 | ❌ Not Working | GPT-4 - Standard version |
| gpt-3.5-turbo | ✅ Operational | GPT-3.5 Turbo - Fast and efficient |
| gpt-3.5-turbo-16k | ✅ Operational | GPT-3.5 Turbo - Extended context (16k tokens) |

Note: Models marked with ❌ worked in the past but are currently experiencing timeout issues. They may become available again in future updates. See test_results.md for detailed testing information.
Python:

```bash
pip install requests
```

Node.js:

```bash
npm install node-fetch
# or
npm install axios
```

The API accepts POST requests with a JSON payload containing the model name and conversation messages.
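As a sketch using only the Python standard library (so neither `requests` nor `axios` is strictly required), a minimal client might look like the following. The `build_payload` and `ask` helpers are illustrative, not part of the bundled implementation files:

```python
import json
import urllib.request

API_URL = "https://chatgpt-api.vercel.app/api/chat"

def build_payload(model, user_content):
    """Assemble the JSON body the proxy expects for a single-turn chat."""
    return {"model": model, "messages": [{"role": "user", "content": user_content}]}

def ask(model, question, timeout=60):
    """POST a question to the proxy and return the assistant's reply text."""
    data = json.dumps(build_payload(model, question)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    # Responses follow OpenAI's chat.completion shape.
    return body["choices"][0]["message"]["content"]
```

Usage: `print(ask("gpt-3.5-turbo", "Hello, how are you?"))`. Remember the service may time out or be unavailable, so wrap calls in your own error handling.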
Basic Request Structure:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Your question here"
    }
  ]
}
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "Explain quantum computing in simple terms"
      }
    ]
  }'
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o3",
    "messages": [
      {
        "role": "user",
        "content": "What are the benefits of machine learning?"
      }
    ]
  }'
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o4-mini",
    "messages": [
      {
        "role": "user",
        "content": "Write a haiku about programming"
      }
    ]
  }'
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      {
        "role": "user",
        "content": "Explain the theory of relativity"
      }
    ]
  }'
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-mini",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [
      {
        "role": "user",
        "content": "Generate a Python function to sort a list"
      }
    ]
  }'
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {
        "role": "user",
        "content": "Describe the water cycle"
      }
    ]
  }'
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ]
  }'
```

```bash
curl -X POST https://chatgpt-api.vercel.app/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo-16k",
    "messages": [
      {
        "role": "user",
        "content": "Summarize a long document about climate change"
      }
    ]
  }'
```

See chatgpt_proxy.py for a complete Python implementation with support for all models.
Quick Example:

```python
from chatgpt_proxy import ChatGPTProxy

# Initialize the proxy
proxy = ChatGPTProxy()

# Send a message
response = proxy.chat("gpt-3.5-turbo", "Hello, how are you?")
print(response)
```

See chatgpt_proxy.js for a complete JavaScript/Node.js implementation with support for all models.
Quick Example:

```javascript
const ChatGPTProxy = require('./chatgpt_proxy');

// Initialize the proxy
const proxy = new ChatGPTProxy();

// Send a message
proxy.chat('gpt-3.5-turbo', 'Hello, how are you?')
  .then(response => console.log(response));
```

The API returns responses in OpenAI's standard chat completion format:
```json
{
  "id": "chatcmpl-DMBo5lF78IrP8ImDqNBCXmkSRdL06",
  "object": "chat.completion",
  "created": 1774180973,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm just a computer program, so I don't have feelings, but I'm here and ready to help with anything you need. How can I assist you today?",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 36,
    "total_tokens": 48,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default",
  "system_fingerprint": null
}
```

- id: Unique identifier for the completion
- object: Type of object returned (chat.completion)
- created: Unix timestamp of creation
- model: Model used for generation
- choices: Array of completion choices
  - message: The generated message
    - role: Role of the message sender (assistant)
    - content: The actual response text
  - finish_reason: Reason for completion (stop, length, etc.)
- usage: Token usage statistics
  - prompt_tokens: Tokens in the prompt
  - completion_tokens: Tokens in the completion
  - total_tokens: Total tokens used
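Pulling the commonly used fields out of a parsed response can be sketched in Python as follows; `summarize_response` is a hypothetical helper that operates on a dict shaped like the sample above:

```python
def summarize_response(body):
    """Extract the reply text, stop reason, and token usage from a chat.completion dict."""
    choice = body["choices"][0]
    return {
        "content": choice["message"]["content"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": body["usage"]["total_tokens"],
    }
```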
IMPORTANT: By using this proxy service, you acknowledge and agree to the following terms:
- Educational Purpose Only: This project is intended solely for educational, research, and testing purposes. It is not designed or intended for commercial use or production environments.
- No Warranty: This software is provided "AS IS" without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement.
- Third-Party Service: This proxy relies on third-party services that are not affiliated with, endorsed by, or connected to OpenAI. The availability, reliability, and functionality of this service are entirely dependent on the third-party provider.
- No Official Support: This project is not maintained or supported by OpenAI or any official entity. There is no guarantee of updates, bug fixes, or continued availability.
- Rate Limiting: The service may implement rate limiting, throttling, or other restrictions at any time without notice.
- Data Privacy: Be cautious about sending sensitive, personal, or confidential information through this proxy. The data may be processed by third-party services with their own privacy policies.
- Compliance: Users are responsible for ensuring their use of this service complies with all applicable laws, regulations, and terms of service of relevant parties.
- Liability: The creators and contributors of this project shall not be held liable for any damages, losses, or issues arising from the use or inability to use this service.
- OpenAI Terms: This project does not grant you any rights under OpenAI's terms of service. Users should review and comply with OpenAI's official terms and policies.
- Discontinuation: This service may be discontinued at any time without prior notice or explanation.
- OpenAI, ChatGPT, GPT-3.5, GPT-4, and related trademarks are the property of OpenAI, L.P.
- This project is not affiliated with, endorsed by, or sponsored by OpenAI
- All model names and references are used for identification purposes only
Users of this proxy service are expected to:
- Use the service ethically and responsibly
- Not use the service for illegal activities
- Not attempt to abuse, exploit, or overload the service
- Respect the intellectual property rights of others
- Not use the service to generate harmful, misleading, or malicious content
By using this service, you agree to indemnify and hold harmless the project creators, contributors, and maintainers from any claims, damages, losses, liabilities, and expenses arising from your use of the service.
Based on recent testing (March 22, 2026):
- 6 out of 9 models are fully operational
- 3 models (gpt-4, gpt-4-turbo, o3) are experiencing timeout errors
- Operational models include: gpt-4o, gpt-4.1, gpt-4.1-mini, o4-mini, gpt-3.5-turbo, gpt-3.5-turbo-16k
For detailed test results, see test_results.md.
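Given this status, a client can route requests away from the timing-out models. The sketch below is illustrative: the `OPERATIONAL` list mirrors the March 22, 2026 test run and will drift as availability changes.

```python
# Models reported operational in the March 22, 2026 test run; this list is a
# snapshot of the README's status table, not queried from the service.
OPERATIONAL = [
    "gpt-4o", "gpt-4.1", "gpt-4.1-mini", "o4-mini",
    "gpt-3.5-turbo", "gpt-3.5-turbo-16k",
]

def pick_model(preferred, operational=OPERATIONAL):
    """Return `preferred` if it is currently working, else the first working model."""
    return preferred if preferred in operational else operational[0]
```

For example, a request targeting gpt-4 (currently timing out) would be redirected to gpt-4o.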
- Availability: Service uptime depends entirely on third-party infrastructure
- Rate Limits: Requests may be throttled without warning
- Model Availability: Some models may timeout or become unavailable
- Response Quality: May differ from official OpenAI API responses
- No Streaming: Real-time streaming responses are not supported
- Context Windows: Actual context limits may vary from official specifications
- No Authentication: No API key management or user authentication system
- No SLA: Zero guarantees on uptime, performance, or support
- Do not send sensitive or confidential information
- Do not use for applications requiring high security
- Be aware that requests may be logged by third-party services
- No encryption guarantees beyond standard HTTPS
- Response times may be slower than official API
- Concurrent request handling may be limited
- No guaranteed throughput or latency
Service Unavailable (503)
The backend service is down. Wait a few minutes and retry.
Rate Limit Exceeded (429)
Too many requests in a short period. Implement exponential backoff in your retry logic.
Timeout Errors (FUNCTION_INVOCATION_TIMEOUT)
Certain models (gpt-4, gpt-4-turbo, o3) are currently experiencing deployment timeouts. Try using alternative models like gpt-4o or gpt-4.1 instead.
Invalid Model Error
Double-check the model name spelling. Refer to the supported models table for exact names.
Connection Timeouts
Network issues or slow responses. Consider increasing your client timeout settings or using a lighter model.
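The exponential-backoff advice above can be sketched as a small retry helper; `with_backoff` is a hypothetical wrapper, and `max_retries`/`base_delay` should be tuned to your tolerance:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Invoke `call`; on failure wait base_delay * 2**attempt (plus jitter) and retry."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            # Exponential backoff with up to 100% random jitter.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Usage: `with_backoff(lambda: proxy.chat("gpt-4o", "Hello"))` retries transient 429/503 failures instead of giving up on the first error.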
This project is not actively maintained. For issues or questions:
- Check the troubleshooting section above
- Review the implementation files for examples
- Verify the original service is operational
This project is provided as-is for educational and testing purposes only. No license is granted for commercial use.
- Original service provider: https://chatgpt-api.vercel.app/
- OpenAI for developing the underlying models
- The open-source community
Last Updated: March 22, 2026 at 5:00 PM (Sunday)
Version: 1.0.0
Status: Experimental / Not Maintained
Remember: This is an unofficial proxy service for testing purposes only. For production use, please use the official OpenAI API.
Created with ❤️ by FurqanAhmadKhan
⭐ Don't forget to star the repo if you find it useful!
