MindRelay is a macOS Menu Bar and CLI application that serves as an OpenAI-compatible API relay for Apple's Foundation Models (Apple Intelligence). It allows you to use your Mac's built-in AI capabilities with any client or tool that supports the OpenAI API format.
- OpenAI Compatible: Seamlessly integrates with tools like Chatwise and other AI clients via the `/v1/chat/completions` endpoint.
- Privacy First: All processing happens locally on your Mac using Apple Intelligence. No data leaves your device.
- Efficient Resource Management: Automatically loads and unloads models based on activity to save memory and battery.
- Customizable: Configure the server port, bind address, and model unload timeout.
- Menu Bar Native: Minimalist design that stays out of your way.
You can get MindRelay in two ways:
```shell
brew tap pruizlezcano/tap
brew install --cask mindrelay
```

Alternatively, download the latest DMG or ZIP directly from the Releases page.
*Using MindRelay with Chatwise to generate code and content locally.*
- macOS Tahoe (26.0+)
- Apple Intelligence enabled and available on your device.
MindRelay automatically starts the API server when launched. You can check the status and copy the server URL directly from the Menu Bar.
Go to Settings from the Menu Bar to configure:
- Port: Change the port if `11343` is in use.
- Bind Address: Choose between `localhost` or `0.0.0.0` for local or network access.
- Model Unload Timeout: Set how long the model stays in memory after the last request.
- `GET /health`: Check server status.
- `GET /v1/models`: List available models (currently returns a static list containing `apple-foundation-model`).
- `GET /v1/models/{model_id}`: Get details about a specific model.
- `POST /v1/chat/completions`: Main endpoint for generating chat completions using Apple Intelligence.
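Because these endpoints follow the OpenAI schema, any HTTP client can talk to them. A minimal stdlib-only Python sketch (the helper names are illustrative; it assumes the server is running on the default port `11343`):

```python
import json
import urllib.request

BASE_URL = "http://localhost:11343"  # MindRelay's default port

def build_chat_request(prompt: str,
                       system: str = "You are a helpful assistant.") -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": "apple-foundation-model",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    }

def chat(prompt: str) -> str:
    """POST the payload to MindRelay and return the assistant's reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Hello!"))
```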
```shell
curl http://localhost:11343/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "apple-foundation-model",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

MindRelay includes an integrated CLI for direct interactions and automation.
- Auto with Homebrew: If you installed MindRelay via Homebrew, the `mindrelay` CLI tool is automatically linked and ready to use in your terminal.
- Manual Installation: If you downloaded the app manually, you can install the CLI tool directly from the app's Settings.
**Prompt**

Send a direct prompt to the model and get the entire response at once:

```shell
mindrelay "What is the capital of France?"
```

**Stream Prompts**

Stream the response continuously to stdout as it's generated:

```shell
mindrelay --stream "Write a short poem about space."
```

**Pipe**

Pipe stdin directly into the AI for processing or summarizing:

```shell
echo "Hello" | mindrelay
```

**Server**

Start the OpenAI-compatible HTTP server directly from the terminal (useful for headless environments or automation):

```shell
mindrelay --serve
```

| Option | Description |
|---|---|
| `--serve` | Start the OpenAI-compatible HTTP server. |
| `--stream` | Stream a direct prompt response to stdout. |
| `--port <port>` | Port to use with `--serve`. |
| `--addr <address>` | Bind address to use with `--serve`. Accepts `localhost`, `lan`, `all`, or any valid IP address. |
| `--model-unload-timeout <sec>` | Model auto-unload timeout in seconds to use with `--serve`. |
| `-v, --version` | Show the MindRelay CLI version. |
Note: If no configuration options are provided (like `--port`, `--addr`, or `--model-unload-timeout`), the CLI automatically uses the defaults configured in the MindRelay app settings.
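For automation beyond the shell, the one-shot prompt form can also be wrapped from a script. A sketch using Python's `subprocess` (`run_prompt` is a hypothetical helper; it assumes the `mindrelay` binary is on your `PATH`):

```python
import subprocess

def run_prompt(prompt: str, cli: str = "mindrelay") -> str:
    """Send a one-shot prompt through the CLI and return its stdout."""
    result = subprocess.run(
        [cli, prompt],
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError if the CLI exits non-zero
    )
    return result.stdout

if __name__ == "__main__":
    print(run_prompt("What is the capital of France?"))
```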
If you wish to build and run MindRelay from source for development purposes:
- Clone the repository: `git clone https://github.com/pruizlezcano/MindRelay.git`
- Open `MindRelay.xcodeproj` in Xcode.
- Select your development team in Signing & Capabilities.
- Press `Cmd + R` to build and run.
All data processing happens locally on your Mac using Apple Intelligence. MindRelay does not send any data to external servers. No user data is collected or stored by MindRelay.
- Server Not Starting: Ensure no other application is using port 11343. Change the port in Settings if needed.
- Model Not Loading: Check that Apple Intelligence is enabled and available on your Mac.
- API Errors: Use the `/health` endpoint to check server status and review logs for detailed error messages.
- Logs: You can view all logs in the Settings -> About section.
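The `/health` check above is easy to script. A stdlib-only Python sketch (`server_healthy` is an illustrative helper; it assumes the default port):

```python
import urllib.request

def server_healthy(base_url: str = "http://localhost:11343") -> bool:
    """Return True if MindRelay's /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, or DNS failure: server is not reachable.
        return False
```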
For support or feature suggestions, visit the GitHub Issues page.

