MCP (Model Context Protocol) server that exposes Perplexity AI search, research, and reasoning capabilities as tools.
This MCP server uses your Perplexity account session directly, so no API key is needed.
Perplexity offers a separate paid API with per-request pricing that is charged independently from your Pro subscription. With this MCP, you don't need to pay for API access; your existing Perplexity subscription (or even a free account) is enough.
Simply extract the session tokens from your browser cookies, and you're ready to use Perplexity search, research, and reasoning in your IDE.
The server can run without any authentication tokens. In this mode:
- Only `perplexity_search` (links only) and `perplexity_ask` (answer with sources) are available; `perplexity_research` and `perplexity_reason` require tokens.
- Both tools use the `turbo` model; `PERPLEXITY_ASK_MODEL` and `PERPLEXITY_REASON_MODEL` cannot be set (the server will throw an error if they are).
- File attachments (`files` parameter) are unavailable; they require tokens.
To use tokenless mode, simply omit `PERPLEXITY_SESSION_TOKEN` and `PERPLEXITY_CSRF_TOKEN` from your configuration.
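As a sketch, a tokenless setup in a client that uses the `mcpServers` format (shown later in this README) is just the command with no `env` block:

```json
{
  "mcpServers": {
    "perplexity": {
      "command": "npx",
      "args": ["-y", "perplexity-web-api-mcp"]
    }
  }
}
```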
For full access to all tools and model selection, provide both tokens as described in the Configuration section below.
- macOS (arm64, x86_64)
- Linux (x86_64, aarch64)
- Windows (x86_64)
This server requires a Perplexity AI account. You need to extract two authentication tokens from your browser cookies:
- Log in to perplexity.ai in your browser
- Open Developer Tools (F12 or right-click → Inspect)
- Go to Application → Cookies → `https://www.perplexity.ai`
- Copy the values of:
  - `__Secure-next-auth.session-token` → use as `PERPLEXITY_SESSION_TOKEN`
  - `next-auth.csrf-token` → use as `PERPLEXITY_CSRF_TOKEN`
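Once copied, the token values can be kept in your shell environment and referenced from the client setup commands below. A minimal sketch (placeholder values, not real tokens):

```shell
# Placeholder values: replace with the cookie values copied from DevTools.
export PERPLEXITY_SESSION_TOKEN="your-session-token"
export PERPLEXITY_CSRF_TOKEN="your-csrf-token"
```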
- `PERPLEXITY_SESSION_TOKEN` (optional): Perplexity session token (`next-auth.session-token` cookie). Required for `perplexity_research`, `perplexity_reason`, and file attachments.
- `PERPLEXITY_CSRF_TOKEN` (optional): Perplexity CSRF token (`next-auth.csrf-token` cookie). Required for `perplexity_research`, `perplexity_reason`, and file attachments.
- `PERPLEXITY_ASK_MODEL` (optional, requires tokens): Model for `perplexity_ask`. Valid values:
  - `turbo` (default for tokenless)
  - `pro-auto` (default for authenticated)
  - `sonar`
  - `gpt-5.4`
  - `claude-4.6-sonnet`
  - `nemotron-3-super`
- `PERPLEXITY_REASON_MODEL` (optional, requires tokens): Model for `perplexity_reason`. Valid values:
  - `gemini-3.1-pro` (default)
  - `gpt-5.4-thinking`
  - `claude-4.6-sonnet-thinking`
- `PERPLEXITY_INCOGNITO` (optional, default: `true`): Whether requests should use Perplexity's incognito mode. Valid values: `true` or `false`
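As a sketch, a configuration combining these variables might look like this (the model names are picked from the valid values above; adjust to taste):

```json
{
  "mcpServers": {
    "perplexity": {
      "command": "npx",
      "args": ["-y", "perplexity-web-api-mcp"],
      "env": {
        "PERPLEXITY_SESSION_TOKEN": "your-session-token",
        "PERPLEXITY_CSRF_TOKEN": "your-csrf-token",
        "PERPLEXITY_ASK_MODEL": "sonar",
        "PERPLEXITY_REASON_MODEL": "gpt-5.4-thinking",
        "PERPLEXITY_INCOGNITO": "false"
      }
    }
  }
}
```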
claude mcp add perplexity --env PERPLEXITY_SESSION_TOKEN="your-session-token" --env PERPLEXITY_CSRF_TOKEN="your-csrf-token" -- npx -y perplexity-web-api-mcp

I recommend using the one-click install badge at the top of this README for Cursor.
For manual setup, all these clients use the same `mcpServers` format:
| Client | Config File |
|---|---|
| Cursor | ~/.cursor/mcp.json |
| Claude Desktop | claude_desktop_config.json |
| Windsurf | ~/.codeium/windsurf/mcp_config.json |
{
"mcpServers": {
"perplexity": {
"command": "npx",
"args": ["-y", "perplexity-web-api-mcp"],
"env": {
"PERPLEXITY_SESSION_TOKEN": "your-session-token",
"PERPLEXITY_CSRF_TOKEN": "your-csrf-token"
}
}
}
}

Add the following to `context_servers` in your settings file:
{
"context_servers": {
"perplexity": {
"command": "npx",
"args": ["-y", "perplexity-web-api-mcp"],
"env": {
"PERPLEXITY_SESSION_TOKEN": "your-session-token",
"PERPLEXITY_CSRF_TOKEN": "your-csrf-token"
}
}
}
}

I recommend using the one-click install badge at the top of this README for VS Code, or for manual setup, add to `.vscode/mcp.json`:
{
"servers": {
"perplexity": {
"type": "stdio",
"command": "npx",
"args": ["-y", "perplexity-web-api-mcp"],
"env": {
"PERPLEXITY_SESSION_TOKEN": "your-session-token",
"PERPLEXITY_CSRF_TOKEN": "your-csrf-token"
}
}
}
}

codex mcp add perplexity --env PERPLEXITY_SESSION_TOKEN="your-session-token" --env PERPLEXITY_CSRF_TOKEN="your-csrf-token" -- npx -y perplexity-web-api-mcp

Source build instructions, including optional cargo features, are documented in CONTRIBUTING.md.
Most clients can be configured manually using the same `mcpServers` wrapper in their configuration file (as Cursor does). If your client doesn't accept this format, check its documentation for the correct wrapper.
A pre-built multi-arch image (linux/amd64, linux/arm64) is available on Docker Hub:
docker run -d \
-p 8080:8080 \
-e PERPLEXITY_SESSION_TOKEN="your-session-token" \
-e PERPLEXITY_CSRF_TOKEN="your-csrf-token" \
  mishamyrt/perplexity-web-api-mcp

The container exposes the MCP server via Streamable HTTP at `http://localhost:8080/mcp`.
The Docker image is built with `--features streamable-http`; local/source builds need the same feature if you want HTTP transport.
Configure your MCP client to connect:
{
"mcpServers": {
"perplexity": {
"url": "http://localhost:8080/mcp"
}
}
}

| Variable | Default | Description |
|---|---|---|
| `MCP_TRANSPORT` | `streamable-http` | Transport mode: `stdio` or `streamable-http` (requires the `streamable-http` cargo feature) |
| `MCP_HOST` | `0.0.0.0` | Host address to bind |
| `MCP_PORT` | `8080` | Port to listen on |
The authentication tokens, model variables, and incognito flag described above work the same way in Docker.
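For example, to expose the server on a different port, override `MCP_PORT` and adjust the port mapping accordingly (values illustrative):

```shell
docker run -d \
  -p 9090:9090 \
  -e MCP_PORT=9090 \
  -e PERPLEXITY_SESSION_TOKEN="your-session-token" \
  -e PERPLEXITY_CSRF_TOKEN="your-csrf-token" \
  mishamyrt/perplexity-web-api-mcp
```

The client URL then becomes `http://localhost:9090/mcp`.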
Quick web search using the `turbo` model. Returns only links, titles, and snippets; no generated answer.
Best for: Finding relevant URLs and sources quickly.
Parameters:
- `query` (required): The search query or question
- `sources` (optional): Array of sources: `"web"`, `"scholar"`, `"social"`. Defaults to `["web"]`
- `language` (optional): Language code, e.g., `"en-US"`. Defaults to `"en-US"`
File attachments are not supported by this tool.
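For reference, a `perplexity_search` call with every parameter set might look like this (argument values are illustrative):

```json
{
  "query": "rust async runtime comparison",
  "sources": ["web", "scholar"],
  "language": "en-US"
}
```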
Ask Perplexity AI a question and get a comprehensive answer with source citations. By default uses the best model (Pro auto mode) when authentication tokens are provided, or `turbo` in tokenless mode. Can be configured via `PERPLEXITY_ASK_MODEL`.
Best for: Getting detailed answers to questions with web context.
Parameters: Same as `perplexity_search`, plus:

- `files` (optional, requires tokens): Array of file attachments for document analysis. See File Attachments.
Advanced reasoning and problem-solving. By default uses Perplexity's `sonar-reasoning` model, but can be configured via `PERPLEXITY_REASON_MODEL`.
Best for: Logical problems, complex analysis, decision-making, and tasks requiring step-by-step reasoning.
Parameters: Same as `perplexity_ask`.
Deep, comprehensive research using Perplexity's `sonar-deep-research` (`pplx_alpha`) model.
Best for: Complex topics requiring detailed investigation, comprehensive reports, and in-depth analysis. Provides thorough analysis with citations.
Parameters: Same as `perplexity_ask`.
`perplexity_ask`, `perplexity_research`, and `perplexity_reason` accept an optional `files` parameter for document analysis. Requires authentication tokens.

Each entry in the `files` array must have:

- `filename` (required): Filename with extension, e.g. `"report.pdf"` or `"notes.txt"`
- `text` (mutually exclusive with `data`): Plain-text file content. Use for `.txt`, `.md`, `.csv`, `.json`, source code, etc.
- `data` (mutually exclusive with `text`): Base64-encoded binary content. Use for `.pdf`, `.docx`, images, etc.
Example (plain text):
{
"query": "Summarise the key points",
"files": [
{
"filename": "notes.txt",
"text": "Meeting notes: Q1 revenue up 12%..."
}
]
}

Example (binary file, PDF):
{
"query": "What does this contract say about termination?",
"files": [
{
"filename": "contract.pdf",
"data": "JVBERi0xLjQK..."
}
]
}

Multiple files can be passed in a single request; they are uploaded to Perplexity's storage in parallel before the query is sent.
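To produce the `data` value for a binary file, any base64 encoder works. A sketch using coreutils (`-w0` disables line wrapping; on macOS use `base64 -i file.pdf` instead):

```shell
# Build the "data" payload for a file attachment.
printf 'fake pdf bytes' > /tmp/contract.pdf   # stand-in for a real PDF
DATA=$(base64 -w0 /tmp/contract.pdf)
printf '{"filename":"contract.pdf","data":"%s"}\n' "$DATA"
```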
`perplexity_search` returns only web results:
{
"web_results": [
{
"name": "Source name",
"url": "https://example.com",
"snippet": "Source snippet"
}
]
}

`perplexity_ask`, `perplexity_research`, and `perplexity_reason` return a full response:
{
"answer": "The generated answer text...",
"web_results": [
{
"name": "Source name",
"url": "https://example.com",
"snippet": "Source snippet"
}
],
"follow_up": {
"backend_uuid": "uuid-for-follow-up-queries",
"attachments": []
}
}

MIT