PeerCat MCP Server
Connect any MCP-compatible LLM or AI application to PeerCat's AI capabilities using the Model Context Protocol.
Important: All Endpoints Disabled by Default
For your protection, all endpoints are disabled by default. LLMs can be unpredictable and might make expensive API calls without your consent. You must explicitly enable each endpoint and model you want to use, and set spending limits before the MCP server will make any requests.
What is MCP?
The Model Context Protocol (MCP) is an open standard that allows AI applications to connect to external tools and data sources. The PeerCat MCP server enables any MCP-compatible LLM or AI agent to access PeerCat's image generation, text AI, and research capabilities.
MCP is supported by a growing ecosystem of AI tools including Claude Desktop, Claude Code, Cursor, Windsurf, and many other LLM-powered applications. Any application that implements the MCP standard can connect to PeerCat's services through this server.
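Because the server is launched like any other MCP stdio server, you can also connect to it programmatically. Below is a minimal sketch using the official `@modelcontextprotocol/sdk` TypeScript client; the client name and logging are illustrative, and it assumes the server is started over stdio exactly as in the configuration shown in step 3 below.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the PeerCat MCP server as a subprocess and talk to it over stdio.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["@peercat/mcp-server"],
  env: { PEERCAT_API_KEY: "your-api-key" }, // same variable the config below uses
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// List whatever tools the server exposes (i.e. the endpoints enabled in its config).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```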
Installation
1. Install the MCP Server
```bash
npm install -g @peercat/mcp-server
```

2. Get your API Key
Create an account at PeerCat and generate an API key from your dashboard.
3. Configure Your MCP Client
Add the PeerCat MCP server to your client's configuration. The exact file location varies by application:
- Claude Desktop: `claude_desktop_config.json`
- Claude Code: `.claude/settings.json`, or via the `/mcp` command
- Cursor: `.cursor/mcp.json`
- Other clients: check your application's MCP configuration documentation
```json
{
  "mcpServers": {
    "peercat": {
      "command": "npx",
      "args": ["@peercat/mcp-server"],
      "env": {
        "PEERCAT_API_KEY": "your-api-key"
      },
      "config": {
        "endpoints": {
          "image": {
            "enabled": false,
            "models": {
              "flux-1.1-pro": false,
              "flux-dev": false,
              "flux-schnell": false,
              "sdxl": false,
              "dall-e-3": false,
              "imagen-4": false
            }
          },
          "text": {
            "enabled": false,
            "models": {
              "claude-opus-4.5": false,
              "claude-sonnet-4.5": false,
              "claude-haiku-4.5": false,
              "gpt-4o": false,
              "gpt-4o-mini": false,
              "gemini-2.5-flash": false
            }
          },
          "research": {
            "enabled": false
          }
        },
        "spending": {
          "dailyLimit": 0,
          "weeklyLimit": 0,
          "alertAt": 0,
          "hardStop": true
        }
      }
    }
  }
}
```
4. Enable the endpoints you need
Edit the `config` block to explicitly enable only the models you want your LLM to use:
```json
{
"endpoints": {
"image": {
"enabled": true,
"models": {
"flux-dev": true,
"flux-schnell": true,
"sdxl": true
}
},
"text": {
"enabled": true,
"models": {
"claude-haiku-4.5": true,
"gpt-4o-mini": true,
"gemini-2.5-flash": true
}
}
},
"spending": {
"dailyLimit": 10,
"weeklyLimit": 50,
"alertAt": 40,
"hardStop": true
}
}
```

Configuration Reference
| Setting | Default | Description |
|---|---|---|
| endpoints.image.enabled | false | Master toggle for image generation |
| endpoints.image.models.* | false | Individual image model toggles |
| endpoints.text.enabled | false | Master toggle for text/chat AI |
| endpoints.text.models.* | false | Individual text model toggles |
| endpoints.research.enabled | false | Toggle for deep research |
| spending.dailyLimit | 0 | Max daily spend in USD (0 = disabled) |
| spending.weeklyLimit | 0 | Max weekly spend in USD (0 = disabled) |
| spending.alertAt | 0 | Alert when spend reaches this USD amount (0 = disabled) |
| spending.hardStop | true | Block all calls once a limit is reached |
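For reference, the `config` block corresponds roughly to the following TypeScript shape. This is a hypothetical sketch derived from the table above, not an official schema published by PeerCat:

```typescript
// Hypothetical shape of the "config" block, derived from the reference table.
interface PeerCatConfig {
  endpoints: {
    image?: { enabled: boolean; models?: Record<string, boolean> };
    text?: { enabled: boolean; models?: Record<string, boolean> };
    research?: { enabled: boolean };
  };
  spending: {
    dailyLimit: number;  // max daily spend in USD, 0 = disabled
    weeklyLimit: number; // max weekly spend in USD, 0 = disabled
    alertAt: number;     // alert threshold in USD, 0 = disabled
    hardStop: boolean;   // block all calls once a limit is reached
  };
}
```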
Why All Disabled by Default?
Prevent Unexpected Costs
LLMs can be unpredictable. An AI assistant might decide to generate 100 images or make expensive API calls to Opus 4.5 without your explicit consent. Keeping everything disabled by default protects you from such surprises.
Granular Model Control
Enable only the models you need. Keep expensive models like Opus 4.5 disabled while allowing cheaper alternatives like Haiku, as sketched below; any model you leave out of the config simply stays disabled. You keep full control over what your LLM can access.
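For instance, using the hypothetical `PeerCatConfig` shape from the reference section, a cost-conscious setup might enable only the cheaper text models (model names are taken from the default configuration; anything omitted stays disabled):

```typescript
// Allow only the cheaper text models; Opus stays off because it is never enabled.
const textOnly: PeerCatConfig = {
  endpoints: {
    text: {
      enabled: true,
      models: {
        "claude-haiku-4.5": true,
        "gpt-4o-mini": true,
        // "claude-opus-4.5" is omitted, so it remains disabled by default
      },
    },
  },
  spending: { dailyLimit: 5, weeklyLimit: 20, alertAt: 15, hardStop: true },
};
```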
Spending Limits
Set daily and weekly spending limits. With `hardStop: true`, the MCP server blocks all further calls the moment a limit is reached rather than merely warning you; conceptually it behaves like the sketch below.
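This is a minimal illustration of the documented semantics, assuming a running spend total is tracked elsewhere; it is not PeerCat's actual implementation, and `gateCall` is a hypothetical name:

```typescript
// Hypothetical gate run before each API call, using the PeerCatConfig shape above.
function gateCall(spentTodayUsd: number, cfg: PeerCatConfig["spending"]): void {
  if (cfg.alertAt > 0 && spentTodayUsd >= cfg.alertAt) {
    console.warn(`Spend is $${spentTodayUsd}, past the alert threshold of $${cfg.alertAt}`);
  }
  if (cfg.hardStop && cfg.dailyLimit > 0 && spentTodayUsd >= cfg.dailyLimit) {
    // With hardStop, the call is rejected outright instead of merely warned about.
    throw new Error(`Daily spending limit of $${cfg.dailyLimit} reached; call blocked`);
  }
}
```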
Explicit Opt-In
You must consciously decide which capabilities to enable. This ensures you understand and accept the potential costs before any LLM can make API calls through PeerCat.