
Configuration — MCP (Model Context Protocol)

Buggregator includes a built-in MCP server that lets AI assistants query your debugging data directly. Connect Claude Code, Cursor, or any MCP-compatible client to browse events, inspect errors, analyze profiling data, and read variable dumps — without leaving your editor.

Enabling MCP

MCP is disabled by default. Enable it in buggregator.yaml:

```yaml
mcp:
  enabled: true
```

Or via environment variable:

```bash
MCP_ENABLED=true
```

Transport Modes

Unix Socket (Default)

The server listens on a Unix domain socket. This is the recommended mode for local development.

```yaml
mcp:
  enabled: true
  transport: socket
  socket_path: /tmp/buggregator-mcp.sock
```
| Variable | Default | Description |
|----------|---------|-------------|
| `MCP_TRANSPORT` | `socket` | Transport type |
| `MCP_SOCKET_PATH` | `/tmp/buggregator-mcp.sock` | Unix socket path |

HTTP/SSE

For remote access or when Unix sockets are not available, use HTTP transport:

```yaml
mcp:
  enabled: true
  transport: http
  addr: ":8001"
  auth_token: my-secret-token     # Optional bearer token
```
| Variable | Default | Description |
|----------|---------|-------------|
| `MCP_TRANSPORT` | `socket` | Set to `http` |
| `MCP_ADDR` | `:8001` | HTTP listen address |
| `MCP_AUTH_TOKEN` | | Bearer token for authentication (optional) |

When `MCP_AUTH_TOKEN` is set, all requests must include the `Authorization: Bearer <token>` header.
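As an illustration, here is how an HTTP client would attach the token — a minimal Python sketch using the standard library; the `/mcp` path and token value mirror the examples in this guide, and the request is only constructed, not sent:

```python
import urllib.request

# Token matching the server's MCP_AUTH_TOKEN setting.
token = "my-secret-token"

# Build a request to the MCP endpoint with the required bearer header.
req = urllib.request.Request(
    "http://localhost:8001/mcp",
    headers={"Authorization": f"Bearer {token}"},
)

print(req.get_header("Authorization"))  # Bearer my-secret-token
```

Requests missing or mismatching this header are rejected when the token is configured.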

Integration with AI Assistants

Claude Code

Add to your .mcp.json or configure via Claude Code settings:

```json
{
  "mcpServers": {
    "buggregator": {
      "command": "./buggregator",
      "args": ["mcp"]
    }
  }
}
```

The `buggregator mcp` subcommand bridges stdio to the running Buggregator instance via the Unix socket.
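Conceptually, the bridge relays each JSON-RPC message from the client's stdin to the Unix socket and the server's reply back to stdout. A simplified Python sketch of one round trip (illustrative only — the real bridge is built into the Buggregator binary and pumps both directions continuously):

```python
import socket

def bridge_once(sock: socket.socket, message: bytes) -> bytes:
    """Forward one client message to the MCP socket and return the reply.

    The actual bridge loops forever, copying stdin to the socket and
    socket output back to stdout.
    """
    sock.sendall(message)
    return sock.recv(65536)

# Usage (requires a running Buggregator instance with MCP enabled):
# sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# sock.connect("/tmp/buggregator-mcp.sock")
# reply = bridge_once(sock, b'{"jsonrpc":"2.0","id":1,"method":"ping"}')
```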

Note: Make sure Buggregator is running with MCP enabled before connecting.

If Buggregator runs in Docker, you can use the HTTP transport instead:

```json
{
  "mcpServers": {
    "buggregator": {
      "type": "url",
      "url": "http://localhost:8001/mcp"
    }
  }
}
```

Cursor

In Cursor settings, add the MCP server:

  • Command: `./buggregator mcp`
  • Or use the HTTP URL: `http://localhost:8001/mcp`
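For example, a project-level `.cursor/mcp.json` could follow the same shape as the Claude Code config above (a sketch — check your Cursor version's MCP settings for the exact file location):

```json
{
  "mcpServers": {
    "buggregator": {
      "command": "./buggregator",
      "args": ["mcp"]
    }
  }
}
```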

Docker Setup

When running Buggregator in Docker with MCP enabled:

```yaml
services:
  buggregator:
    image: ghcr.io/buggregator/server:latest
    ports:
      - "127.0.0.1:8000:8000"
      - "127.0.0.1:8001:8001"    # MCP HTTP port
    environment:
      MCP_ENABLED: "true"
      MCP_TRANSPORT: http
      MCP_ADDR: ":8001"
```
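HTTP is the simplest transport to reach from outside the container. If you would rather keep the socket transport, one possible approach is to create the socket inside a bind-mounted directory so the host-side bridge can reach it — a sketch under the assumption that `MCP_SOCKET_PATH` is honored both by the server and by the `buggregator mcp` bridge:

```yaml
services:
  buggregator:
    image: ghcr.io/buggregator/server:latest
    ports:
      - "127.0.0.1:8000:8000"
    volumes:
      - ./run:/var/run/buggregator    # shared dir so the host sees the socket
    environment:
      MCP_ENABLED: "true"
      MCP_TRANSPORT: socket
      MCP_SOCKET_PATH: /var/run/buggregator/mcp.sock
```

On the host you would then point the bridge at `./run/mcp.sock`; verify this against your setup, since sharing a socket across the container boundary also depends on file permissions.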

Available Tools

MCP clients can use the following tools to query Buggregator data:

Event Management

| Tool | Description |
|------|-------------|
| `events_list` | List events with optional filtering by type and project. Returns metadata (uuid, type, timestamp, project) without payloads. Supports a `limit` parameter (default 20, max 100). |
| `event_get` | Get a complete event by UUID, including the full payload. |
| `event_delete` | Delete an event by UUID. |
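Under the hood, an MCP client invokes these tools with a standard `tools/call` request. For example, fetching the ten most recent Sentry events might look like this (illustrative request; the argument names `type` and `limit` are assumed from the descriptions above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "events_list",
    "arguments": { "type": "sentry", "limit": 10 }
  }
}
```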

Sentry

| Tool | Description |
|------|-------------|
| `sentry_event` | Get structured details of a Sentry error: message, severity, exception chain with stack traces, environment, platform. Returns clean, AI-friendly data. |

VarDumper

| Tool | Description |
|------|-------------|
| `vardump_get` | Get a variable dump with HTML stripped for clean AI consumption. Returns variable type, label, and plain text representation. |

Profiler

| Tool | Description |
|------|-------------|
| `profiler_summary` | Quick overview: total CPU/wall time/memory, slowest function, biggest memory consumer, most called function. |
| `profiler_top` | Top functions sorted by metric (`cpu`, `wt`, `mu`, `pmu`, `ct`, and their exclusive variants). Returns inclusive and exclusive metrics with percentages. |
| `profiler_call_graph` | Filtered call graph showing function relationships. Use `threshold` and `percentage` to control which nodes are shown. |
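A `tools/call` request with a tool-specific argument works the same way. For instance, asking for the top functions by CPU time could be expressed as (illustrative; `metric` values come from the table above):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "profiler_top",
    "arguments": { "metric": "cpu" }
  }
}
```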

Example Usage in Claude Code

Once connected, you can ask your AI assistant things like:

  • "Show me the latest Sentry errors"
  • "What's the stack trace for this exception?"
  • "Analyze the profiling data and find the slowest functions"
  • "What value was dumped in the last VarDumper event?"