DeployStack Satellite is an edge worker service that manages per-user MCP server instances with dual deployment support: HTTP proxy for external endpoints and stdio subprocess for local MCP servers. Each team member gets their own isolated instance with merged configuration (Template + Team + User).

Technical Overview

Edge Worker Pattern

Satellites operate as edge workers similar to GitHub Actions runners, providing:
  • MCP Transport Protocols: SSE, Streamable HTTP, Direct HTTP communication
  • Per-User Instance Management: Each team member has their own MCP server instance (implemented)
  • Dual MCP Server Management: HTTP proxy + stdio subprocess support (implemented)
  • Team and User Isolation: Per-user process isolation with independent status tracking (implemented)
  • OAuth 2.1 Resource Server: Token introspection with Backend for team and user context (implemented)
  • Backend Polling Communication: Outbound-only, firewall-friendly
  • Real-Time Event System: Immediate satellite → backend event emission with automatic batching
  • Process Lifecycle Management: Per-user spawn, monitor, terminate with independent lifecycles (implemented)
  • Background Jobs System: Cron-like recurring tasks with automatic error handling

Current Implementation Architecture

MCP SDK Transport Layer

The satellite uses the official @modelcontextprotocol/sdk for all MCP communication, on both the server side (toward MCP clients) and the client side (toward managed MCP servers):
┌─────────────────────────────────────────────────────────────────────────────────┐
│                        Official MCP SDK Implementation                         │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────────────┐   │
│  │                        MCP SDK Server                                   │   │
│  │                                                                         │   │
│  │  • StreamableHTTPServerTransport    • Standard JSON-RPC handling       │   │
│  │  • Automatic session management     • Built-in error responses         │   │
│  │  • Protocol 2025-03-26 compliance   • SSE streaming support            │   │
│  └─────────────────────────────────────────────────────────────────────────┘   │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────────────┐   │
│  │                     MCP Client Integration                              │   │
│  │                                                                         │   │
│  │  • StreamableHTTPClientTransport    • External server discovery        │   │
│  │  • Automatic connection cleanup     • Tool discovery caching           │   │
│  │  • Standard MCP method support      • Process communication            │   │
│  └─────────────────────────────────────────────────────────────────────────┘   │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────────────┐   │
│  │                    Foundation Infrastructure                            │   │
│  │                                                                         │   │
│  │  • Fastify HTTP Server with JSON Schema validation                     │   │
│  │  • Pino structured logging with operation tracking                     │   │
│  │  • TypeScript + Webpack build system                                   │   │
│  │  • Environment configuration with .env support                        │   │
│  └─────────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────────┘

MCP Transport Endpoints

Active Endpoints:
  • GET /mcp - Establish SSE stream via MCP SDK
  • POST /mcp - Send JSON-RPC messages via MCP SDK
  • DELETE /mcp - Session termination via MCP SDK
Transport Protocol Support:
MCP Client                    Satellite (MCP SDK)
    │                            │
    │──── POST /mcp ────────────▶│  (Initialize connection)
    │                            │
    │◀─── Session headers ──────│  (Session established)
    │                            │
    │──── POST /mcp ────────────▶│  (JSON-RPC tools/list)
    │                            │
    │◀─── 2 meta-tools ─────────│  (Hierarchical router)

Core SDK Components

MCP Server Wrapper:
  • Official SDK Server integration with Fastify
  • Standard MCP protocol method handlers
  • Automatic session and transport management
  • Integration with existing tool discovery and process management
Client Communication:
  • StreamableHTTPClientTransport for external server communication
  • Automatic connection establishment and cleanup
  • Standard MCP method execution (listTools, callTool)
  • Built-in error handling and retry logic
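To illustrate the client side, here is a minimal sketch of external-server communication using the SDK's StreamableHTTPClientTransport. The function name and error handling are illustrative, not the satellite's actual code:
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

// Connect to an external MCP server, list its tools, and clean up the connection.
async function discoverExternalTools(endpoint: string) {
  const client = new Client({ name: 'satellite', version: '1.0.0' });
  const transport = new StreamableHTTPClientTransport(new URL(endpoint));
  await client.connect(transport);
  try {
    const { tools } = await client.listTools(); // standard MCP tools/list
    return tools;
  } finally {
    await client.close(); // automatic connection cleanup
  }
}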

MCP Protocol Implementation

Supported MCP Methods:
  • initialize - MCP session initialization (SDK automatic)
  • notifications/initialized - Client initialization complete
  • tools/list - List available meta-tools (hierarchical router: 2 tools only)
  • tools/call - Execute meta-tools or route to actual MCP servers
  • resources/list - List available resources (returns empty array)
  • resources/templates/list - List resource templates (returns empty array)
  • prompts/list - List available prompts (returns empty array)
Hierarchical Router: The satellite exposes only 2 meta-tools to MCP clients (discover_mcp_tools and execute_mcp_tool) instead of all available tools. This solves the MCP context window consumption problem by reducing token usage by 95%+. See Hierarchical Router Implementation for details.
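A tools/list response therefore carries only the two meta-tools. Abridged illustration; the descriptions are placeholders, and input schemas are omitted:
{
  "jsonrpc": "2.0",
  "id": "1",
  "result": {
    "tools": [
      { "name": "discover_mcp_tools", "description": "Browse and search tools from the user's online MCP server instances" },
      { "name": "execute_mcp_tool", "description": "Execute a discovered tool on its MCP server" }
    ]
  }
}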
For detailed information about internal tool discovery and caching, see Tool Discovery Implementation.
Error Handling:
  • Standard JSON-RPC 2.0 compliant error responses via SDK
  • Automatic HTTP status code mapping
  • Structured error logging with operation tracking
  • Built-in session validation and error reporting
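For example, calling an unsupported method yields a standard JSON-RPC 2.0 error object (illustrative payload):
{
  "jsonrpc": "2.0",
  "id": "1",
  "error": { "code": -32601, "message": "Method not found" }
}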

Planned Full Architecture

Three-Tier System Design

┌─────────────────────────────────────────────────────────────────────────────────┐
│                        MCP Client Layer                                        │
│                     (VS Code, Claude, etc.)                                    │
│                                                                                 │
│  Connects via: SSE, Streamable HTTP, Direct HTTP Tools                        │
└─────────────────────────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────────────────────────┐
│                      Satellite Layer                                           │
│                   (Edge Processing)                                            │
│                                                                                 │
│  ┌─────────────────────────────────────────┐                                   │
│  │        Global Satellite                 │                                   │
│  │  (Operated by DeployStack Team)         │                                   │
│  │      (Serves All Teams)                 │                                   │
│  └─────────────────────────────────────────┘                                   │
│                                                                                 │
│  ┌─────────────────────────────────────────┐                                   │
│  │        Team Satellite                   │                                   │
│  │   (Customer-Deployed)                   │                                   │
│  │   (Serves Single Team)                  │                                   │
│  └─────────────────────────────────────────┘                                   │
└─────────────────────────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────────────────────────┐
│                       Backend Layer                                            │
│                  (Central Management)                                          │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────────────┐   │
│  │                    DeployStack Backend                                  │   │
│  │                  (cloud.deploystack.io)                                │   │
│  │                                                                         │   │
│  │  • Command orchestration    • Configuration management                 │   │
│  │  • Status monitoring        • Team & role management                   │   │
│  │  • Usage analytics          • Security & compliance                    │   │
│  └─────────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────────┘

Satellite Internal Architecture (Planned)

Each satellite instance will contain five core components:
┌─────────────────────────────────────────────────────────────────┐
│                    Satellite Instance                           │
│                                                                 │
│  ┌─────────────────┐    ┌─────────────────┐                   │
│  │  HTTP Proxy     │    │  MCP Server     │                   │
│  │    Router       │    │    Manager      │                   │
│  │                 │    │                 │                   │
│  │ • Team-aware    │    │ • Process       │                   │
│  │ • OAuth 2.1     │    │   Lifecycle     │                   │
│  │ • Load Balance  │    │ • stdio Comm    │                   │
│  └─────────────────┘    └─────────────────┘                   │
│                                                                 │
│  ┌─────────────────┐    ┌─────────────────┐                   │
│  │  Team Resource  │    │   Backend       │                   │
│  │    Manager      │    │ Communicator    │                   │
│  │                 │    │                 │                   │
│  │ • Namespaces    │    │ • HTTP Polling  │                   │
│  │ • cgroups       │    │ • Config Sync   │                   │
│  │ • Isolation     │    │ • Status Report │                   │
│  └─────────────────┘    └─────────────────┘                   │
│                                                                 │
│  ┌─────────────────────────────────────────┐                   │
│  │        Communication Manager            │                   │
│  │                                         │                   │
│  │ • JSON-RPC stdio    • HTTP Proxy       │                   │
│  │ • Process IPC       • Client Routing   │                   │
│  └─────────────────────────────────────────┘                   │
└─────────────────────────────────────────────────────────────────┘

Deployment Models

Global Satellites

Operated by DeployStack Team:
  • Infrastructure: Cloud-hosted (AWS, GCP, Azure)
  • Scope: Serve all teams with resource isolation
  • Scaling: Auto-scaling based on demand
  • Management: Centralized by DeployStack operations
  • Use Case: Teams wanting shared infrastructure
Architecture Benefits:
  • Zero Installation: URL-based configuration
  • Instant Availability: No setup or deployment required
  • Automatic Updates: Invisible to users
  • Global Scale: Multi-region deployment

Team Satellites

Customer-Deployed:
  • Infrastructure: Customer’s corporate networks
  • Scope: Single team exclusive access
  • Scaling: Customer-controlled resources
  • Management: Team administrators
  • Use Case: Internal resource access, compliance requirements
Architecture Benefits:
  • Internal Access: Company databases, APIs, file systems
  • Data Sovereignty: Data never leaves corporate network
  • Complete Control: Customer owns infrastructure
  • Compliance Ready: Meets enterprise security requirements

Communication Patterns

Client-to-Satellite Communication

Multiple Transport Protocols:
  • SSE (Server-Sent Events): Real-time streaming with session management
  • Streamable HTTP: Chunked responses with optional sessions
  • Direct HTTP Tools: Standard REST API calls
Current Implementation:
MCP Client                    Satellite
    │                            │
    │──── GET /sse ─────────────▶│  (Establish SSE connection)
    │                            │
    │◀─── event: endpoint ──────│  (Session URL + heartbeat)
    │                            │
    │──── POST /message ────────▶│  (JSON-RPC via session)
    │                            │
    │◀─── Response via SSE ─────│  (Stream JSON-RPC response)
Session Management:
  • Session ID: 32-byte cryptographically secure identifier
  • Timeout: 30-minute automatic cleanup
  • Activity Tracking: Updated on each message
  • State Management: Client info and initialization status
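A minimal sketch of this scheme; the in-memory store shape is illustrative, not the satellite's actual implementation:
import { randomBytes } from 'node:crypto';

// 32-byte, base64url-encoded session IDs with a 30-minute inactivity timeout.
const SESSION_TIMEOUT_MS = 30 * 60 * 1000;
const sessions = new Map<string, { lastActivity: number }>();

function createSession(): string {
  const id = randomBytes(32).toString('base64url');
  sessions.set(id, { lastActivity: Date.now() });
  return id;
}

function touchSession(id: string): boolean {
  const session = sessions.get(id);
  if (!session || Date.now() - session.lastActivity > SESSION_TIMEOUT_MS) {
    sessions.delete(id); // expired or unknown: automatic cleanup
    return false;
  }
  session.lastActivity = Date.now(); // activity tracking: updated on each message
  return true;
}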

Satellite-to-Backend Communication

HTTP Polling Pattern:
Satellite                                       Backend
    │                                               │
    │──── GET /api/satellites/{id}/commands ──────▶│  (Poll for commands)
    │                                               │
    │◀─── Commands Response ───────────────────────│  (Configuration, tasks)
    │                                               │
    │──── POST /api/satellites/{id}/heartbeat ────▶│  (Report status, metrics)
    │                                               │
    │◀─── Acknowledgment ──────────────────────────│  (Confirm receipt)
Communication Features:
  • Outbound Only: Firewall-friendly
  • Priority-Based Polling: Four modes (immediate/high/normal/slow) with automatic transitions
  • Command Queue: Priority-based task processing with expiration and correlation IDs
  • Status Reporting: Real-time health and metrics every 30 seconds
  • Configuration Sync: Dynamic MCP server configuration updates
  • Error Recovery: Exponential backoff with maximum 5-minute intervals
  • 3-Second Response Time: Immediate priority commands enable near real-time responses
For complete implementation details, see Backend Polling Implementation.
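A simplified sketch of the polling loop with the backoff behavior described above. The endpoint path is taken from the diagram; the intervals and command dispatching are illustrative:
// Outbound-only polling with exponential backoff capped at 5 minutes.
async function pollLoop(baseUrl: string, satelliteId: string) {
  let delayMs = 30_000; // illustrative "normal" interval
  for (;;) {
    try {
      const res = await fetch(`${baseUrl}/api/satellites/${satelliteId}/commands`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const commands = await res.json();
      // ...dispatch commands to the priority-based command queue here...
      delayMs = 30_000; // reset on success
    } catch {
      delayMs = Math.min(delayMs * 2, 300_000); // exponential backoff, 5-minute cap
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}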

Real-Time Event System

The satellite emits typed events for status changes, logs, and tool metadata. Events enable real-time monitoring without polling.
Difference from Heartbeat:
  • Heartbeat (every 30s): Aggregate metrics, system health, resource usage
  • Events (immediate): Point-in-time status updates, precise timestamps
See Event Emission for complete event types, payloads, and batching configuration.
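As an illustration only (the event type name and field layout here are hypothetical; see Event Emission for the real schema), a status-change event might look like:
{
  "type": "instance.status_changed",
  "timestamp": "2025-01-01T12:00:00.000Z",
  "user_id": "user_123",
  "data": { "installation_id": "inst_456", "status": "online" }
}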

Status Tracking System

The satellite tracks per-user MCP server instance health through a 12-state status system that drives tool availability and automatic recovery.
Per-User Status Tracking:
  • Status Location: mcpServerInstances table (per user)
  • No Installation Status: Status fields completely removed from mcpServerInstallations
  • Independent Tracking: Each team member has independent status for each MCP server
  • User-Specific Filtering: Users see only tools from their OWN instances that are online
Status Values:
  • User configuration: awaiting_user_config (user has not yet configured required user-level fields)
  • Instance lifecycle: provisioning, command_received, connecting, discovering_tools, syncing_tools
  • Healthy state: online (tools available)
  • Configuration changes: restarting
  • Failure states: offline, error, requires_reauth, permanently_failed
Status Integration:
  • Tool Filtering: Tools from user’s non-online instances hidden from discovery
  • Auto-Recovery: Offline instances auto-recover when responsive
  • Event Emission: Status changes emitted immediately to backend with user_id field
  • Backend Filtering: Instances with awaiting_user_config NOT sent to satellite (prevents spawn crashes)
See Status Tracking for complete status lifecycle and transitions. See Instance Lifecycle for per-user instance creation and management.
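The 12 states map naturally onto a TypeScript union. This sketch (not the satellite's actual types) also captures the tool-filtering rule above:
type InstanceStatus =
  | 'awaiting_user_config'                                             // user configuration
  | 'provisioning' | 'command_received' | 'connecting'
  | 'discovering_tools' | 'syncing_tools'                              // instance lifecycle
  | 'online'                                                           // healthy: tools available
  | 'restarting'                                                       // configuration changes
  | 'offline' | 'error' | 'requires_reauth' | 'permanently_failed';    // failure states

// Users see only tools from their OWN instances that are online.
const toolsVisible = (status: InstanceStatus): boolean => status === 'online';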

Log Capture System

The satellite captures and batches two types of logs for debugging and monitoring: server logs (stderr output) and request logs (tool execution with full request/response data). See Log Capture for buffering implementation, batching configuration, backend storage limits, and privacy controls.

Security Architecture

Current Security (No Authentication)

Session-Based Isolation:
  • Cryptographic Session IDs: 32-byte secure identifiers
  • Session Timeout: 30-minute automatic cleanup
  • Activity Tracking: Prevents session hijacking
  • Error Handling: Secure error responses

Security Features (Implemented)

Per-User Instance Isolation:
  • Process Isolation: Each user’s instance runs in isolated process
  • Independent Lifecycle: Terminating one user’s process doesn’t affect teammates
  • User-Specific Config: Merged Template + Team + User configuration per instance
  • Status Isolation: Each user’s instance has independent status tracking
Team and User Separation:
  • OAuth Token Context: Team ID AND User ID extracted from tokens
  • Instance Resolution: Tools route to user’s specific instance (not teammates)
  • Database Separation: mcpServerInstances table tracks per-user instances
Resource Management (stdio processes):
  • nsjail Isolation: PID, network, filesystem isolation in production
  • Resource Quotas: 100MB RAM, 60s CPU time per process
  • Development Mode: Direct spawn() without isolation for cross-platform development
Authentication & Authorization:
  • OAuth 2.1 Resource Server: Backend token validation with 5-minute caching
  • User Context: Automatic user and team resolution from tokens
  • Per-User Access Control: Users only access their OWN instances
See Team Isolation for complete implementation details.
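A hedged sketch of token introspection with the 5-minute cache; the introspection route and response field names are assumptions, not the backend's documented API:
// Cache validated tokens for 5 minutes to avoid introspecting on every request.
type TokenContext = { teamId: string; userId: string; expiresAt: number };
const tokenCache = new Map<string, TokenContext>();

async function introspectToken(token: string, backendUrl: string): Promise<TokenContext> {
  const cached = tokenCache.get(token);
  if (cached && cached.expiresAt > Date.now()) return cached;

  // Hypothetical introspection route; see Team Isolation for the real contract.
  const res = await fetch(`${backendUrl}/oauth/introspect`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ token }),
  });
  if (!res.ok) throw new Error('Token introspection failed');

  const data = await res.json();
  const context = { teamId: data.team_id, userId: data.user_id, expiresAt: Date.now() + 5 * 60_000 };
  tokenCache.set(token, context);
  return context;
}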

MCP Server Management

Dual MCP Server Support

stdio Subprocess Servers:
  • Per-User Instances: Each team member has their own process for each MCP server
  • Local Execution: MCP servers as Node.js child processes
  • JSON-RPC Communication: Full MCP protocol (2024-11-05) over stdin/stdout
  • Process Lifecycle: Per-user spawn, monitor, auto-restart (max 3 attempts), terminate
  • Instance Isolation: Processes tracked by team_id AND user_id with independent lifecycles
  • ProcessId Format: {server_slug}-{team_slug}-{user_slug}-{installation_id}
  • Tool Discovery: Automatic tool caching with per-user namespacing
  • Resource Limits: nsjail in production (100MB RAM, 60s CPU, 50 processes per user)
  • Development Mode: Plain spawn() on all platforms for easy debugging
HTTP Proxy Servers:
  • External Endpoints: Proxy to remote MCP servers
  • Load Balancing: Distribute requests across instances
  • Health Monitoring: Endpoint availability checks
  • Tool Discovery: Automatic at startup from remote endpoints
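To make the stdio subprocess path above concrete, here is a development-mode sketch (plain spawn(), no nsjail). The function shape and log handling are illustrative; only the processId format comes from this doc:
import { spawn } from 'node:child_process';

// Spawn a per-user stdio MCP server instance using the documented processId format.
function spawnInstance(
  serverSlug: string, teamSlug: string, userSlug: string, installationId: string,
  command: string, args: string[], env: Record<string, string>,
) {
  const processId = `${serverSlug}-${teamSlug}-${userSlug}-${installationId}`;
  const child = spawn(command, args, { env: { ...process.env, ...env } });
  // stderr feeds the log capture system; JSON-RPC flows over stdin/stdout.
  child.stderr.on('data', (chunk) => console.error(`[${processId}]`, chunk.toString().trim()));
  return { processId, child };
}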

Process Management

Lifecycle Operations:
Configuration → Spawn → Monitor → Health Check → Restart/Terminate
      │           │        │          │              │
      │           │        │          │              │
   Backend     Child     Metrics   Failure      Cleanup
   Command    Process   Collection Detection   Resources
Health Monitoring:
  • Process Health: CPU, memory, responsiveness
  • MCP Protocol: Tool availability, response times
  • Automatic Recovery: Restart failed processes
  • Resource Limits: Enforce team quotas
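A simplified sketch of the automatic-recovery rule above combined with the 3-attempt restart cap from the stdio section; the status transition on final failure is an assumption:
// Restart a failed process up to 3 times, then give up.
async function supervise(
  startProcess: () => Promise<number>,  // resolves with the process exit code
  setStatus: (status: string) => void,  // hypothetical status-tracking hook
) {
  let restarts = 0;
  for (;;) {
    const exitCode = await startProcess();
    if (exitCode === 0) return;         // clean shutdown: no restart
    if (restarts >= 3) {
      setStatus('permanently_failed');  // assumption: terminal state after 3 failed restarts
      return;
    }
    restarts += 1;
    setStatus('restarting');
  }
}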

Technical Implementation Details

Current Implementation Specifications

  • Session ID Length: 32 bytes base64url encoded
  • Session Timeout: 30 minutes of inactivity
  • JSON-RPC Version: 2.0 strict compliance
  • HTTP Framework: Fastify with JSON Schema validation
  • Logging: Pino structured logging with operation tracking
  • Error Handling: Complete HTTP status code mapping

Planned Resource Jailing Specifications

  • CPU Limit: 0.1 cores per MCP server process
  • Memory Limit: 100MB RAM per MCP server process
  • Process Timeout: 5-minute idle timeout for automatic cleanup
  • Isolation Method: Linux namespaces + cgroups v2
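These planned quotas map directly onto cgroups v2 interface files. A Linux-only sketch: the paths assume a delegated cgroup subtree and are not the satellite's actual implementation:
import { mkdirSync, writeFileSync } from 'node:fs';

// Apply the planned per-process quotas to a freshly created cgroup.
function applyQuotas(instanceId: string, pid: number) {
  const cg = `/sys/fs/cgroup/satellite/${instanceId}`;
  mkdirSync(cg, { recursive: true });
  writeFileSync(`${cg}/memory.max`, '100M');        // 100MB RAM limit
  writeFileSync(`${cg}/cpu.max`, '10000 100000');   // 10ms per 100ms period = 0.1 cores
  writeFileSync(`${cg}/cgroup.procs`, String(pid)); // move the MCP server process in
}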

Technology Stack

  • HTTP Framework: Fastify with @fastify/http-proxy (implemented)
  • Process Communication: stdio JSON-RPC for local MCP servers (implemented)
  • Authentication: OAuth 2.1 Resource Server with token introspection (implemented)
  • Per-User Instance Management: ProcessManager with team and user tracking (implemented)
  • Logging: Pino structured logging
  • Build System: TypeScript + Webpack

Development Setup

Clone and Setup:
git clone https://github.com/deploystackio/deploystack.git
cd deploystack/services/satellite
npm install
cp .env.example .env
npm run dev
Test MCP Transport:
# Test MCP connection
curl -X POST "http://localhost:3001/mcp" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":"1","method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-test","version":"1.0.0"}}}'

# Test SSE streaming
curl -N -H "Accept: text/event-stream" "http://localhost:3001/mcp"
For testing the hierarchical router (tool discovery and execution), see Hierarchical Router Implementation.
MCP Client Configuration:
{
  "mcpServers": {
    "deploystack-satellite": {
      "url": "http://localhost:3001/mcp"
    }
  }
}
Clients that support Streamable HTTP can point directly at the /mcp endpoint as above; stdio-only clients need a local bridge such as mcp-remote ("command": "npx", "args": ["mcp-remote", "http://localhost:3001/mcp"]).

Implementation Status

The satellite service has completed MCP Transport Implementation and Backend Integration. Current implementation provides:
MCP Transport Layer:
  • Complete MCP Transport Layer: SSE, SSE Messaging, Streamable HTTP
  • Session Management: Cryptographically secure with automatic cleanup
  • JSON-RPC 2.0 Compliance: Full protocol support with error handling
Backend Integration:
  • Command Polling Service: Adaptive polling with priority-based modes (immediate/high/normal/slow) and automatic transitions
  • Dynamic Configuration Management: Replaces hardcoded MCP server configurations
  • Command Processing: HTTP MCP server management (spawn/kill/restart/health_check)
  • Heartbeat Service: Process status reporting and system metrics
  • Configuration Sync: Real-time MCP server configuration updates
  • Event System: Real-time event emission with automatic batching (13 event types including tool metadata)
Foundation Infrastructure:
  • HTTP Server: Fastify with Swagger documentation
  • Logging System: Pino with structured logging
  • Build Pipeline: TypeScript compilation and bundling
  • Development Workflow: Hot reload and code quality tools
  • Background Jobs System: Cron-like job management for recurring tasks
For details on the background jobs system, see Background Jobs System.