Technical Overview
Edge Worker Pattern
Satellites operate as edge workers, similar to GitHub Actions runners, providing:
- MCP Transport Protocols: SSE, Streamable HTTP, Direct HTTP communication
- Dual MCP Server Management: HTTP proxy + stdio subprocess support (ready for implementation)
- Team Isolation: nsjail sandboxing with built-in resource limits (ready for implementation)
- OAuth 2.1 Resource Server: Token introspection with Backend
- Backend Polling Communication: Outbound-only, firewall-friendly
- Real-Time Event System: Immediate satellite → backend event emission with automatic batching
- Process Lifecycle Management: Spawn, monitor, terminate MCP servers (ready for implementation)
- Background Jobs System: Cron-like recurring tasks with automatic error handling
Current Implementation Architecture
MCP SDK Transport Layer
The satellite uses the official @modelcontextprotocol/sdk for all MCP client communication:
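A minimal sketch of this usage, assuming the SDK's `Client` and `StreamableHTTPClientTransport`; the client name, version string, and endpoint handling are illustrative rather than the satellite's actual code:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

export async function callRemoteTool(endpoint: string, toolName: string) {
  const client = new Client(
    { name: "deploystack-satellite", version: "0.1.0" },
    { capabilities: {} }
  );

  // The transport handles connection establishment, sessions, and cleanup.
  const transport = new StreamableHTTPClientTransport(new URL(endpoint));
  await client.connect(transport);

  // Standard MCP methods exposed by the SDK client.
  const { tools } = await client.listTools();
  const result = await client.callTool({ name: toolName, arguments: {} });

  await client.close();
  return { tools, result };
}
```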
MCP Transport Endpoints
Active Endpoints:
- GET /mcp - Establish SSE stream via MCP SDK
- POST /mcp - Send JSON-RPC messages via MCP SDK
- DELETE /mcp - Session termination via MCP SDK
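The sketch below shows one way these endpoints could be wired into Fastify using the SDK's `StreamableHTTPServerTransport`. It is simplified to a single transport instance (real per-session transport management is omitted), and the server name, port, and route handler details are assumptions:

```typescript
import Fastify from "fastify";
import { randomUUID } from "node:crypto";
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = Fastify({ logger: true });

const mcpServer = new Server(
  { name: "deploystack-satellite", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// The SDK transport owns session negotiation: GET opens the SSE stream,
// POST carries JSON-RPC messages, DELETE tears the session down.
const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: () => randomUUID(),
});
await mcpServer.connect(transport);

app.all("/mcp", async (request, reply) => {
  reply.hijack(); // let the SDK write directly to the raw Node response
  await transport.handleRequest(request.raw, reply.raw, request.body);
});

await app.listen({ port: 3000, host: "0.0.0.0" });
```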
Core SDK Components
MCP Server Wrapper:
- Official SDK Server integration with Fastify
- Standard MCP protocol method handlers
- Automatic session and transport management
- Integration with existing tool discovery and process management
MCP Client Wrapper:
- StreamableHTTPClientTransport for external server communication
- Automatic connection establishment and cleanup
- Standard MCP method execution (listTools, callTool)
- Built-in error handling and retry logic
MCP Protocol Implementation
Supported MCP Methods:
- initialize - MCP session initialization (SDK automatic)
- notifications/initialized - Client initialization complete
- tools/list - List available meta-tools (hierarchical router: 2 tools only)
- tools/call - Execute meta-tools or route to actual MCP servers
- resources/list - List available resources (returns empty array)
- resources/templates/list - List resource templates (returns empty array)
- prompts/list - List available prompts (returns empty array)
Hierarchical Router: The satellite exposes only two meta-tools to MCP clients (discover_mcp_tools and execute_mcp_tool) instead of all available tools. This solves the MCP context window consumption problem by reducing token usage by 95%+. See Hierarchical Router Implementation for details; a brief sketch follows the error-handling list below.
Error Handling:
- Standard JSON-RPC 2.0 compliant error responses via SDK
- Automatic HTTP status code mapping
- Structured error logging with operation tracking
- Built-in session validation and error reporting
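As a rough illustration of the hierarchical router, the sketch below registers only the two meta-tools with the SDK's request handlers. The input schemas and the `searchToolCatalog()` / `routeToolCall()` helpers are hypothetical placeholders for the satellite's tool cache and routing logic:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "satellite-router", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// tools/list advertises only the two meta-tools, regardless of how many
// tools the managed MCP servers actually expose.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "discover_mcp_tools",
      description: "Search the tools available on this satellite",
      inputSchema: { type: "object", properties: { query: { type: "string" } } },
    },
    {
      name: "execute_mcp_tool",
      description: "Execute a tool on one of the managed MCP servers",
      inputSchema: {
        type: "object",
        properties: { tool: { type: "string" }, arguments: { type: "object" } },
        required: ["tool"],
      },
    },
  ],
}));

// tools/call either answers a discovery query or routes to the real server.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name === "discover_mcp_tools") return searchToolCatalog(args);
  if (name === "execute_mcp_tool") return routeToolCall(args);
  throw new Error(`Unknown meta-tool: ${name}`);
});

// Placeholder implementations; the real satellite consults its cached tool
// catalog and forwards the call to the owning MCP server.
async function searchToolCatalog(args: Record<string, unknown> | undefined) {
  return { content: [{ type: "text", text: JSON.stringify({ matches: [], query: args }) }] };
}
async function routeToolCall(args: Record<string, unknown> | undefined) {
  return { content: [{ type: "text", text: `Would route: ${JSON.stringify(args)}` }] };
}
```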
Planned Full Architecture
Three-Tier System Design
Satellite Internal Architecture (Planned)
Each satellite instance will contain five core components:
Deployment Models
Global Satellites
Operated by DeployStack Team:
- Infrastructure: Cloud-hosted (AWS, GCP, Azure)
- Scope: Serve all teams with resource isolation
- Scaling: Auto-scaling based on demand
- Management: Centralized by DeployStack operations
- Use Case: Teams wanting shared infrastructure
- Zero Installation: URL-based configuration
- Instant Availability: No setup or deployment required
- Automatic Updates: Invisible to users
- Global Scale: Multi-region deployment
Team Satellites
Customer-Deployed:
- Infrastructure: Customer's corporate networks
- Scope: Single team exclusive access
- Scaling: Customer-controlled resources
- Management: Team administrators
- Use Case: Internal resource access, compliance requirements
- Internal Access: Company databases, APIs, file systems
- Data Sovereignty: Data never leaves corporate network
- Complete Control: Customer owns infrastructure
- Compliance Ready: Meets enterprise security requirements
Communication Patterns
Client-to-Satellite Communication
Multiple Transport Protocols:
- SSE (Server-Sent Events): Real-time streaming with session management
- Streamable HTTP: Chunked responses with optional sessions
- Direct HTTP Tools: Standard REST API calls
Session Management:
- Session ID: 32-byte cryptographically secure identifier
- Timeout: 30-minute automatic cleanup
- Activity Tracking: Updated on each message
- State Management: Client info and initialization status
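A minimal sketch of this session model, assuming an in-memory store; the `Session` interface and helper names are illustrative, but the 32-byte base64url IDs and 30-minute timeout follow the values above:

```typescript
import { randomBytes } from "node:crypto";

interface Session {
  id: string;
  createdAt: number;
  lastActivity: number;
  initialized: boolean;
}

const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // 30-minute inactivity timeout
const sessions = new Map<string, Session>();

export function createSession(): Session {
  const session: Session = {
    id: randomBytes(32).toString("base64url"), // 32-byte cryptographically secure ID
    createdAt: Date.now(),
    lastActivity: Date.now(),
    initialized: false,
  };
  sessions.set(session.id, session);
  return session;
}

// Activity tracking: called for every message that references the session.
export function touchSession(id: string): Session | undefined {
  const session = sessions.get(id);
  if (session) session.lastActivity = Date.now();
  return session;
}

// Automatic cleanup of idle sessions once per minute.
setInterval(() => {
  const now = Date.now();
  for (const [id, session] of sessions) {
    if (now - session.lastActivity > SESSION_TIMEOUT_MS) sessions.delete(id);
  }
}, 60_000);
```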
Satellite-to-Backend Communication
HTTP Polling Pattern:
- Outbound Only: Firewall-friendly
- Priority-Based Polling: Four modes (immediate/high/normal/slow) with automatic transitions
- Command Queue: Priority-based task processing with expiration and correlation IDs
- Status Reporting: Real-time health and metrics every 30 seconds
- Configuration Sync: Dynamic MCP server configuration updates
- Error Recovery: Exponential backoff with maximum 5-minute intervals
- 3-Second Response Time: Immediate priority commands enable near real-time responses
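To make the polling pattern concrete, here is a hedged sketch of an outbound polling loop with priority-based intervals and capped exponential backoff. Only the 3-second immediate interval and the 5-minute backoff cap come from the list above; the other intervals, the endpoint URL, and the response shape are assumptions:

```typescript
type PollingMode = "immediate" | "high" | "normal" | "slow";

// Placeholder intervals except the documented 3s immediate mode.
const POLL_INTERVALS_MS: Record<PollingMode, number> = {
  immediate: 3_000,
  high: 10_000,
  normal: 30_000,
  slow: 120_000,
};
const MAX_BACKOFF_MS = 5 * 60 * 1000;

let mode: PollingMode = "normal";
let backoffMs = 0;

async function handleCommand(command: unknown): Promise<void> {
  // Placeholder: dispatch spawn/kill/restart/health_check commands here.
}

async function pollOnce(): Promise<void> {
  // Outbound-only request: the satellite always initiates the connection.
  const response = await fetch("https://backend.example.com/api/satellites/commands", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ satelliteId: "sat-123" }),
  });
  if (!response.ok) throw new Error(`Polling failed with status ${response.status}`);
  const { commands = [], nextMode } = (await response.json()) as {
    commands?: unknown[];
    nextMode?: PollingMode;
  };
  for (const command of commands) await handleCommand(command);
  if (nextMode) mode = nextMode; // backend can promote or demote the polling priority
}

export async function pollLoop(): Promise<void> {
  for (;;) {
    try {
      await pollOnce();
      backoffMs = 0; // success resets the backoff
    } catch {
      // Exponential backoff, capped at 5 minutes.
      backoffMs = Math.min(backoffMs > 0 ? backoffMs * 2 : POLL_INTERVALS_MS[mode], MAX_BACKOFF_MS);
    }
    await new Promise((resolve) => setTimeout(resolve, backoffMs || POLL_INTERVALS_MS[mode]));
  }
}
```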
Real-Time Event System
Event Emission with Batching:
- Immediate Emission: Events emitted when actions occur (not delayed by 30s heartbeat)
- Automatic Batching: Events collected for 3 seconds, then sent as single batch (max 100 events)
- Memory Management: In-memory queue (10,000 event limit) with overflow protection
- Graceful Error Handling: 429 exponential backoff, 400 drops invalid events, 500/network errors retry
- 10 Event Types: Server lifecycle, client connections, tool discovery, configuration updates
- Heartbeat (every 30s): Aggregate metrics, system health, resource usage
- Events (immediate): Point-in-time occurrences, user actions, precise timestamps
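An illustrative sketch of this batching behavior, assuming a simple in-memory queue; the backend endpoint and event shape are placeholders, and the 429 exponential backoff is simplified to a retry on the next flush:

```typescript
interface SatelliteEvent {
  type: string;
  timestamp: string;
  data: Record<string, unknown>;
}

const FLUSH_INTERVAL_MS = 3_000;  // events collected for 3 seconds
const MAX_BATCH_SIZE = 100;       // max events per batch
const MAX_QUEUE_SIZE = 10_000;    // in-memory queue limit

const queue: SatelliteEvent[] = [];

// Immediate emission: callers enqueue the moment an action occurs.
export function emitEvent(type: string, data: Record<string, unknown>): void {
  if (queue.length >= MAX_QUEUE_SIZE) queue.shift(); // overflow protection: drop oldest
  queue.push({ type, timestamp: new Date().toISOString(), data });
}

async function flush(): Promise<void> {
  if (queue.length === 0) return;
  const batch = queue.splice(0, MAX_BATCH_SIZE);
  try {
    const response = await fetch("https://backend.example.com/api/satellites/events", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ events: batch }),
    });
    // 429 and 5xx are retried on a later flush; 400 drops the invalid batch.
    if (response.status === 429 || response.status >= 500) queue.unshift(...batch);
  } catch {
    queue.unshift(...batch); // network error: keep the batch for the next attempt
  }
}

setInterval(() => void flush(), FLUSH_INTERVAL_MS);
```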
Security Architecture
Current Security (No Authentication)
Session-Based Isolation:
- Cryptographic Session IDs: 32-byte secure identifiers
- Session Timeout: 30-minute automatic cleanup
- Activity Tracking: Prevents session hijacking
- Error Handling: Secure error responses
Planned Security Features
Team Isolation:
- Linux Namespaces: PID, network, filesystem isolation
- Process Groups: Separate process trees per team
- User Isolation: Dedicated system users per team
- cgroups v2: CPU and memory limits
- Resource Quotas: 0.1 CPU cores, 100MB RAM per process
- Automatic Cleanup: 5-minute idle timeout
- OAuth 2.1 Resource Server: Backend token validation
- Scope-Based Access: Fine-grained permissions
- Team Context: Automatic team resolution from tokens
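Since this is planned rather than implemented, the following is only a sketch of how an OAuth 2.1 resource server could validate tokens via RFC 7662 introspection against the Backend; the introspection URL, client credentials, and the team_id claim are assumptions:

```typescript
interface IntrospectionResult {
  active: boolean;
  scope?: string;
  team_id?: string; // assumed claim used for automatic team resolution
}

async function validateToken(token: string): Promise<IntrospectionResult> {
  const response = await fetch("https://backend.example.com/oauth/introspect", {
    method: "POST",
    headers: {
      "content-type": "application/x-www-form-urlencoded",
      authorization: "Basic " + Buffer.from("satellite-client:secret").toString("base64"),
    },
    body: new URLSearchParams({ token }),
  });
  if (!response.ok) throw new Error(`Introspection failed: ${response.status}`);
  return (await response.json()) as IntrospectionResult;
}

// Scope-based access check with team context taken from the token.
export async function authorize(token: string, requiredScope: string): Promise<string> {
  const result = await validateToken(token);
  if (!result.active) throw new Error("Token is not active");
  const scopes = (result.scope ?? "").split(" ");
  if (!scopes.includes(requiredScope)) throw new Error("Insufficient scope");
  if (!result.team_id) throw new Error("Token has no team context");
  return result.team_id;
}
```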
MCP Server Management
Dual MCP Server Support
stdio Subprocess Servers:
- Local Execution: MCP servers as Node.js child processes
- JSON-RPC Communication: Full MCP protocol 2024-11-05 over stdin/stdout
- Process Lifecycle: Spawn, monitor, auto-restart (max 3 attempts), terminate
- Team Isolation: Processes tracked by team_id with environment-based security
- Tool Discovery: Automatic tool caching with namespacing
- Resource Limits: nsjail in production (100MB RAM, 60s CPU, 50 processes)
- Development Mode: Plain spawn() on all platforms for easy debugging
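A sketch of the stdio path using the SDK's StdioClientTransport, which spawns the server as a child process and speaks JSON-RPC over stdin/stdout; the command, package name, and team-scoped environment variable below are assumptions:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

export async function startStdioServer(teamId: string) {
  // The transport spawns the MCP server as a child process and frames
  // JSON-RPC messages over its stdin/stdout.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@example/some-mcp-server"],
    env: { TEAM_ID: teamId }, // environment-based team isolation
  });

  const client = new Client(
    { name: "deploystack-satellite", version: "0.1.0" },
    { capabilities: {} }
  );
  await client.connect(transport); // spawn + MCP initialize handshake

  // Tool discovery: results would be cached and namespaced per server/team.
  const { tools } = await client.listTools();
  return { client, tools };
}
```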
HTTP Proxy Servers:
- External Endpoints: Proxy to remote MCP servers
- Load Balancing: Distribute requests across instances
- Health Monitoring: Endpoint availability checks
- Tool Discovery: Automatic at startup from remote endpoints
Process Management
Lifecycle Operations:
- Process Health: CPU, memory, responsiveness
- MCP Protocol: Tool availability, response times
- Automatic Recovery: Restart failed processes
- Resource Limits: Enforce team quotas
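A simplified sketch of this monitoring loop; the three-attempt restart cap mirrors the stdio section above, while the `ManagedServer` interface and its helper methods are hypothetical:

```typescript
interface ManagedServer {
  name: string;
  restartAttempts: number;
  ping(): Promise<boolean>;     // e.g. a lightweight tools/list round-trip
  restart(): Promise<void>;
  terminate(): Promise<void>;
}

const MAX_RESTART_ATTEMPTS = 3;

export async function monitor(servers: ManagedServer[]): Promise<void> {
  for (const server of servers) {
    const healthy = await server.ping().catch(() => false);
    if (healthy) {
      server.restartAttempts = 0; // healthy servers reset their restart budget
      continue;
    }
    if (server.restartAttempts < MAX_RESTART_ATTEMPTS) {
      server.restartAttempts += 1;
      await server.restart();     // automatic recovery
    } else {
      await server.terminate();   // give up and report the failure upstream
    }
  }
}
```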
Technical Implementation Details
Current Implementation Specifications
- Session ID Length: 32 bytes base64url encoded
- Session Timeout: 30 minutes of inactivity
- JSON-RPC Version: 2.0 strict compliance
- HTTP Framework: Fastify with JSON Schema validation
- Logging: Pino structured logging with operation tracking
- Error Handling: Complete HTTP status code mapping
Planned Resource Jailing Specifications
- CPU Limit: 0.1 cores per MCP server process
- Memory Limit: 100MB RAM per MCP server process
- Process Timeout: 5-minute idle timeout for automatic cleanup
- Isolation Method: Linux namespaces + cgroups v2
Technology Stack
- HTTP Framework: Fastify with @fastify/http-proxy (planned)
- Process Communication: stdio JSON-RPC for local MCP servers (planned)
- Authentication: OAuth 2.1 Resource Server with token introspection (planned)
- Logging: Pino structured logging
- Build System: TypeScript + Webpack
Development Setup
Clone and Setup:
Implementation Status
The satellite service has completed Phase 1 (MCP Transport Implementation) and Phase 4 (Backend Integration). The current implementation provides:
Phase 1 - MCP Transport Layer:
- Complete MCP Transport Layer: SSE, SSE Messaging, Streamable HTTP
- Session Management: Cryptographically secure with automatic cleanup
- JSON-RPC 2.0 Compliance: Full protocol support with error handling
Phase 4 - Backend Integration:
- Command Polling Service: Adaptive polling with three modes (normal/immediate/error)
- Dynamic Configuration Management: Replaces hardcoded MCP server configurations
- Command Processing: HTTP MCP server management (spawn/kill/restart/health_check)
- Heartbeat Service: Process status reporting and system metrics
- Configuration Sync: Real-time MCP server configuration updates
- Event System: Real-time event emission with automatic batching (13 event types including tool metadata)
- HTTP Server: Fastify with Swagger documentation
- Logging System: Pino with structured logging
- Build Pipeline: TypeScript compilation and bundling
- Development Workflow: Hot reload and code quality tools
- Background Jobs System: Cron-like job management for recurring tasks

