Satellite Backend Communication

DeployStack Satellite implements outbound-only HTTP polling communication with the Backend, following the GitHub Actions runner pattern for enterprise firewall compatibility. This document describes the communication implementation from the satellite's perspective.

Communication Pattern

HTTP Polling Architecture

Satellites initiate all communication using outbound HTTPS requests:
Satellite                    Backend
   │                           │
   │──── GET /commands ────────▶│  (Poll for pending commands)
   │                           │
   │◀─── Commands Response ────│  (MCP server tasks)
   │                           │
   │──── POST /heartbeat ──────▶│  (Report status, metrics)
   │                           │
   │◀─── Acknowledgment ───────│  (Confirm receipt)
Firewall Benefits:
  • Works through corporate firewalls without inbound rules
  • Functions behind network address translation (NAT)
  • Supports corporate HTTP proxies
  • No exposed satellite endpoints required

Adaptive Polling Strategy

Satellites adjust polling frequency based on Backend guidance:
  • Immediate Mode: 2-second intervals when urgent commands pending
  • Normal Mode: 30-second intervals for routine operations
  • Backoff Mode: Exponential backoff up to 5 minutes on errors
  • Maintenance Mode: Reduced polling during maintenance windows
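The mode-to-interval mapping and the backoff cap can be expressed compactly. A minimal sketch in TypeScript, assuming the constant names below (only the 2-second, 30-second, and 5-minute values come from the list above):
type PollingMode = 'immediate' | 'normal' | 'backoff' | 'maintenance';

// Base intervals in milliseconds, mirroring the modes listed above (assumed constants).
const BASE_INTERVALS: Record<PollingMode, number> = {
  immediate: 2_000,      // urgent commands pending
  normal: 30_000,        // routine operations
  backoff: 30_000,       // starting point before exponential growth
  maintenance: 300_000,  // reduced polling during maintenance windows
};

const MAX_BACKOFF_MS = 300_000; // cap at 5 minutes

function nextPollDelay(mode: PollingMode, consecutiveErrors: number): number {
  if (mode === 'backoff') {
    // Double the delay per consecutive error, capped at 5 minutes.
    return Math.min(BASE_INTERVALS.backoff * 2 ** consecutiveErrors, MAX_BACKOFF_MS);
  }
  return BASE_INTERVALS[mode];
}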

Communication Channels

The satellite uses three distinct communication channels with the Backend:
1. Command Polling (Backend → Satellite)
  • Backend creates commands, satellite polls and executes
  • Adaptive intervals: 2-60 seconds based on command priority
  • Used for: MCP server configuration, process management, system updates
  • Direction: Backend initiates, satellite responds
2. Heartbeat (Satellite → Backend, Periodic)
  • Satellite reports status every 30 seconds
  • Contains: System metrics, process counts, resource usage
  • Used for: Health monitoring, capacity planning, aggregate analytics
  • Direction: Satellite reports on fixed schedule
3. Events (Satellite → Backend, Immediate)
  • Satellite emits events when actions occur, batched every 3 seconds
  • Contains: Point-in-time occurrences with precise timestamps
  • Used for: Real-time UI updates, audit trails, user notifications
  • Direction: Satellite reports immediately (not waiting for heartbeat)
For detailed event system documentation, see Event System.
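As an illustration of the 3-second batching used by the events channel above, here is a minimal sketch; the event shape, the endpoint path, and the SATELLITE_API_KEY variable are assumptions, not the actual API:
interface SatelliteEvent {
  type: string;                  // e.g. 'process_started' (illustrative name)
  timestamp: string;             // precise ISO-8601 timestamp of the occurrence
  payload: Record<string, unknown>;
}

const pendingEvents: SatelliteEvent[] = [];

function emitEvent(event: SatelliteEvent): void {
  pendingEvents.push(event);     // queued immediately, sent with the next batch
}

// Flush the buffer every 3 seconds (assumed endpoint path and auth variable).
setInterval(async () => {
  if (pendingEvents.length === 0) return;
  const batch = pendingEvents.splice(0, pendingEvents.length);
  await fetch(`${process.env.DEPLOYSTACK_BACKEND_URL}/api/satellites/events`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.SATELLITE_API_KEY ?? ''}`,
    },
    body: JSON.stringify({ events: batch }),
  });
}, 3_000);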

Current Implementation

Phase 1: Basic Connection Testing ✅

The satellite currently implements basic Backend connectivity:
Environment Configuration:
# .env file
DEPLOYSTACK_BACKEND_URL=http://localhost:3000
Backend Client Service:
  • Connection testing with 5-second timeout
  • Health endpoint validation at /api/health
  • Structured error responses with timing metrics
  • Last connection status and response time tracking
Fail-Fast Startup Logic:
const connectionStatus = await backendClient.testConnection();
if (connectionStatus.connection_status === 'connected') {
  server.log.info('✅ Backend connection verified');
} else {
  server.log.error('❌ Backend unreachable - satellite cannot start');
  process.exit(1);
}
Debug Endpoint:
  • GET /api/status/backend - Returns connection status for troubleshooting
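A sketch of what the connection test could look like, using Node's built-in fetch with AbortSignal.timeout for the 5-second timeout; the function name is illustrative, and the field names mirror the debug endpoint response shown later rather than a confirmed internal API:
interface ConnectionStatus {
  backend_url: string;
  connection_status: 'connected' | 'unreachable';  // 'unreachable' is an assumed value
  response_time_ms: number;
  last_check: string;
}

async function testConnection(backendUrl: string): Promise<ConnectionStatus> {
  const started = Date.now();
  try {
    // Abort the request if the Backend does not answer within 5 seconds.
    const response = await fetch(`${backendUrl}/api/health`, {
      signal: AbortSignal.timeout(5_000),
    });
    return {
      backend_url: backendUrl,
      connection_status: response.ok ? 'connected' : 'unreachable',
      response_time_ms: Date.now() - started,
      last_check: new Date().toISOString(),
    };
  } catch {
    return {
      backend_url: backendUrl,
      connection_status: 'unreachable',
      response_time_ms: Date.now() - started,
      last_check: new Date().toISOString(),
    };
  }
}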

Phase 2: Satellite Registration ✅

Satellite registration is now fully implemented with secure JWT-based token authentication preventing unauthorized satellite connections. For complete registration documentation, see Satellite Registration. For backend token management details, see Registration Token Authentication.

Phase 3: Heartbeat Authentication ✅

API Key Authentication:
  • Bearer token authentication implemented for heartbeat requests
  • API key validation using argon2 hash verification
  • Automatic key rotation on satellite re-registration
Heartbeat Implementation:
  • 30-second interval heartbeat reporting
  • System metrics collection (CPU, memory, uptime)
  • Process status reporting (empty array for now)
  • Authenticated communication with Backend
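A minimal sketch of the 30-second heartbeat loop; the endpoint path, payload field names, and SATELLITE_API_KEY variable are assumptions for illustration, not the exact Backend contract:
import os from 'node:os';

const BACKEND_URL = process.env.DEPLOYSTACK_BACKEND_URL ?? 'http://localhost:3000';
const API_KEY = process.env.SATELLITE_API_KEY ?? '';   // assumed variable name

async function sendHeartbeat(satelliteId: string): Promise<void> {
  const payload = {
    satellite_id: satelliteId,
    uptime_seconds: Math.round(process.uptime()),
    cpu_load: os.loadavg()[0],                        // 1-minute load average
    memory_used_mb: Math.round((os.totalmem() - os.freemem()) / 1024 / 1024),
    processes: [],                                    // empty array for now
  };

  await fetch(`${BACKEND_URL}/api/satellites/${satelliteId}/heartbeat`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${API_KEY}`,             // API key issued at registration
    },
    body: JSON.stringify(payload),
  });
}

// Report status every 30 seconds.
setInterval(() => sendHeartbeat('satellite-01').catch(() => { /* retried on next tick */ }), 30_000);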

Phase 4: Command Polling ✅

Command Polling Implementation:
  • Adaptive polling intervals based on command priorities
  • Command queue processing with immediate, high, and normal priorities
  • Status reporting and acknowledgment system
  • Automatic polling mode switching based on pending commands
Priority-Based Polling:
  • immediate priority commands trigger 2-second polling intervals
  • high priority commands trigger 10-second polling intervals
  • normal priority commands trigger 30-second polling intervals
  • No pending commands default to 60-second polling intervals
Command Processing:
  • MCP installation commands trigger configuration refresh
  • MCP deletion commands trigger process cleanup
  • System update commands trigger component updates
  • Command completion reporting with correlation IDs
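A sketch of how polled commands might be dispatched by type and acknowledged with their correlation IDs; the command shape, type names, handler bodies, and result endpoint are illustrative assumptions:
interface SatelliteCommand {
  id: string;
  type: 'mcp_install' | 'mcp_delete' | 'system_update';  // assumed type names
  correlation_id: string;
  payload: Record<string, unknown>;
}

// Illustrative handlers; the real implementations refresh configuration,
// clean up processes, or update components as listed above.
const handlers: Record<SatelliteCommand['type'], (cmd: SatelliteCommand) => Promise<void>> = {
  mcp_install: async (cmd) => console.log('refreshing MCP configuration', cmd.payload),
  mcp_delete: async (cmd) => console.log('cleaning up MCP processes', cmd.payload),
  system_update: async (cmd) => console.log('updating components', cmd.payload),
};

async function processCommand(cmd: SatelliteCommand): Promise<void> {
  const started = Date.now();
  let status: 'completed' | 'failed' = 'completed';
  try {
    await handlers[cmd.type](cmd);
  } catch {
    status = 'failed';
  }
  // Report completion with the correlation ID so the Backend can surface user feedback.
  await fetch(`${process.env.DEPLOYSTACK_BACKEND_URL ?? 'http://localhost:3000'}/api/satellites/commands/${cmd.id}/result`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.SATELLITE_API_KEY ?? ''}`,
    },
    body: JSON.stringify({
      status,
      correlation_id: cmd.correlation_id,
      duration_ms: Date.now() - started,
    }),
  });
}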

Communication Components

Command Polling

Scope-Aware Endpoints:
  • Global Satellites: /api/satellites/global/{satelliteId}/commands
  • Team Satellites: /api/teams/{teamId}/satellites/{satelliteId}/commands
Polling Optimization:
  • X-Last-Poll header for incremental updates
  • Backend-guided polling intervals
  • Command priority handling
  • Automatic retry with exponential backoff
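A sketch combining the scope-aware endpoint selection and the X-Last-Poll header in a single poll request; the response shape noted in the comment is an assumption:
function commandsUrl(backendUrl: string, satelliteId: string, teamId?: string): string {
  // Team satellites poll the team-scoped path; global satellites use the global path.
  return teamId
    ? `${backendUrl}/api/teams/${teamId}/satellites/${satelliteId}/commands`
    : `${backendUrl}/api/satellites/global/${satelliteId}/commands`;
}

let lastPoll: string | undefined;

async function pollCommands(backendUrl: string, satelliteId: string, apiKey: string, teamId?: string) {
  const headers: Record<string, string> = { Authorization: `Bearer ${apiKey}` };
  if (lastPoll) headers['X-Last-Poll'] = lastPoll;   // request incremental updates only

  const response = await fetch(commandsUrl(backendUrl, satelliteId, teamId), { headers });
  lastPoll = new Date().toISOString();
  return response.json();                            // assumed: { commands: [...], next_poll_ms: number }
}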

Status Reporting

Heartbeat Communication:
  • System metrics (CPU, memory, disk usage)
  • Process status for all running MCP servers
  • Network information and connectivity status
  • Performance metrics and error counts
Command Result Reporting:
  • Execution status and timing
  • Process spawn results
  • Error logs and diagnostics
  • Correlation ID tracking for user feedback

Resource Management

System Resource Limits

Per-Process Limits:
  • 0.1 CPU cores maximum per MCP server process
  • 100MB RAM maximum per MCP server process
  • 5-minute idle timeout for automatic cleanup
  • Maximum 50 concurrent processes per satellite
Enforcement Methods:
  • Linux cgroups v2 for CPU and memory limits
  • Process monitoring with automatic termination
  • Resource usage reporting to Backend
  • Early warning at 80% resource utilization
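A minimal sketch of the per-process limits and the 80% early-warning check; the numeric limits come from the list above, while the usage-check helper itself is illustrative (actual enforcement relies on cgroups v2):
// Limits from the list above.
const LIMITS = {
  cpuCores: 0.1,
  memoryMb: 100,
  idleTimeoutMs: 5 * 60 * 1000,
  maxProcesses: 50,
};

const WARNING_THRESHOLD = 0.8;   // early warning at 80% utilization

interface ProcessUsage {
  pid: number;
  cpuCores: number;
  memoryMb: number;
}

function checkUsage(usage: ProcessUsage): 'ok' | 'warning' | 'over_limit' {
  const cpuRatio = usage.cpuCores / LIMITS.cpuCores;
  const memRatio = usage.memoryMb / LIMITS.memoryMb;
  const worst = Math.max(cpuRatio, memRatio);
  if (worst >= 1) return 'over_limit';               // terminate and report to Backend
  if (worst >= WARNING_THRESHOLD) return 'warning';  // emit early-warning event
  return 'ok';
}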

Team Isolation

Process-Level Isolation:
  • Dedicated system users per team (satellite-team-123)
  • Separate process groups for complete isolation
  • Team-specific directories and permissions
  • Network namespace isolation (optional)
Resource Boundaries:
  • Team-scoped resource quotas
  • Isolated credential management
  • Separate logging and audit trails
  • Team-aware command filtering

MCP Server Management

Dual MCP Server Support

stdio Subprocess Servers:
  • Local MCP servers as child processes
  • JSON-RPC communication over stdio
  • Process lifecycle management (spawn, monitor, terminate)
  • Team isolation with dedicated system users
HTTP Proxy Servers:
  • External MCP server endpoints
  • Reverse proxy with load balancing
  • Health monitoring and failover
  • Request/response caching

Process Lifecycle

Spawn Process:
  1. Receive spawn command from Backend
  2. Validate team permissions and resource limits
  3. Create isolated process environment
  4. Start MCP server with stdio communication
  5. Report process status to Backend
Monitor Process:
  • Continuous health checking
  • Resource usage monitoring
  • Automatic restart on failure
  • Performance metrics collection
Terminate Process:
  • Graceful shutdown with SIGTERM
  • Force kill with SIGKILL after timeout
  • Resource cleanup and deallocation
  • Final status report to Backend
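A sketch of the spawn and terminate steps using Node's child_process; the stdio wiring, the per-team uid parameter, and the 10-second timeout default are illustrative assumptions:
import { spawn, type ChildProcess } from 'node:child_process';

// Spawn an MCP server as a child process with stdio pipes for JSON-RPC.
function spawnMcpServer(command: string, args: string[], teamUid?: number): ChildProcess {
  return spawn(command, args, {
    stdio: ['pipe', 'pipe', 'pipe'],   // stdin/stdout carry JSON-RPC, stderr carries logs
    uid: teamUid,                      // dedicated system user per team (Linux only)
  });
}

// Graceful shutdown: SIGTERM first, SIGKILL if the process is still alive after the timeout.
function terminateMcpServer(child: ChildProcess, timeoutMs = 10_000): Promise<void> {
  return new Promise((resolve) => {
    const forceKill = setTimeout(() => child.kill('SIGKILL'), timeoutMs);
    child.once('exit', () => {
      clearTimeout(forceKill);
      resolve();                       // final status is reported to the Backend afterwards
    });
    child.kill('SIGTERM');
  });
}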

Internal Architecture

Five Core Components

1. HTTP Proxy Router
  • Team-aware request routing
  • OAuth 2.1 Resource Server integration
  • Load balancing across MCP server instances
  • Request/response logging for audit
2. MCP Server Manager
  • Process lifecycle management
  • stdio JSON-RPC communication
  • Health monitoring and restart logic
  • Resource limit enforcement
3. Team Resource Manager
  • Linux namespaces and cgroups setup
  • Team-specific user and directory creation
  • Resource quota enforcement
  • Credential injection and isolation
4. Backend Communicator
  • HTTP polling with adaptive intervals
  • Command queue processing
  • Status and metrics reporting
  • Configuration synchronization
5. Communication Manager
  • stdio JSON-RPC protocol handling
  • HTTP proxy request routing
  • Session management and cleanup
  • Error handling and recovery

Technology Stack

Core Technologies

HTTP Framework:
  • Fastify with @fastify/http-proxy for reverse proxy
  • JSON Schema validation for all requests
  • Pino structured logging
  • TypeScript with full type safety
Process Management:
  • Node.js child_process for MCP server spawning
  • stdio JSON-RPC communication
  • Process monitoring with health checks
  • Graceful shutdown handling
Security:
  • OAuth 2.1 Resource Server for authentication
  • Linux namespaces for process isolation
  • cgroups v2 for resource limits
  • Secure credential management

Development Setup

Local Development

# Clone and setup
git clone https://github.com/deploystackio/deploystack.git
cd deploystack/services/satellite
npm install

# Configure environment
cp .env.example .env
# Edit DEPLOYSTACK_BACKEND_URL and add DEPLOYSTACK_REGISTRATION_TOKEN
# Obtain registration token from backend admin interface first

# Start development server
npm run dev
# Server runs on http://localhost:3001

Environment Configuration

# Required environment variables
DEPLOYSTACK_BACKEND_URL=http://localhost:3000
DEPLOYSTACK_SATELLITE_NAME=dev-satellite-001
DEPLOYSTACK_REGISTRATION_TOKEN=deploystack_satellite_global_eyJhbGc...
LOG_LEVEL=debug
PORT=3001

# Optional configuration
NODE_ENV=development
Note: DEPLOYSTACK_REGISTRATION_TOKEN is only required for initial satellite pairing. Once registered, satellites use their permanent API keys for all communication.
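A small sketch of fail-fast validation for the required variables, using the names listed above; the check itself is illustrative, not the satellite's actual startup code:
// Fail fast when a required variable is missing.
const required = ['DEPLOYSTACK_BACKEND_URL', 'DEPLOYSTACK_SATELLITE_NAME'] as const;

for (const name of required) {
  if (!process.env[name]) {
    console.error(`Missing required environment variable: ${name}`);
    process.exit(1);
  }
}

// DEPLOYSTACK_REGISTRATION_TOKEN is deliberately not checked here: it is only
// needed for initial pairing, after which the permanent API key is used.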

Testing Backend Communication

# Test current connection
curl http://localhost:3001/api/status/backend

# Expected response
{
  "backend_url": "http://localhost:3000",
  "connection_status": "connected",
  "response_time_ms": 45,
  "last_check": "2025-01-05T10:30:00Z"
}

Database Integration

The Backend maintains satellite state in five tables:
  • satellites - Satellite registry and configuration
  • satelliteCommands - Command queue management
  • satelliteProcesses - Process status tracking
  • satelliteUsageLogs - Usage analytics and audit
  • satelliteHeartbeats - Health monitoring data
See services/backend/src/db/schema.sqlite.ts for complete schema definitions.

Security Implementation

Authentication Flow

Registration Phase:
  1. Admin generates JWT registration token via backend API
  2. Satellite includes token in Authorization header during registration
  3. Backend validates token signature, scope, and expiration
  4. Backend consumes single-use token and issues permanent API key
  5. Satellite stores API key securely for ongoing communication
For the detailed token validation process, see Registration Security.
Operational Phase:
  1. All requests include Authorization: Bearer {api_key}
  2. Backend validates API key and satellite scope
  3. Team context extracted from satellite registration
  4. Commands filtered based on team permissions
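A sketch of the registration exchange described above, assuming a registration endpoint path and an api_key response field for illustration:
async function registerSatellite(backendUrl: string, registrationToken: string, name: string): Promise<string> {
  // Single-use JWT registration token goes in the Authorization header.
  const response = await fetch(`${backendUrl}/api/satellites/register`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${registrationToken}`,
    },
    body: JSON.stringify({ name }),
  });

  if (!response.ok) {
    throw new Error(`Registration rejected: ${response.status}`);
  }

  // The Backend consumes the token and returns a permanent API key (assumed field name).
  const { api_key } = await response.json() as { api_key: string };
  return api_key;   // stored securely and used as the Bearer token for all later requests
}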

Team Isolation Security

Process Security:
  • Each team gets dedicated system user
  • Process trees isolated with Linux namespaces
  • File system permissions prevent cross-team access
  • Network isolation optional for enhanced security
Credential Management:
  • Team credentials injected into process environment
  • No credential sharing between teams
  • Secure credential storage and rotation
  • Audit logging for all credential access

Monitoring and Observability

Structured Logging

Log Context:
server.log.info({
  satelliteId: 'satellite-01',
  teamId: 'team-123',
  operation: 'mcp_server_spawn',
  serverId: 'filesystem-server',
  duration: '2.3s'
}, 'MCP server spawned successfully');
Log Levels:
  • trace: Detailed communication flows
  • debug: Development debugging
  • info: Normal operations
  • warn: Resource limits, restarts
  • error: Process failures, communication errors
  • fatal: Satellite crashes

Metrics Collection

System Metrics:
  • CPU, memory, disk usage per satellite
  • Process count and resource utilization
  • Network connectivity and latency
  • Error rates and failure patterns
Business Metrics:
  • MCP tool usage per team
  • Process spawn/termination rates
  • Resource efficiency metrics
  • User activity patterns

Implementation Status

Current Status:
  • ✅ Basic Backend connection testing
  • ✅ Fail-fast startup logic
  • ✅ Debug endpoint for troubleshooting
  • ✅ Environment configuration
  • ✅ Satellite registration with upsert logic
  • ✅ API key generation and management
  • ✅ Bearer token authentication for requests
  • ✅ Command polling loop with adaptive intervals
  • ✅ Backend command creation system
  • 🚧 Satellite command processing (in progress)
  • 🚧 Process management (planned)
  • 🚧 Team isolation (planned)
Next Milestones:
  1. Complete satellite command processing implementation
  2. Build MCP server process management
  3. Implement team isolation and resource limits
  4. Add comprehensive monitoring and alerting
  5. End-to-end testing and performance validation
The satellite communication system is designed for enterprise deployment with complete team isolation, resource management, and audit logging while maintaining the developer experience that defines the DeployStack platform.