Satellite Backend Communication
DeployStack Satellite implements outbound-only HTTP polling communication with the Backend, following the GitHub Actions runner pattern for enterprise firewall compatibility. This document describes the communication implementation from the satellite perspective.
Communication Pattern
HTTP Polling Architecture
Satellites initiate all communication using outbound HTTPS requests:
- Works through corporate firewalls without inbound rules
- Functions behind network address translation (NAT)
- Supports corporate HTTP proxies
- No exposed satellite endpoints required
Adaptive Polling Strategy
Satellites adjust polling frequency based on Backend guidance:
- Immediate Mode: 2-second intervals when urgent commands pending
- Normal Mode: 30-second intervals for routine operations
- Backoff Mode: Exponential backoff up to 5 minutes on errors
- Maintenance Mode: Reduced polling during maintenance windows
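The mode-to-interval mapping above can be sketched as a small selector function. This is a minimal sketch: the mode names follow the strategy list, while the backoff base and the error-count handling are assumptions.

```typescript
// Hypothetical sketch of the adaptive polling strategy; exact backoff
// formula is an assumption, the interval targets come from the list above.
type PollMode = "immediate" | "normal" | "backoff" | "maintenance";

function nextPollDelayMs(mode: PollMode, consecutiveErrors = 0): number {
  switch (mode) {
    case "immediate":
      return 2_000; // urgent commands pending
    case "normal":
      return 30_000; // routine operations
    case "backoff":
      // exponential backoff on errors, capped at 5 minutes
      return Math.min(30_000 * 2 ** consecutiveErrors, 300_000);
    case "maintenance":
      return 300_000; // reduced polling during maintenance windows
  }
}
```

The satellite would call this after every poll cycle, feeding back the mode the Backend last suggested and the current error streak.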
Communication Channels
The satellite uses three distinct communication channels with the Backend:
1. Command Polling (Backend → Satellite)
- Backend creates commands, satellite polls and executes
- Adaptive intervals: 2-60 seconds based on command priority
- Used for: MCP server configuration, process management, system updates
- Direction: Backend initiates, satellite responds
2. Heartbeat Reporting (Satellite → Backend)
- Satellite reports status every 30 seconds
- Contains: System metrics, process counts, resource usage
- Used for: Health monitoring, capacity planning, aggregate analytics
- Direction: Satellite reports on fixed schedule
3. Event Reporting (Satellite → Backend)
- Satellite emits events when actions occur, batched every 3 seconds
- Contains: Point-in-time occurrences with precise timestamps
- Used for: Real-time UI updates, audit trails, user notifications
- Direction: Satellite reports immediately (not waiting for heartbeat)
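The 3-second batching in channel 3 can be sketched as a small buffer that a scheduler drains periodically. The class and method names here are assumptions, not the satellite's actual API.

```typescript
// Minimal sketch of the channel-3 event batcher: events accumulate locally
// and a scheduler calls flushNow() on the 3-second batching interval above.
class EventBatcher<T> {
  private buffer: T[] = [];

  constructor(private readonly flush: (events: T[]) => void) {}

  // record a point-in-time occurrence, e.g. "process started"
  emit(event: T): void {
    this.buffer.push(event);
  }

  // drain the buffer; in the satellite this would run on a timer,
  // e.g. setInterval(() => batcher.flushNow(), 3_000)
  flushNow(): void {
    if (this.buffer.length === 0) return; // nothing happened this tick
    const batch = this.buffer;
    this.buffer = [];
    this.flush(batch);
  }
}
```

Batching keeps event reporting near-real-time for UI updates while avoiding one HTTP request per event.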
Current Implementation
Phase 1: Basic Connection Testing ✅
The satellite currently implements basic Backend connectivity:
Environment Configuration:
- Connection testing with 5-second timeout
- Health endpoint validation at /api/health
- Structured error responses with timing metrics
- Last connection status and response time tracking
Debug Endpoint: GET /api/status/backend
- Returns connection status for troubleshooting
Phase 2: Satellite Registration ✅
Satellite registration is now fully implemented with secure JWT-based token authentication that prevents unauthorized satellite connections. For complete registration documentation, see Satellite Registration. For backend token management details, see Registration Token Authentication.
Phase 3: Heartbeat Authentication ✅
API Key Authentication:
- Bearer token authentication implemented for heartbeat requests
- API key validation using argon2 hash verification
- Automatic key rotation on satellite re-registration
- 30-second interval heartbeat reporting
- System metrics collection (CPU, memory, uptime)
- Process status reporting (empty array for now)
- Authenticated communication with Backend
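The heartbeat payload described above can be sketched with Node's `os` module for the system metrics. The field names here are assumptions; the Backend's actual heartbeat schema may differ.

```typescript
import os from "node:os";

// Hypothetical sketch of the 30-second heartbeat payload; field names
// are assumptions, metrics come from the list above.
function buildHeartbeat(satelliteId: string) {
  return {
    satelliteId,
    timestamp: new Date().toISOString(),
    metrics: {
      cpuLoad1m: os.loadavg()[0],                    // 1-minute load average
      memoryUsedBytes: os.totalmem() - os.freemem(), // memory in use
      uptimeSeconds: os.uptime(),                    // host uptime
    },
    processes: [] as unknown[], // empty until process management lands
  };
}
```

A sender would POST this body every 30 seconds with the `Authorization: Bearer {api_key}` header.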
Phase 4: Command Polling ✅
Command Polling Implementation:
- Adaptive polling intervals based on command priorities
- Command queue processing with immediate, high, and normal priorities
- Status reporting and acknowledgment system
- Automatic polling mode switching based on pending commands
- immediate priority commands trigger 2-second polling intervals
- high priority commands trigger 10-second polling intervals
- normal priority commands trigger 30-second polling intervals
- No pending commands default to 60-second polling intervals
- MCP installation commands trigger configuration refresh
- MCP deletion commands trigger process cleanup
- System update commands trigger component updates
- Command completion reporting with correlation IDs
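The priority-to-interval rules above reduce to "the highest-priority pending command decides the next interval". A minimal sketch, with function and type names as assumptions:

```typescript
// Hypothetical sketch of the Phase 4 interval selection: the highest
// pending priority wins; intervals come from the list above.
type CommandPriority = "immediate" | "high" | "normal";

function pollingIntervalMs(pending: CommandPriority[]): number {
  if (pending.includes("immediate")) return 2_000;
  if (pending.includes("high")) return 10_000;
  if (pending.includes("normal")) return 30_000;
  return 60_000; // no pending commands
}
```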
Communication Components
Command Polling
Scope-Aware Endpoints:
- Global Satellites: /api/satellites/global/{satelliteId}/commands
- Team Satellites: /api/teams/{teamId}/satellites/{satelliteId}/commands
- X-Last-Poll header for incremental updates
- Backend-guided polling intervals
- Command priority handling
- Automatic retry with exponential backoff
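Assembling a scope-aware poll request can be sketched as below. The endpoint paths come from the list above; the option names and the use of the Bearer header for polling are assumptions.

```typescript
// Hypothetical sketch: build the URL and headers for one poll cycle.
// Paths follow the scope-aware endpoints above; option names are assumed.
interface PollTarget {
  baseUrl: string;
  satelliteId: string;
  teamId?: string;   // set for team satellites, absent for global ones
  apiKey: string;
  lastPoll?: string; // timestamp of the previous poll
}

function buildPollRequest(t: PollTarget): { url: string; headers: Record<string, string> } {
  const path = t.teamId
    ? `/api/teams/${t.teamId}/satellites/${t.satelliteId}/commands`
    : `/api/satellites/global/${t.satelliteId}/commands`;
  const headers: Record<string, string> = {
    Authorization: `Bearer ${t.apiKey}`,
  };
  if (t.lastPoll) headers["X-Last-Poll"] = t.lastPoll; // incremental updates only
  return { url: `${t.baseUrl}${path}`, headers };
}
```

The result can be passed straight to `fetch`, with retries wrapped around it for the exponential-backoff behavior.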
Status Reporting
Heartbeat Communication:
- System metrics (CPU, memory, disk usage)
- Process status for all running MCP servers
- Network information and connectivity status
- Performance metrics and error counts
- Execution status and timing
- Process spawn results
- Error logs and diagnostics
- Correlation ID tracking for user feedback
Resource Management
System Resource Limits
Per-Process Limits:
- 0.1 CPU cores maximum per MCP server process
- 100MB RAM maximum per MCP server process
- 5-minute idle timeout for automatic cleanup
- Maximum 50 concurrent processes per satellite
- Linux cgroups v2 for CPU and memory limits
- Process monitoring with automatic termination
- Resource usage reporting to Backend
- Early warning at 80% resource utilization
Team Isolation
Process-Level Isolation:
- Dedicated system users per team (satellite-team-123)
- Separate process groups for complete isolation
- Team-specific directories and permissions
- Network namespace isolation (optional)
- Team-scoped resource quotas
- Isolated credential management
- Separate logging and audit trails
- Team-aware command filtering
MCP Server Management
Dual MCP Server Support
stdio Subprocess Servers:
- Local MCP servers as child processes
- JSON-RPC communication over stdio
- Process lifecycle management (spawn, monitor, terminate)
- Team isolation with dedicated system users
HTTP Proxy Servers:
- External MCP server endpoints
- Reverse proxy with load balancing
- Health monitoring and failover
- Request/response caching
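For the stdio transport, messages are JSON-RPC 2.0 framed as newline-delimited JSON. A minimal sketch of the framing; the helper names are illustrative, not the satellite's actual API, and a production reader would additionally buffer incomplete lines from the stream.

```typescript
// Hypothetical sketch of stdio JSON-RPC framing: one JSON object per line.
function encodeJsonRpc(id: number, method: string, params: Record<string, unknown>): string {
  return JSON.stringify({ jsonrpc: "2.0", id, method, params }) + "\n";
}

function decodeJsonRpcLines(chunk: string): unknown[] {
  return chunk
    .split("\n")
    .filter((line) => line.trim().length > 0) // skip blank lines
    .map((line) => JSON.parse(line));
}
```

The satellite writes encoded requests to the child's stdin and decodes responses arriving on its stdout.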
Process Lifecycle
Spawn Process:
- Receive spawn command from Backend
- Validate team permissions and resource limits
- Create isolated process environment
- Start MCP server with stdio communication
- Report process status to Backend
Monitor Process:
- Continuous health checking
- Resource usage monitoring
- Automatic restart on failure
- Performance metrics collection
Terminate Process:
- Graceful shutdown with SIGTERM
- Force kill with SIGKILL after timeout
- Resource cleanup and deallocation
- Final status report to Backend
Internal Architecture
Five Core Components
1. HTTP Proxy Router
- Team-aware request routing
- OAuth 2.1 Resource Server integration
- Load balancing across MCP server instances
- Request/response logging for audit
2. Process Manager
- Process lifecycle management
- stdio JSON-RPC communication
- Health monitoring and restart logic
- Resource limit enforcement
3. Team Isolation Manager
- Linux namespaces and cgroups setup
- Team-specific user and directory creation
- Resource quota enforcement
- Credential injection and isolation
4. Backend Communicator
- HTTP polling with adaptive intervals
- Command queue processing
- Status and metrics reporting
- Configuration synchronization
5. MCP Protocol Handler
- stdio JSON-RPC protocol handling
- HTTP proxy request routing
- Session management and cleanup
- Error handling and recovery
Technology Stack
Core Technologies
HTTP Framework:
- Fastify with @fastify/http-proxy for reverse proxy
- JSON Schema validation for all requests
- Pino structured logging
- TypeScript with full type safety
Process Management:
- Node.js child_process for MCP server spawning
- stdio JSON-RPC communication
- Process monitoring with health checks
- Graceful shutdown handling
Security:
- OAuth 2.1 Resource Server for authentication
- Linux namespaces for process isolation
- cgroups v2 for resource limits
- Secure credential management
Development Setup
Local Development
Environment Configuration
DEPLOYSTACK_REGISTRATION_TOKEN is only required for initial satellite pairing. Once registered, satellites use their permanent API keys for all communication.
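For reference, a minimal environment file might look like the following. Only DEPLOYSTACK_REGISTRATION_TOKEN appears in this document; the other variable name is an assumption and may differ in the actual satellite.

```shell
# One-time pairing token issued by an admin (only needed before first registration)
DEPLOYSTACK_REGISTRATION_TOKEN=<jwt-from-admin>

# Backend base URL the satellite polls (assumed variable name)
DEPLOYSTACK_BACKEND_URL=https://backend.example.com
```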
Testing Backend Communication
Database Integration
The Backend maintains satellite state in five tables:
- satellites - Satellite registry and configuration
- satelliteCommands - Command queue management
- satelliteProcesses - Process status tracking
- satelliteUsageLogs - Usage analytics and audit
- satelliteHeartbeats - Health monitoring data
See services/backend/src/db/schema.sqlite.ts for complete schema definitions.
Security Implementation
Authentication Flow
Registration Phase:
- Admin generates JWT registration token via backend API
- Satellite includes token in Authorization header during registration
- Backend validates token signature, scope, and expiration
- Backend consumes single-use token and issues permanent API key
- Satellite stores API key securely for ongoing communication
Operational Phase:
- All requests include Authorization: Bearer {api_key}
- Backend validates API key and satellite scope
- Team context extracted from satellite registration
- Commands filtered based on team permissions
Team Isolation Security
Process Security:
- Each team gets dedicated system user
- Process trees isolated with Linux namespaces
- File system permissions prevent cross-team access
- Network isolation optional for enhanced security
Credential Security:
- Team credentials injected into process environment
- No credential sharing between teams
- Secure credential storage and rotation
- Audit logging for all credential access
Monitoring and Observability
Structured Logging
Log Context:
- trace: Detailed communication flows
- debug: Development debugging
- info: Normal operations
- warn: Resource limits, restarts
- error: Process failures, communication errors
- fatal: Satellite crashes
Metrics Collection
System Metrics:
- CPU, memory, disk usage per satellite
- Process count and resource utilization
- Network connectivity and latency
- Error rates and failure patterns
Usage Metrics:
- MCP tool usage per team
- Process spawn/termination rates
- Resource efficiency metrics
- User activity patterns
Implementation Status
Current Status:
- ✅ Basic Backend connection testing
- ✅ Fail-fast startup logic
- ✅ Debug endpoint for troubleshooting
- ✅ Environment configuration
- ✅ Satellite registration with upsert logic
- ✅ API key generation and management
- ✅ Bearer token authentication for requests
- ✅ Command polling loop with adaptive intervals
- ✅ Backend command creation system
- 🚧 Satellite command processing (in progress)
- 🚧 Process management (planned)
- 🚧 Team isolation (planned)
Next Steps:
- Complete satellite command processing implementation
- Build MCP server process management
- Implement team isolation and resource limits
- Add comprehensive monitoring and alerting
- End-to-end testing and performance validation
The satellite communication system is designed for enterprise deployment with complete team isolation, resource management, and audit logging while maintaining the developer experience that defines the DeployStack platform.