This guide covers deploying a production-ready DeployStack Satellite with nsjail process isolation for secure multi-team environments. For development or single-team deployments, see the Quick Start guide.
When to use this guide:
  • Production deployments serving multiple teams
  • Enterprise environments with strict security requirements
  • Shared infrastructure where teams need complete isolation
  • Multi-tenant satellite deployments
For development or single-team usage, the Docker Compose setup is simpler and sufficient.

Overview

Production satellites provide enterprise-grade security through:
  • nsjail Process Isolation: Complete process separation per team with Linux namespaces and cgroup enforcement
  • Resource Limits: CPU, memory, and process limits per MCP server (virtual RAM unlimited via rlimit, 512MB physical RAM via cgroup when enabled, 60s CPU, 1000 processes)
  • Multi-Runtime Support: Node.js (npx) and Python (uvx) with runtime-aware isolation
  • Filesystem Jailing: Read-only system directories, isolated writable spaces per runtime
  • Non-Root Execution: Satellite runs as dedicated deploystack user
  • Audit Logging: Complete activity tracking with automatic rotation

Prerequisites

System Requirements

  • Operating System: Debian 13 (Trixie) - required for nsjail compatibility
  • RAM: Minimum 4GB (8GB+ recommended for multiple teams)
  • Storage: 20GB+ available disk space
  • Network: Outbound HTTPS access to DeployStack Backend
  • Access: Root/sudo access for initial setup

Required Knowledge

  • Linux system administration
  • systemd service management
  • Basic networking and firewall configuration

Installation Process

The installation follows a two-phase approach: a one-time system setup, followed by the satellite installation itself. In total there are four steps:
  1. System Setup: Install system dependencies (Node.js, Python, nsjail) and configure the kernel (run once per server).
  2. Satellite Installation: Build the satellite service and configure it for your environment.
  3. Service Configuration: Create the systemd service and start the satellite.
  4. Verification: Confirm the satellite is running correctly and registered with the backend.

Phase 1: System Setup

Install Node.js 24

DeployStack Satellite requires Node.js 24 for compatibility with the latest MCP protocol features.
# Add NodeSource repository
curl -fsSL https://deb.nodesource.com/setup_24.x | sudo bash -

# Install Node.js
sudo apt-get install -y nodejs

# Verify installation
node --version  # Should show v24.x.x
npm --version

Install Python and UV

DeployStack Satellite supports Python MCP servers via uvx (UV package runner).
# Install Python 3
sudo apt-get install -y python3 python3-pip

# Install UV (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Verify installation
python3 --version  # Should show Python 3.x
uvx --version      # Should show uvx version
Python Runtime Support: The satellite automatically detects Python MCP servers and spawns them using uvx with runtime-aware isolation. Python and Node.js servers run in separate cache directories for complete isolation.

Install nsjail

nsjail provides the process isolation that enables secure multi-team satellite operation.
Why nsjail? nsjail uses Linux namespaces and cgroups to create completely isolated environments for each team’s MCP servers. This prevents teams from accessing each other’s data or interfering with other processes.
# Install build dependencies
sudo apt-get update
sudo apt-get install -y \
  autoconf \
  bison \
  flex \
  gcc \
  g++ \
  git \
  libprotobuf-dev \
  libnl-route-3-dev \
  libtool \
  make \
  pkg-config \
  protobuf-compiler

# Clone and build nsjail
cd /tmp
git clone --depth 1 https://github.com/google/nsjail.git
cd nsjail
make

# Install to system
sudo cp nsjail /usr/local/bin/
sudo chmod +x /usr/local/bin/nsjail

# Verify installation
nsjail --version

# Cleanup
cd /
rm -rf /tmp/nsjail

Configure Kernel for User Namespaces

nsjail requires unprivileged user namespaces to be enabled at the kernel level.
# Create sysctl configuration
echo 'kernel.unprivileged_userns_clone=1' | sudo tee /etc/sysctl.d/99-deploystack-userns.conf

# Apply immediately
sudo sysctl -p /etc/sysctl.d/99-deploystack-userns.conf

# Verify setting
cat /proc/sys/kernel/unprivileged_userns_clone
# Should return: 1
Important: This kernel setting is required for nsjail to function. Without it, all MCP server spawns will fail. The setting persists across reboots via the sysctl configuration file.

Create Service User

Create a dedicated non-root user for running the satellite service.
# Create deploystack user with home directory
sudo useradd -r -s /bin/bash -m -d /opt/deploystack deploystack

# Verify user creation
id deploystack

Set Up Logging Infrastructure

Configure log directories and rotation for the satellite service.
# Create log directory
sudo mkdir -p /var/log/deploystack-satellite
sudo chown deploystack:deploystack /var/log/deploystack-satellite
sudo chmod 755 /var/log/deploystack-satellite

# Create logrotate configuration
sudo tee /etc/logrotate.d/deploystack-satellite > /dev/null << 'EOF'
/var/log/deploystack-satellite/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 0640 deploystack deploystack
    sharedscripts
    postrotate
        systemctl reload deploystack-satellite > /dev/null 2>&1 || true
    endscript
}
EOF
Log Rotation: Logs rotate daily and retain 7 days of history by default. Adjust the rotate value in the logrotate configuration if you need longer retention.
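You can validate the configuration before relying on it; logrotate's -d flag performs a dry run without touching any files:

```shell
# Validate the rotation config without rotating anything (-d = debug/dry-run)
sudo logrotate -d /etc/logrotate.d/deploystack-satellite

# Force one immediate rotation to confirm permissions and the postrotate hook
sudo logrotate -f /etc/logrotate.d/deploystack-satellite
```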

Phase 2: Satellite Installation

Clone or Copy Satellite Code

# Switch to deploystack user
sudo su - deploystack

# Clone the repository (or copy satellite code to /opt/deploystack)
cd /opt/deploystack
git clone https://github.com/deploystackio/deploystack.git
cd deploystack/services/satellite

Install Dependencies and Build

# Install npm dependencies
npm install

# Build TypeScript code
npm run build

# Verify build output
ls -la dist/index.js  # Should exist

Create MCP Cache Directory

# Create cache directory for MCP server packages (runtime-specific subdirectories created automatically)
mkdir -p /opt/deploystack/mcp-cache
Runtime-Aware Caching: The satellite automatically creates runtime-specific cache directories:
  • /opt/deploystack/mcp-cache/node/{team_id} - Node.js packages (npm)
  • /opt/deploystack/mcp-cache/python/{team_id} - Python packages (UV)
This ensures complete isolation between different runtimes and teams.
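You can verify the per-runtime layout and check per-team cache usage from the shell (team_abc is a hypothetical team ID; substitute a real one from your deployment):

```shell
# Inspect per-team cache usage for each runtime
TEAM_ID=team_abc
for runtime in node python; do
  dir="/opt/deploystack/mcp-cache/${runtime}/${TEAM_ID}"
  echo "$dir"
  if [ -d "$dir" ]; then
    du -sh "$dir"
  fi
done
```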

Create GitHub Deployment Base Directory

# Create base directory for GitHub-based MCP server deployments
# Required for tmpfs isolation of GitHub repository installations
# (run these from your root/sudo shell - /opt is not writable by the deploystack user)
sudo mkdir -p /opt/mcp-deployments
sudo chown deploystack:deploystack /opt/mcp-deployments
sudo chmod 755 /opt/mcp-deployments
Critical Requirement: This directory is required for GitHub-based MCP server installations. Without it, the satellite will fail to start in production mode with a clear error message:
❌ FATAL: GitHub deployment base directory does not exist: /opt/mcp-deployments
Fix: sudo mkdir -p /opt/mcp-deployments && sudo chown deploystack:deploystack /opt/mcp-deployments
Why this directory is needed: When users install MCP servers directly from GitHub repositories (e.g., github:owner/repo#ref), the satellite:
  1. Downloads the GitHub tarball
  2. Creates a tmpfs mount at /opt/mcp-deployments/{team_id}/{installation_id}
  3. Extracts the code into the tmpfs mount (300MB size limit)
  4. Builds and runs the MCP server in isolated memory
This approach provides secure, isolated execution for GitHub-sourced MCP servers without polluting the filesystem.
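The tmpfs mounts described above are visible from the host. A quick way to inspect them, using hypothetical team and installation IDs:

```shell
# Hypothetical IDs - substitute values from your deployment
TEAM_ID=team_abc
INSTALL_ID=inst_123
MOUNT_PATH="/opt/mcp-deployments/${TEAM_ID}/${INSTALL_ID}"
echo "$MOUNT_PATH"

# List all active tmpfs mounts for GitHub-based installations
# (no output means no GitHub-based servers are currently running)
findmnt -t tmpfs | grep /opt/mcp-deployments
```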

Configure Environment

Create the .env file with your production configuration.
Registration Token: You must generate this token from your DeployStack admin interface before proceeding. Navigate to Admin → Satellites → Pairing to generate a global satellite token.
# Create .env file
cat > .env << 'EOF'
# DeployStack Satellite Configuration

# Server Configuration
PORT=3001
NODE_ENV=production
LOG_LEVEL=info

# Backend Connection
DEPLOYSTACK_BACKEND_URL=https://cloud.deploystack.io
DEPLOYSTACK_BACKEND_POLLING_INTERVAL=60

# Satellite Public URL (REQUIRED for remote MCP client connections)
# This is the publicly accessible URL where MCP clients connect
# Used for OAuth 2.0 Protected Resource Metadata (RFC 9728)
# Example: https://satellite.example.com (no /mcp or /sse paths)
DEPLOYSTACK_SATELLITE_URL=https://satellite.example.com

# Satellite Identity (10-32 chars, lowercase a-z0-9-_ only)
DEPLOYSTACK_SATELLITE_NAME=prod-satellite-001

# Registration Token (from admin panel)
DEPLOYSTACK_REGISTRATION_TOKEN=deploystack_satellite_global_eyJhbGc...

# Status Display
DEPLOYSTACK_STATUS_SHOW_UPTIME=true
DEPLOYSTACK_STATUS_SHOW_VERSION=true
DEPLOYSTACK_STATUS_SHOW_MCP_DEBUG_ROUTE=false

# Event System
EVENT_BATCH_INTERVAL_MS=3000
EVENT_MAX_BATCH_SIZE=100

# nsjail Resource Limits
NSJAIL_MEMORY_LIMIT_MB=inf               # Virtual memory limit — "inf" required for Node.js WASM (undici reserves ~10GB virtual address space)
NSJAIL_CGROUP_MEM_MAX_BYTES=536870912    # Physical memory limit: 512MB (cgroup, only active with Delegate=yes in systemd unit)
NSJAIL_CPU_TIME_LIMIT_SECONDS=60         # CPU time limit
NSJAIL_MAX_PROCESSES=1000                # Process limit (rlimit)
NSJAIL_CGROUP_PIDS_MAX=1000              # Process limit (cgroup)
NSJAIL_RLIMIT_NOFILE=1024                # File descriptor limit
NSJAIL_RLIMIT_FSIZE=50                   # Max file size in MB
NSJAIL_TMPFS_SIZE=100M                   # Tmpfs size for /tmp

# Process Idle Timeout (seconds, 0 to disable)
MCP_PROCESS_IDLE_TIMEOUT_SECONDS=180
EOF

# Secure the environment file
chmod 600 .env
Satellite Name Requirements:
  • Length: 10-32 characters
  • Characters: lowercase letters (a-z), numbers (0-9), hyphens (-), underscores (_)
  • No spaces or uppercase letters
  • Must be unique across your DeployStack deployment
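A quick way to check a candidate name against these rules before editing .env (a sketch using grep's extended regex):

```shell
# Validate a satellite name: 10-32 chars, lowercase letters, digits, hyphens, underscores
NAME="prod-satellite-001"
if echo "$NAME" | grep -Eq '^[a-z0-9_-]{10,32}$'; then
  echo "valid: $NAME"
else
  echo "invalid: $NAME"
fi
```

For prod-satellite-001 this prints valid: prod-satellite-001; uniqueness across your deployment still has to be checked in the admin interface.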

Create Systemd Service

Exit back to root/sudo user to create the systemd service.
# Exit deploystack user
exit

# Create systemd service file
sudo tee /etc/systemd/system/deploystack-satellite.service > /dev/null << 'EOF'
[Unit]
Description=DeployStack Satellite Service
Documentation=https://docs.deploystack.io
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=deploystack
Group=deploystack
WorkingDirectory=/opt/deploystack/deploystack/services/satellite

# Start command
ExecStart=/usr/bin/node --env-file=.env dist/index.js

# Logging
StandardOutput=append:/var/log/deploystack-satellite/satellite.log
StandardError=append:/var/log/deploystack-satellite/error.log

# Restart policy
Restart=always
RestartSec=10

# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/deploystack/deploystack/services/satellite/persistent_data
ReadWritePaths=/var/log/deploystack-satellite
ReadWritePaths=/opt/deploystack/mcp-cache
ReadWritePaths=/opt/mcp-deployments

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd
sudo systemctl daemon-reload
Security Features: The systemd service runs with several security hardening options:
  • NoNewPrivileges: Prevents privilege escalation
  • PrivateTmp: Isolated /tmp directory
  • ProtectSystem: Read-only system directories
  • ProtectHome: Restricted home directory access
  • ReadWritePaths: Only specific directories are writable
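After creating the unit file, you can confirm the hardening directives are present:

```shell
# Each of the four hardening directives should appear in the unit file
grep -E 'NoNewPrivileges|PrivateTmp|ProtectSystem|ProtectHome' \
  /etc/systemd/system/deploystack-satellite.service
```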

Start the Service

# Enable service for automatic startup
sudo systemctl enable deploystack-satellite

# Start the service
sudo systemctl start deploystack-satellite

# Check status
sudo systemctl status deploystack-satellite

Verification

Check Service Status

# View service status
sudo systemctl status deploystack-satellite

# View live logs
sudo tail -f /var/log/deploystack-satellite/satellite.log

# Check for errors
sudo tail -f /var/log/deploystack-satellite/error.log

Verify Port Listening

# Check if port 3001 is listening
sudo ss -tlnp | grep :3001

# Test health endpoint
curl http://localhost:3001/api/status/backend

Check Registration

Look for successful registration in the logs:
sudo grep "registered successfully" /var/log/deploystack-satellite/satellite.log
You should see:
✅ Satellite registered successfully: prod-satellite-001
🔑 API key received and ready for authenticated communication

Verify in Admin Interface

  1. Log in to your DeployStack admin interface
  2. Navigate to Admin → Satellites
  3. Confirm your satellite appears with status “Active”
  4. Check last heartbeat timestamp is recent

Service Management

Common Commands

# Start service
sudo systemctl start deploystack-satellite

# Stop service
sudo systemctl stop deploystack-satellite

# Restart service
sudo systemctl restart deploystack-satellite

# View status
sudo systemctl status deploystack-satellite

# Enable auto-start on boot
sudo systemctl enable deploystack-satellite

# Disable auto-start
sudo systemctl disable deploystack-satellite

# View logs (last 50 lines)
sudo journalctl -u deploystack-satellite -n 50

# Follow logs in real-time
sudo journalctl -u deploystack-satellite -f

Updating the Satellite

# Stop service
sudo systemctl stop deploystack-satellite

# Switch to deploystack user
sudo su - deploystack
cd /opt/deploystack/deploystack/services/satellite

# Pull latest code
git pull

# Rebuild
npm install
npm run build

# Exit back to root
exit

# Start service
sudo systemctl start deploystack-satellite

# Verify
sudo systemctl status deploystack-satellite

Security Considerations

nsjail Isolation

Production satellites use nsjail to provide:
  • PID Namespace Isolation: Each team’s MCP servers run in separate process trees
  • Mount Namespace Isolation: Isolated filesystem view per team
  • IPC Namespace Isolation: Separate inter-process communication
  • UTS Namespace Isolation: Each team gets a unique hostname with an mcp- prefix
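You can observe this isolation directly by comparing namespace IDs between the satellite and a jailed MCP process; the pgrep patterns below are assumptions, so adjust them to your process list:

```shell
# Compare namespace IDs: a jailed MCP process should differ from the
# satellite in the pid, mnt, ipc, and uts namespaces
SAT_PID=$(pgrep -f 'dist/index.js' | head -n1)
MCP_PID=$(pgrep -f nsjail | head -n1)
for ns in pid mnt ipc uts; do
  echo "$ns: satellite=$(readlink /proc/$SAT_PID/ns/$ns) mcp=$(readlink /proc/$MCP_PID/ns/$ns)"
done
```

Matching IDs on any line would indicate that namespace isolation is not active for that namespace type.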

Resource Limits

Each MCP server process is limited to:
  • Virtual Memory: unlimited (rlimit_as = inf — required because Node.js v24 uses WASM internally which reserves ~10GB of virtual address space; this is virtual, not physical RAM)
  • Physical Memory: 512MB via cgroup (only active when Delegate=yes is set in the systemd unit — see below)
  • CPU Time: 60 seconds (enforced via rlimit_cpu)
  • Processes: 1000 (enforced via rlimit_nproc and cgroup pids.max, required for package managers like npm and uvx)
  • File Descriptors: 1024 (enforced via rlimit_nofile)
  • Maximum File Size: 50MB (enforced via rlimit_fsize)
  • tmpfs /tmp: 100MB (enforced via tmpfs mount)
Cgroup limits are auto-detected: The satellite automatically detects whether cgroup v2 is available and delegated. When running as a systemd service with Delegate=yes, physical memory (512MB) and PID limits are enforced via cgroup in addition to rlimits. Without Delegate=yes, the satellite falls back to rlimit-only mode — nsjail still runs safely with full namespace isolation. See the Enable Cgroup Limits section below to activate precise physical memory enforcement.
Primary Security = Namespace Isolation: The satellite’s security model relies on Linux namespaces (PID, Mount, User, IPC, UTS) to isolate MCP servers from each other and the host system. Resource limits (rlimits) provide secondary DoS protection. With user namespace active, all privilege escalation attacks (including setuid-based rlimit bypasses) are prevented.
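The rlimit-based limits can be verified on a running jailed process via /proc (the pgrep pattern is an assumption; adjust it to your process list):

```shell
# Inspect the rlimits actually applied to a jailed MCP process
MCP_PID=$(pgrep -f nsjail | head -n1)
grep -E 'Max cpu time|Max processes|Max open files|Max file size' "/proc/${MCP_PID}/limits"
```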

Network Security

Configure firewall rules for production:
# Allow only backend communication (satellite polls backend)
# No inbound rules needed - satellite uses outbound polling

# Optional: Allow local status checks
sudo ufw allow from 127.0.0.1 to any port 3001

# If you need external access to satellite (not recommended)
sudo ufw allow 3001/tcp

Troubleshooting

Service Won’t Start

Check logs for errors:
sudo journalctl -u deploystack-satellite -n 100
sudo tail -50 /var/log/deploystack-satellite/error.log
Common issues:
  • Missing registration token in .env
  • Invalid satellite name format
  • Backend URL unreachable
  • Port 3001 already in use

nsjail Spawning Failures

Symptoms:
  • MCP servers fail to spawn
  • Errors mentioning “clone” or “namespace”
Check kernel setting:
cat /proc/sys/kernel/unprivileged_userns_clone
# Must return: 1
Verify nsjail installation:
nsjail --version
which nsjail

Registration Fails

Check registration token:
# View current token in .env (be careful - this is sensitive)
sudo -u deploystack grep REGISTRATION_TOKEN /opt/deploystack/deploystack/services/satellite/.env
Common registration issues:
  • Token expired (global tokens expire after 1 hour)
  • Token already used (tokens are single-use)
  • Backend URL incorrect or unreachable
  • Network connectivity issues
Test backend connectivity:
curl -I https://cloud.deploystack.io

High Memory Usage

Check process memory:
# View satellite memory usage
sudo systemctl status deploystack-satellite | grep Memory

# View all MCP server processes (they run as the deploystack user;
# note that `ps aux` truncates long usernames, so filter by user instead)
ps -u deploystack -f | grep -E 'node|nsjail'
Adjust idle timeout to terminate unused processes faster:
# Edit .env file
sudo -u deploystack nano /opt/deploystack/deploystack/services/satellite/.env

# Change MCP_PROCESS_IDLE_TIMEOUT_SECONDS to lower value (e.g., 60)
# Restart service
sudo systemctl restart deploystack-satellite

Port Already in Use

Find what’s using port 3001:
sudo lsof -i :3001
sudo ss -tlnp | grep :3001
Change satellite port:
# Edit .env file
sudo -u deploystack nano /opt/deploystack/deploystack/services/satellite/.env

# Change PORT=3001 to another port
# Restart service
sudo systemctl restart deploystack-satellite

Monitoring and Maintenance

Log Management

View current logs:
# Satellite logs
sudo tail -f /var/log/deploystack-satellite/satellite.log

# Error logs
sudo tail -f /var/log/deploystack-satellite/error.log

# All logs
sudo tail -f /var/log/deploystack-satellite/*.log
Check log disk usage:
sudo du -sh /var/log/deploystack-satellite
Manual log cleanup:
# Remove logs older than 7 days
sudo find /var/log/deploystack-satellite -name "*.log" -mtime +7 -delete

Health Monitoring

Set up automated health checks:
# Create health check script
sudo tee /usr/local/bin/check-satellite-health > /dev/null << 'EOF'
#!/bin/bash
if systemctl is-active --quiet deploystack-satellite; then
    if curl -sf http://localhost:3001/api/status/backend > /dev/null; then
        echo "OK"
        exit 0
    else
        echo "WARN: Service running but not responding"
        exit 1
    fi
else
    echo "ERROR: Service not running"
    exit 2
fi
EOF

sudo chmod +x /usr/local/bin/check-satellite-health

# Test health check
sudo /usr/local/bin/check-satellite-health
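To run the check automatically, a cron entry works well; the 5-minute interval and log path below are illustrative choices:

```shell
# Run the health check every 5 minutes; log output for later review
echo '*/5 * * * * root /usr/local/bin/check-satellite-health >> /var/log/deploystack-satellite/health.log 2>&1' \
  | sudo tee /etc/cron.d/deploystack-satellite-health
sudo chmod 644 /etc/cron.d/deploystack-satellite-health
```

Files in /etc/cron.d require the user field (root here) between the schedule and the command.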

Performance Monitoring

Monitor satellite performance metrics:
# CPU and memory usage of the satellite process
# (the process command line is "node ... dist/index.js", so match on that)
top -p "$(pgrep -f 'dist/index.js' | head -n1)"

# Detailed process information
sudo systemctl status deploystack-satellite

# Network connections
sudo ss -tn | grep :3001

Production Best Practices

Backup Configuration

Regularly backup your satellite configuration:
# Backup persistent data and configuration
sudo mkdir -p /opt/backups
sudo tar czf /opt/backups/satellite-backup-$(date +%Y%m%d).tar.gz \
  /opt/deploystack/deploystack/services/satellite/.env \
  /opt/deploystack/deploystack/services/satellite/persistent_data
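Restoring is the reverse operation; the archive name below is a placeholder for one of your dated backups:

```shell
# Restore a backup archive (stop the service first so files are not
# overwritten while the satellite is writing to them)
sudo systemctl stop deploystack-satellite
sudo tar xzf /opt/backups/satellite-backup-20250101.tar.gz -C /
sudo systemctl start deploystack-satellite
```

GNU tar strips the leading slash when creating the archive, so extracting with -C / restores the files to their original locations.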

Update Strategy

  1. Test updates in staging environment first
  2. Schedule maintenance windows for updates
  3. Keep backup of previous working version
  4. Monitor logs closely after updates

Security Auditing

Regularly review:
  • Systemd service permissions
  • Log file permissions
  • Environment file security (.env should be 600)
  • User and group ownership

Capacity Planning

Monitor and plan for:
  • Number of active MCP server processes
  • Memory usage per team
  • Log disk usage growth
  • Network bandwidth for backend communication

Enable Cgroup Limits

By default the satellite runs in rlimit-only mode. Adding Delegate=yes to the systemd unit gives the satellite ownership of its cgroup subtree, which activates precise physical memory (512MB) and PID enforcement per MCP process. No code changes are needed — the satellite auto-detects cgroup availability at startup.

1. Modify Systemd Service File

Edit /etc/systemd/system/deploystack-satellite.service and add Delegate=yes to the [Service] section on its own line (systemd does not support inline comments after values):
[Service]
Type=simple
User=deploystack
Group=deploystack
Delegate=yes
WorkingDirectory=/opt/deploystack/deploystack/services/satellite
...

2. Reload and Restart Service

sudo systemctl daemon-reload
sudo systemctl restart deploystack-satellite

3. Verify Cgroup Limits Are Active

Check the startup log for confirmation:
sudo grep "cgroup_detection" /var/log/deploystack-satellite/satellite.log
You should see a line like:
Cgroup v2 available at /sys/fs/cgroup/system.slice/deploystack-satellite.service — memory/PID limits will be enforced
If you see Cgroup v2 unavailable instead, verify that Delegate=yes is in the service file and that you reloaded systemd. You can also check active limits on a running MCP process:
# Find a running MCP process PID
ps aux | grep "npx.*mcp"

# Check its cgroup assignment (replace {pid} with actual PID)
cat /proc/{pid}/cgroup

# Check enforced limits
cat /sys/fs/cgroup/system.slice/deploystack-satellite.service/NSJAIL.*/memory.max
cat /sys/fs/cgroup/system.slice/deploystack-satellite.service/NSJAIL.*/pids.max
Cgroup limits are optional. The rlimit-only default provides strong security through namespace isolation and adequate DoS protection. Cgroup limits add precise physical memory enforcement per MCP process, which is useful in high-density multi-team environments where a single runaway process consuming all RAM would otherwise affect other teams.

Need help? Join our Discord community or check GitHub Issues for support.