Architecture Overview
Per-User Instance Model
DeployStack follows a per-user instance architecture:
- Installation: MCP server installed for a team (row in mcpServerInstallations)
- Instance: Per-user running process with merged config (row in mcpServerInstances)
- ProcessId: Unique identifier for each instance
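To illustrate these relationships, the sketch below models the two tables as plain TypeScript types. Field names beyond installation_id, user_id, status, and ProcessId are assumptions for illustration, not the actual DeployStack schema.

```typescript
// Illustrative types only -- field names beyond installation_id, user_id,
// status, and ProcessId are assumptions, not the actual DeployStack schema.

type InstanceStatus =
  | 'provisioning'
  | 'awaiting_user_config'
  | 'connecting'
  | 'discovering_tools'
  | 'syncing_tools'
  | 'online';

/** Team-level installation record (mcpServerInstallations). Carries no status. */
interface McpServerInstallation {
  id: string;
  teamId: string;     // assumed: installations are scoped to a team
  serverSlug: string; // assumed: which MCP server was installed
}

/** Per-user running process with merged config (mcpServerInstances). */
interface McpServerInstance {
  id: string;
  installationId: string; // -> McpServerInstallation.id
  userId: string;         // the team member this instance belongs to
  processId: string;      // ProcessId: unique per user across all teams
  status: InstanceStatus; // status lives ONLY here, never on the installation
}
```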
ProcessId Format
Each instance has a unique ProcessId that includes the user identifier. This provides:
- Unique process identification across all users and teams
- User-specific process routing via OAuth token
- Independent lifecycle management per user
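The exact ProcessId format is not reproduced here; the helper below is a purely hypothetical illustration of an identifier derived from the installation and user identifiers, matching the property that each user's process can be addressed individually.

```typescript
// Purely hypothetical -- the real ProcessId format may differ. The key
// property is that the user identifier is embedded, so each user's process
// can be addressed and managed independently of other team members.
function buildProcessId(installationId: string, userId: string): string {
  return `${installationId}-${userId}`;
}

buildProcessId('inst_1', 'user_alice'); // 'inst_1-user_alice'
buildProcessId('inst_1', 'user_bob');   // 'inst_1-user_bob'
```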
Independent Status Tracking
Each user’s instance has independent status tracking:
- Status exists ONLY in the mcpServerInstances table
- No installation-level status aggregation across users
- Each user sees only their own instance status
- Other team members’ status doesn’t affect your tools
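For example, two members of the same team can see completely different states for the same installation at the same moment. The snapshot below reuses the illustrative McpServerInstance type sketched earlier; the values are made up.

```typescript
// Illustrative snapshot: same installation, two users, independent statuses.
const instances: McpServerInstance[] = [
  { id: 'mi_1', installationId: 'inst_1', userId: 'user_alice',
    processId: 'inst_1-user_alice', status: 'online' },
  { id: 'mi_2', installationId: 'inst_1', userId: 'user_bob',
    processId: 'inst_1-user_bob', status: 'awaiting_user_config' },
];
// Alice can use her tools; Bob still has to provide his user-level config.
// Neither row is ever rolled up into an installation-level status.
```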
Lifecycle Process A: MCP Server Installation
Trigger: Team admin installs MCP server for the team
Backend Operations
Create Installation Record
Create a mcpServerInstallations row (team-level installation record)
Create Admin Instance
Create a mcpServerInstances row for the installing admin:
- installation_id → installation.id
- user_id → admin.id
- status → ‘provisioning’ (or ‘awaiting_user_config’ if the admin didn’t provide required user fields)
Provision Other Team Members
For each other team member, create a mcpServerInstances row:
- installation_id → installation.id
- user_id → member.id
- status → ‘provisioning’ (or ‘awaiting_user_config’ if the server requires user-level config)
Send Satellite Command
Send a configure command to all global satellites (priority: immediate)
Satellite Operations
Receive Command
Fetch Configurations
Spawn Processes
Spawn processes for each instance (skipping instances in awaiting_user_config status)
Emit Status Events
Status events include the user_id field
Progress Through States
provisioning → connecting → discovering_tools → syncing_tools → online
Result
- Each team member gets their own instance with independent status
- Members who provided config can use the MCP server immediately
- Members without required user-level config remain in awaiting_user_config status until they configure
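A minimal sketch of the backend operations in this lifecycle, using hypothetical helper functions (createInstallationRow, createInstanceRow, listTeamMemberIds, hasRequiredUserConfig, and sendSatelliteCommand are placeholders, not the actual DeployStack API):

```typescript
// Hypothetical helpers -- placeholders for the real backend/database layer.
declare function createInstallationRow(row: { teamId: string; serverSlug: string }): Promise<{ id: string }>;
declare function createInstanceRow(row: { installationId: string; userId: string; status: string }): Promise<void>;
declare function listTeamMemberIds(teamId: string): Promise<string[]>; // includes the installing admin
declare function hasRequiredUserConfig(userId: string, serverSlug: string): Promise<boolean>;
declare function sendSatelliteCommand(type: 'configure', payload: Record<string, unknown>): Promise<void>;

// Sketch of Lifecycle Process A: one installation row, one instance row per member.
async function installMcpServer(teamId: string, serverSlug: string): Promise<void> {
  // 1. Team-level installation record (no status fields here).
  const installation = await createInstallationRow({ teamId, serverSlug });

  // 2 + 3. Per-user instance rows for the admin and every other team member.
  for (const userId of await listTeamMemberIds(teamId)) {
    const configured = await hasRequiredUserConfig(userId, serverSlug);
    await createInstanceRow({
      installationId: installation.id,
      userId,
      status: configured ? 'provisioning' : 'awaiting_user_config',
    });
  }

  // 4. Ask all global satellites to reconcile (priority: immediate).
  await sendSatelliteCommand('configure', { installationId: installation.id, priority: 'immediate' });
}
```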
Note: Members without required user-level config get instances created with status='awaiting_user_config'. The satellite does NOT spawn processes for these instances until the user completes their configuration. See Status Tracking for details.
Lifecycle Process B: MCP Server Deletion
Trigger: Team admin deletes the MCP installation
Backend Operations
Delete Installation
Delete the mcpServerInstallations row
CASCADE Delete Instances
All related mcpServerInstances rows are deleted automatically
Send Satellite Command
Send a configure command to all global satellites
Satellite Operations
Receive Command
Terminate All Processes
Clean Up State
Remove from Cache
Result
- All instances deleted from database
- All processes terminated on satellites
- No orphaned processes or database rows
- Complete cleanup across all team members
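The CASCADE behaviour can be expressed at the schema level, so the backend only has to delete the installation row. Below is a minimal sketch assuming a Drizzle-style schema; table and column names are illustrative, not the actual DeployStack schema.

```typescript
import { sqliteTable, text } from 'drizzle-orm/sqlite-core';

// Illustrative schema sketch -- table and column names are not the real ones.
export const mcpServerInstallations = sqliteTable('mcp_server_installations', {
  id: text('id').primaryKey(),
  teamId: text('team_id').notNull(),
  // Deliberately no status columns: status lives only on instances.
});

export const mcpServerInstances = sqliteTable('mcp_server_instances', {
  id: text('id').primaryKey(),
  // ON DELETE CASCADE: deleting the installation row removes every user's instance row.
  installationId: text('installation_id')
    .notNull()
    .references(() => mcpServerInstallations.id, { onDelete: 'cascade' }),
  userId: text('user_id').notNull(),
  status: text('status').notNull(),
});
```

With the constraint at the schema level, the deletion handler only needs to delete the installation row and send the configure command; the instance rows go with it.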
Lifecycle Process C: Team Member Added
Trigger: Team admin adds a new member to the team
Backend Operations
Create Membership
Query Team Installations
Create Instances
For each team installation, create a mcpServerInstances row:
- installation_id → installation.id
- user_id → new_member.id
- status → ‘provisioning’ (or ‘awaiting_user_config’ if the server requires user-level config)
Send Satellite Commands
Send a configure command to all global satellites (one per installation)
Satellite Operations
Receive Commands
Fetch Updated Configs
Spawn Processes
Spawn processes as needed (skipping awaiting_user_config instances)
Emit Status Events
Status events include the new member’s user_id
Await First Connection
Result
- New member has instances for ALL team MCP servers
- Processes spawn on demand when member makes first request
- Each instance has independent status (no aggregation)
- Member must configure required user-level fields before instances become online
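A minimal sketch of the backend side of this lifecycle, with the same kind of hypothetical helpers as in Process A (listTeamInstallations, createInstanceRow, hasRequiredUserConfig, and sendSatelliteCommand are placeholders). It also shows the log-and-continue behaviour described later under "Team member added but instance creation fails".

```typescript
// Hypothetical helpers -- placeholders for the real backend layer.
declare function listTeamInstallations(teamId: string): Promise<{ id: string; serverSlug: string }[]>;
declare function createInstanceRow(row: { installationId: string; userId: string; status: string }): Promise<void>;
declare function hasRequiredUserConfig(userId: string, serverSlug: string): Promise<boolean>;
declare function sendSatelliteCommand(type: 'configure', payload: Record<string, unknown>): Promise<void>;

// Sketch of Lifecycle Process C: the new member gets one instance per existing installation.
async function onTeamMemberAdded(teamId: string, newMemberId: string): Promise<void> {
  for (const installation of await listTeamInstallations(teamId)) {
    try {
      const configured = await hasRequiredUserConfig(newMemberId, installation.serverSlug);
      await createInstanceRow({
        installationId: installation.id,
        userId: newMemberId,
        status: configured ? 'provisioning' : 'awaiting_user_config',
      });
      // One configure command per installation.
      await sendSatelliteCommand('configure', { installationId: installation.id });
    } catch (err) {
      // Instance creation failure must not block membership: log and continue.
      console.error('Instance creation failed', { installationId: installation.id, err });
    }
  }
}
```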
Lifecycle Process D: Team Member Removed
Trigger: Team admin removes a member from the team
Backend Operations
Delete Member Instances
Delete all mcpServerInstances rows for that user in this team
Send Satellite Command
Send a configure command to all global satellites
Emit Backend Event
Emit TEAM_MEMBER_REMOVED (audit trail and notifications)
Satellite Operations
Receive Command
Terminate Member Processes
Clean Up State
Remove from Runtime
Result
- All member’s instances deleted from database
- All member’s processes terminated on satellites
- No status recalculation needed (status only exists per-instance)
- Other team members’ instances remain unaffected
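A short sketch of the backend side, focused on the scoping rule above: only the removed member's rows for this team's installations are touched. Helper names (listTeamInstallationIds, deleteInstances, sendSatelliteCommand) are placeholders.

```typescript
// Hypothetical helpers -- placeholders for the real backend layer.
declare function listTeamInstallationIds(teamId: string): Promise<string[]>;
declare function deleteInstances(filter: { installationIds: string[]; userId: string }): Promise<void>;
declare function sendSatelliteCommand(type: 'configure', payload: Record<string, unknown>): Promise<void>;

// Sketch of Lifecycle Process D: only the removed member's rows are touched.
async function onTeamMemberRemoved(teamId: string, removedUserId: string): Promise<void> {
  const installationIds = await listTeamInstallationIds(teamId);
  // Scope: this user's instances for this team's installations -- nothing else.
  await deleteInstances({ installationIds, userId: removedUserId });
  // Satellites terminate the member's processes on their next command poll.
  await sendSatelliteCommand('configure', { teamId, removedUserId });
  // No status recalculation: status only ever existed on the deleted instance rows.
}
```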
Status Tracking Design
Per-User Status Only
Status fields have been completely removed from the mcpServerInstallations table. Status exists ONLY in mcpServerInstances.
API Behavior
Status Endpoints:
- GET /teams/:teamId/mcp/installations/:installationId/status: Returns the authenticated user’s instance status only
- GET /teams/:teamId/mcp/installations/:installationId/status-stream: SSE stream of the user’s instance status changes
- No installation-level status aggregation across users
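For example, a client can follow its own instance status with a plain EventSource against the stream endpoint above. The payload shape ({ status }) below is an assumption for illustration, not the documented wire format.

```typescript
// Browser-side sketch: follow your own instance status over SSE.
const teamId = 'team_123';
const installationId = 'inst_1';

const stream = new EventSource(
  `/teams/${teamId}/mcp/installations/${installationId}/status-stream`,
);

stream.onmessage = (event) => {
  const update = JSON.parse(event.data) as { status: string };
  // Only the authenticated user's own instance status arrives on this stream.
  console.log('my instance status:', update.status);
};

stream.onerror = () => {
  // EventSource reconnects automatically; log for visibility only.
  console.warn('status stream interrupted, retrying');
};
```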
Why No Aggregation?
- Each user has independent instance with independent status
- Admin seeing “online” doesn’t mean other users’ instances are online
- User’s config changes only affect their own instance status
- Simpler architecture - single source of truth per user
Database Schema
Status Location:
- mcpServerInstances: Has status fields (per user) ✅
- mcpServerInstallations: NO status fields (removed) ❌
Error Handling and Edge Cases
Scenario: Satellite sends status for non-existent instance
Behavior:
- Backend logs error: “Instance not found for status update”
- No auto-creation (strict validation)
- Requires manual investigation and instance creation
Possible causes:
- Database instance deleted but satellite still has process running
- Timing issue between deletion and process termination
- Network delay in command delivery
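A sketch of the strict-validation behaviour described above, with hypothetical names (findInstance, updateInstanceStatus, and handleStatusEvent are placeholders):

```typescript
// Hypothetical backend handler -- names are placeholders.
declare function findInstance(instanceId: string): Promise<{ id: string } | null>;
declare function updateInstanceStatus(instanceId: string, status: string): Promise<void>;

async function handleStatusEvent(event: { instanceId: string; status: string }): Promise<void> {
  const instance = await findInstance(event.instanceId);
  if (!instance) {
    // Strict validation: log and drop -- never auto-create an instance row.
    console.error('Instance not found for status update', { instanceId: event.instanceId });
    return;
  }
  await updateInstanceStatus(instance.id, event.status);
}
```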
Scenario: Member removed while instance is online
Behavior:
- Backend deletes instance row first
- Satellite terminates process on next configure command poll
- Brief window where process runs without database record (acceptable)
- Process terminated within polling interval (2-60 seconds depending on priority)
- No data loss or security issue
- Graceful shutdown when command received
Scenario: Installation deleted with online instances
Behavior:
- CASCADE delete removes all instances immediately
- Satellite terminates all processes on next poll
- Status events ignored (instances already deleted)
- Clean database state (no orphaned instances)
- Processes cleaned up automatically
- All team members’ access revoked simultaneously
Scenario: Team member added but instance creation fails
Behavior:
- Log error, continue with other installations
- Member addition succeeds (instances can be created manually later)
- No rollback - partial instance creation is acceptable
Rationale:
- Team membership is independent of MCP instances
- Failed instance creation shouldn’t block member from joining
- Manual retry available via admin interface
Scenario: Satellite offline during member add
Behavior:
- Instance rows created with status ‘provisioning’
- Satellite picks up on next heartbeat/command poll
- Eventually spawns processes for new member
Recovery:
- Satellite comes online → polls backend
- Receives configure commands for new member
- Processes spawn as normal
- Status progresses to online
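This recovery relies on the satellite's normal polling loop rather than any special path. Below is a minimal sketch, assuming a hypothetical pollCommands endpoint and the 2-60 second priority-dependent interval mentioned earlier.

```typescript
// Hypothetical satellite-side poll loop -- names and payloads are illustrative.
declare function pollCommands(): Promise<{ type: 'configure'; installationId: string }[]>;
declare function reconcile(installationId: string): Promise<void>; // spawn/terminate processes as needed

async function runPollLoop(intervalMs: number): Promise<void> {
  // intervalMs falls somewhere in the 2-60 s range, depending on command priority.
  for (;;) {
    try {
      for (const command of await pollCommands()) {
        // A satellite that was offline simply sees the queued configure
        // commands on its first successful poll and reconciles then.
        await reconcile(command.installationId);
      }
    } catch (err) {
      console.warn('poll failed, will retry on next interval', err);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```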

