DeployStack Satellite implements automatic tool and resource discovery from MCP servers across both HTTP/SSE remote endpoints and stdio subprocess servers. This unified system provides dynamic tool and resource availability without manual configuration.
Current Implementation: Tool and resource discovery fully supports both HTTP/SSE remote MCP servers and stdio subprocess servers through a unified architecture. The UnifiedToolDiscoveryManager coordinates tool discovery across both transport types, while the UnifiedResourceDiscoveryManager handles resource discovery alongside tools. This document focuses on the internal discovery mechanism. To learn how tools and resources are exposed to MCP clients through the hierarchical router pattern, see Hierarchical Router Implementation.
The overall satellite architecture is documented in Satellite Architecture Design. MCP transport protocol details can be found in MCP Transport Protocols.

Technical Overview

Unified Discovery Architecture

Tool discovery operates through three coordinated managers that handle different transport types and merge results:
┌─────────────────────────────────────────────────────────────────────────────────┐
│                   Unified Tool & Resource Discovery Architecture                │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────┐           │
│  │              UnifiedToolDiscoveryManager                        │           │
│  │                                                                 │           │
│  │  • Coordinates both HTTP and stdio discovery                   │           │
│  │  • Merges tools from both managers                            │           │
│  │  • Wires resource discovery callbacks                         │           │
│  │  • Single interface for MCP clients                           │           │
│  └─────────────────────────────────────────────────────────────────┘           │
│                            │                  │                                 │
│                            ▼                  ▼                                 │
│  ┌───────────────────────────┐    ┌───────────────────────────┐               │
│  │ RemoteToolDiscoveryManager │    │ StdioToolDiscoveryManager │               │
│  │                           │    │                           │               │
│  │ • HTTP/SSE servers        │    │ • stdio subprocesses      │               │
│  │ • Startup discovery       │    │ • Post-spawn discovery    │               │
│  │ • SSE parsing             │    │ • JSON-RPC over stdin/out │               │
│  │ • Discovers resources too │    │ • Discovers resources too │               │
│  └───────────────────────────┘    └───────────────────────────┘               │
│                            │                  │                                 │
│                            ▼                  ▼                                 │
│  ┌─────────────────────────────────────────────────────────────────┐           │
│  │            UnifiedResourceDiscoveryManager                      │           │
│  │                                                                 │           │
│  │  • Caches resource metadata (URI, name, mimeType, _meta)      │           │
│  │  • URI namespacing with pipe separator (serverSlug|uri)        │           │
│  │  • Preserves _meta for MCP Apps support                        │           │
│  │  • Content never cached (proxied on-demand)                    │           │
│  └─────────────────────────────────────────────────────────────────┘           │
└─────────────────────────────────────────────────────────────────────────────────┘

Core Components

UnifiedToolDiscoveryManager:
  • Coordinates tool discovery across both HTTP/SSE and stdio transport types
  • Merges discovered tools from both managers into a unified cache
  • Wires resource discovery callbacks to UnifiedResourceDiscoveryManager
  • Routes discovery requests to the appropriate transport type
  • Provides single interface for MCP protocol handlers
RemoteToolDiscoveryManager:
  • Queries remote HTTP/SSE MCP servers during startup
  • Parses Server-Sent Events responses
  • Discovers resources alongside tools (same connection, extra JSON-RPC call)
  • Maintains in-memory cache with namespacing
  • Handles differential configuration updates
StdioToolDiscoveryManager:
  • Discovers tools from stdio subprocess MCP servers
  • Executes discovery after process spawn and handshake
  • Discovers resources alongside tools via resources/list and resources/templates/list
  • Tools persist in cache even when processes go dormant
  • Tracks tools by server with namespacing
UnifiedResourceDiscoveryManager:
  • Caches resource metadata (URI, name, description, mimeType, _meta) from both transport types
  • URI namespacing with pipe separator: serverSlug|originalUri
  • Preserves _meta fields for MCP Apps support
  • Resource content is never cached — always proxied on-demand via McpResourceExecutor
  • Shares server status map with UnifiedToolDiscoveryManager for per-user filtering
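To make the coordinator's merge step concrete, here is a minimal sketch of how a unified manager might combine tools from two transport-specific managers into one cache keyed by namespaced name. The interface and function names are illustrative, not the actual Satellite API.

```typescript
// Hypothetical merge step: combine tools from the HTTP and stdio
// managers into a single Map keyed by namespaced name.
interface CachedTool {
  namespacedName: string;
  transport: 'stdio' | 'http';
}

function mergeTools(
  httpTools: CachedTool[],
  stdioTools: CachedTool[]
): Map<string, CachedTool> {
  const merged = new Map<string, CachedTool>();
  // The server_slug prefix is what normally keeps names unique across
  // transports; on an unexpected collision, the later entry wins.
  for (const tool of [...httpTools, ...stdioTools]) {
    merged.set(tool.namespacedName, tool);
  }
  return merged;
}
```

MCP protocol handlers would then serve tools/list from this single map regardless of where each tool originated.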

Discovery Process by Transport Type

HTTP/SSE Discovery (Startup)

Remote HTTP/SSE servers are discovered during satellite initialization:
Startup → Config Load → HTTP Servers → Query tools/list → Cache Tools → Ready
    │           │             │               │               │          │
 Init      Enabled Only   POST Request    Parse Response   Namespace   Serve
HTTP Discovery Flow:
  1. Load enabled HTTP/SSE servers from dynamic configuration
  2. Query each server with tools/list JSON-RPC request
  3. Parse SSE or JSON responses
  4. Cache tools with namespacing (server_slug-tool_name)
  5. Expose through MCP transport endpoints
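Steps 2 and 3 above can be sketched as two small helpers: one that builds the tools/list JSON-RPC request, and one that extracts tools from either a plain JSON body or an SSE-framed body. Both helper names are assumptions for illustration.

```typescript
// Build the tools/list JSON-RPC request body sent to each HTTP server.
function buildToolsListRequest(id: number): string {
  return JSON.stringify({ jsonrpc: '2.0', id, method: 'tools/list', params: {} });
}

// Extract tools from the response. SSE responses arrive as "data: {...}"
// lines; plain JSON responses are parsed as-is.
function parseDiscoveryResponse(body: string): Array<{ name: string }> {
  const json = body.startsWith('data:')
    ? body.split('\n').find((l) => l.startsWith('data:'))!.slice(5).trim()
    : body;
  const parsed = JSON.parse(json);
  return parsed.result?.tools ?? [];
}
```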

stdio Discovery (Post-Spawn)

stdio subprocess servers are discovered after process spawning:
Process Spawn → Handshake → Running → Discover Tools → Cache → Auto-Cleanup
      │             │          │            │           │           │
  Backend Cmd   Initialize   Status     tools/list   Namespace   On Exit
stdio Discovery Flow:
  1. Process spawned via Backend command
  2. MCP handshake completes (initialize + initialized)
  3. Discovery triggered automatically after handshake
  4. Tools cached with namespacing (server_slug-tool_name)
  5. Tools persist in cache even when process terminates (for fast respawn)
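A rough sketch of step 3's wire format, assuming newline-delimited JSON-RPC over stdin/stdout (the exact framing used by ProcessManager may differ): after the handshake, discovery writes one request line and parses the response line.

```typescript
// Assumed framing: one JSON-RPC message per line over stdin/stdout.
function toolsListLine(id: number): string {
  return JSON.stringify({ jsonrpc: '2.0', id, method: 'tools/list', params: {} }) + '\n';
}

// Parse a response line from the subprocess into a tool list.
function toolsFromResponseLine(line: string): Array<{ name: string }> {
  const msg = JSON.parse(line);
  return msg.result?.tools ?? [];
}
```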

Discovery Timing Differences

HTTP/SSE (Eager):
  • Discovered at startup before serving requests
  • All HTTP tools available immediately
  • Configuration changes trigger rediscovery
stdio (Lazy):
  • Discovered after process spawn completes
  • Tools become available post-handshake
  • Tools persist even when process goes dormant (enables fast respawn)

Tool Caching Strategy

Unified Cache Design

Both transport types use identical caching and namespacing:
interface UnifiedCachedTool {
  serverName: string;           // Installation name
  originalName: string;         // Tool name from server
  namespacedName: string;       // server_slug-tool_name
  description: string;          // Tool description
  inputSchema: object;          // JSON Schema
  transport: 'stdio' | 'http';  // Transport type for routing
  discoveredAt?: Date;          // Discovery timestamp (HTTP only)
  _meta?: Record<string, unknown>;  // Preserved metadata for MCP Apps support
}
Cache Characteristics:
  • Unified Namespace: Same format across both transport types
  • Memory Storage: No persistent storage or database
  • Persistent Caching: stdio tools remain cached even when processes go dormant
  • Conflict Prevention: server_slug ensures unique names
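The persistence rule above (tools survive dormancy, cleared only on uninstall) can be sketched with a minimal in-memory cache. The class and method names are illustrative only.

```typescript
// Illustrative cache: dormancy is a no-op, uninstall evicts.
class ToolCache {
  private tools = new Map<string, { serverName: string }>();

  add(namespacedName: string, serverName: string): void {
    this.tools.set(namespacedName, { serverName });
  }

  // Process going dormant: intentionally does NOT touch the cache,
  // so respawns skip rediscovery.
  onProcessDormant(_serverName: string): void {}

  // Explicit uninstall: drop every tool belonging to that server.
  onUninstall(serverName: string): void {
    for (const [key, value] of this.tools) {
      if (value.serverName === serverName) this.tools.delete(key);
    }
  }

  has(namespacedName: string): boolean {
    return this.tools.has(namespacedName);
  }
}
```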

Namespacing Strategy

Both HTTP and stdio tools use identical namespacing:
HTTP Tool Example:
  Server Slug: "context7"
  Original: "resolve-library-id"
  Namespaced: "context7-resolve-library-id"

stdio Tool Example:
  Server Slug: "filesystem" (extracted from "filesystem-john-abc123")
  Original: "read_file"
  Namespaced: "filesystem-read_file"
Namespacing Rules:
  • Format: {server_slug}-{originalToolName}
  • HTTP: Uses server_slug from configuration
  • stdio: Extracts slug from installation name
  • Routing: Internal server names used for team isolation
  • User Display: Friendly namespaced names shown to clients
For team-based server resolution, see Team Isolation Implementation.
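The rules above can be sketched as two hypothetical helpers. The slug extraction assumes installation names follow a `<slug>-<user>-<id>` shape (e.g. "filesystem-john-abc123"); the real parsing logic may differ.

```typescript
// Assumed: the slug is the first hyphen-delimited segment of the
// installation name. Slugs containing hyphens would need smarter parsing.
function extractSlug(installationName: string): string {
  return installationName.split('-')[0];
}

// Namespacing format from the rules above: {server_slug}-{originalToolName}
function namespaceTool(serverSlug: string, toolName: string): string {
  return `${serverSlug}-${toolName}`;
}
```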

Resource Discovery

Resources are discovered alongside tools during the same connection to each MCP server. This avoids extra round-trips and ensures resource metadata is always in sync with tool metadata.

Discovery Flow

Both RemoteToolDiscoveryManager and StdioToolDiscoveryManager discover resources after tool discovery completes:
  1. Tool discovery: tools/list JSON-RPC call (existing flow)
  2. Resource discovery: resources/list JSON-RPC call (same connection)
  3. Template discovery: resources/templates/list JSON-RPC call (same connection)
  4. Callback: Results passed to UnifiedResourceDiscoveryManager for caching
Resource discovery is non-fatal — if a server doesn’t support resources (most don’t), discovery silently continues without error.
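The non-fatal pattern looks roughly like this, with a synchronous stand-in for what is really an async JSON-RPC call:

```typescript
// A server that doesn't implement resources/list simply contributes an
// empty list instead of failing discovery. listResources is a stand-in
// for the actual resources/list call.
function discoverResources(
  listResources: () => Array<{ uri: string }>
): Array<{ uri: string }> {
  try {
    return listResources();
  } catch {
    // Most servers don't support resources; treat failure as "none".
    return [];
  }
}
```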

UnifiedResourceDiscoveryManager

Caches resource metadata with URI namespacing:
interface UnifiedCachedResource {
  serverName: string;           // Installation name
  originalUri: string;          // Original URI from server
  namespacedUri: string;        // serverSlug|originalUri
  name: string;                 // Resource name
  description?: string;         // Resource description
  mimeType?: string;            // Content type
  annotations?: object;         // Resource annotations
  _meta?: Record<string, unknown>;  // Preserved for MCP Apps
  transport: 'stdio' | 'http' | 'sse';
  serverSlug: string;           // Server identifier
  discoveredAt: Date;           // Discovery timestamp
}

URI Namespacing

Resources use a pipe separator (|) rather than the hyphen used for tool names, because resource URIs routinely contain hyphens and colons that would make a hyphen-based namespace ambiguous:
Tool namespacing:     {server_slug}-{toolName}       → "github-create_issue"
Resource namespacing: {serverSlug}|{originalUri}     → "excalidraw|ui://excalidraw/mcp-app.html"
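A minimal sketch of the pipe-based scheme, splitting on the first pipe so any pipes inside the original URI survive the round trip (helper names are illustrative):

```typescript
// Prefix a resource URI with its server slug, pipe-separated.
function namespaceUri(serverSlug: string, uri: string): string {
  return `${serverSlug}|${uri}`;
}

// Reverse the operation by splitting on the FIRST pipe only.
function splitNamespacedUri(namespaced: string): { serverSlug: string; uri: string } {
  const idx = namespaced.indexOf('|');
  return { serverSlug: namespaced.slice(0, idx), uri: namespaced.slice(idx + 1) };
}
```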

Content Proxying

Resource content is never cached. When read_mcp_resource is called, McpResourceExecutor proxies the request on-demand to the origin MCP server:
  • stdio servers: Sends resources/read JSON-RPC to the subprocess
  • HTTP/SSE servers: Sends resources/read via MCP SDK client (with OAuth token injection if needed)

_meta Preservation

The _meta field is preserved through the entire discovery and response chain using the pattern:
...(item._meta ? { _meta: item._meta } : {})
This is applied at every stage: discovery caching, resource listing, and tool search results. The _meta.ui.resourceUri field is rewritten to use the namespaced URI format so MCP Apps clients can read resources through the hierarchical router.
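Combining the spread pattern with the resourceUri rewrite might look like the following. The nested _meta.ui.resourceUri shape is taken from the text above; the helper itself is an illustrative assumption.

```typescript
type Meta = Record<string, unknown> & { ui?: { resourceUri?: string } };

// Preserve _meta only when present, and rewrite _meta.ui.resourceUri
// to the namespaced serverSlug|uri format.
function withNamespacedMeta(serverSlug: string, item: { name: string; _meta?: Meta }) {
  if (!item._meta) return { name: item.name }; // omit _meta entirely when absent
  const meta: Meta = { ...item._meta };
  if (meta.ui?.resourceUri) {
    meta.ui = { ...meta.ui, resourceUri: `${serverSlug}|${meta.ui.resourceUri}` };
  }
  return { name: item.name, _meta: meta };
}
```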

Configuration Management

Dynamic Configuration Updates

The unified manager handles configuration changes intelligently.
Differential Updates:
  • Only discovers tools for added/modified servers
  • Preserves tools for unchanged servers
  • Removes tools for deleted servers
  • Minimizes network overhead and latency
Configuration Sources:
  • HTTP/SSE: Static configuration from Backend polling
  • stdio: Dynamic spawning via Backend commands
  • Both: Support three-tier configuration system
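The differential-update idea above reduces to a config diff. A sketch, with an illustrative config shape:

```typescript
interface ServerConfig { slug: string; url: string }

// Compare old and new configurations: rediscover added + modified
// servers, evict tools for removed ones, leave the rest untouched.
function diffConfigs(oldCfg: ServerConfig[], newCfg: ServerConfig[]) {
  const oldBySlug = new Map(oldCfg.map((s) => [s.slug, s]));
  const newBySlug = new Map(newCfg.map((s) => [s.slug, s]));
  const added = newCfg.filter((s) => !oldBySlug.has(s.slug));
  const removed = oldCfg.filter((s) => !newBySlug.has(s.slug));
  const modified = newCfg.filter(
    (s) => oldBySlug.has(s.slug) && oldBySlug.get(s.slug)!.url !== s.url
  );
  return { added, removed, modified };
}
```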

Tool Execution Flow

Transport-Aware Routing

Tool execution routes to the correct transport based on discovery:
MCP Client → tools/call → Parse Name → Lookup Tool → Route by Transport
    │            │            │            │              │
  Request    Namespaced   Extract Slug   Get Cache    stdio/HTTP/SSE
HTTP Transport:
  • Routes to remote HTTP/SSE endpoint
  • Uses HTTP Proxy Manager
  • Handles SSE streaming responses
stdio Transport:
  • Routes to local subprocess
  • Uses ProcessManager JSON-RPC
  • Communicates over stdin/stdout
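The lookup-then-route step can be sketched as a cache lookup that returns the recorded transport; the caller then dispatches to ProcessManager or the HTTP proxy. The function signature is an assumption.

```typescript
type Transport = 'stdio' | 'http';

// Resolve which transport should handle a tools/call request based on
// how the namespaced tool was discovered.
function routeCall(
  cache: Map<string, { transport: Transport }>,
  namespacedName: string
): Transport {
  const tool = cache.get(namespacedName);
  if (!tool) throw new Error(`Unknown tool: ${namespacedName}`);
  return tool.transport;
}
```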

Error Handling & Recovery

Discovery Failures

Both managers implement graceful failure handling.
HTTP Discovery:
  • Server unreachable → Skip and continue
  • Parse errors → Log and skip malformed tools
  • Timeout → Mark server as failed
stdio Discovery:
  • Process not running → Error with status check
  • No tools returned → Empty array (valid response)
  • Communication failure → Process restart logic

Automatic Cleanup

stdio tools persist in cache for optimal performance.
Process Lifecycle:
  • Spawn: Tools discovered after handshake
  • Running: Tools available for execution
  • Idle/Dormant: Process terminated, tools remain cached for fast respawn
  • Respawn: Process restarts automatically, tools already available (no rediscovery)
  • Uninstall: Tools cleared only when server is explicitly removed
Idle Process Management: stdio processes that remain inactive for the configured idle timeout (default: 3 minutes) are automatically terminated to save memory. However, tools remain cached so when a client requests them, the process respawns instantly without needing to rediscover tools. This reduces respawn time from 1-3 seconds to 1-2 seconds. See Idle Process Management for details.

Tool Metadata Collection

After tool discovery completes, the satellite emits tool metadata to the backend for storage and analysis.

Event Emission (Post-Discovery)

Following successful tool discovery (both HTTP/SSE and stdio), the satellite:
  1. Calculates token consumption using the token-counter.ts utility
  2. Builds event payload with tool metadata including per-tool token counts
  3. Emits mcp.tools.discovered event to backend via EventBus
  4. Backend stores metadata in mcpToolMetadata table for team visibility
Event Payload Structure:
{
  installation_id: string;
  installation_name: string;
  team_id: string;
  server_slug: string;
  tool_count: number;           // Total tools discovered
  total_tokens: number;         // Sum of all tool token counts
  tools: Array<{
    tool_name: string;
    description: string;
    input_schema: Record<string, unknown>;
    token_count: number;        // Tokens for this specific tool
  }>;
  discovered_at: string;        // ISO 8601 timestamp
}
Integration Points:
  • StdioToolDiscoveryManager: Emits after stdio tool discovery completes
  • RemoteToolDiscoveryManager: Emits after HTTP/SSE tool discovery completes
  • EventBus: Batches events every 3 seconds for efficient transmission
  • Backend handler: Stores tools with delete-then-insert strategy
Token Calculation: The satellite uses estimateMcpServerTokens() from token-counter.ts to calculate:
  • Per-tool tokens: name + description + JSON.stringify(inputSchema)
  • Total server tokens: Sum of all tool tokens
  • Uses gpt-tokenizer library (provider-agnostic)
Purpose:
  • Store tool metadata in backend database for team visibility
  • Calculate hierarchical router token savings (traditional vs 4-meta-tool approach)
  • Enable frontend tool catalog display with token consumption metrics
  • Provide analytics on MCP server complexity and context window usage
For event payload structure and event batching details, see Event Emission - mcp.tools.discovered.

Development Considerations

Debugging Support

The debug endpoint shows tools from both transport types:
curl http://localhost:3001/api/status/debug
Debug Information:
  • Tools grouped by transport type (HTTP/stdio)
  • Tools grouped by server name
  • Discovery statistics for both managers
  • Process status for stdio servers
Security Notice: The debug endpoint exposes detailed system information. Disable in production with DEPLOYSTACK_STATUS_SHOW_MCP_DEBUG_ROUTE=false.

Performance Characteristics

HTTP/SSE Performance

  • Discovery Time: 2-5 seconds at startup
  • Memory: ~1KB per tool
  • Overhead: Single HTTP request per server
  • Caching: Persistent until configuration change

stdio Performance

  • Discovery Time: 1-2 seconds post-spawn (first time only)
  • Memory: ~1KB per tool (persists even when process dormant)
  • Overhead: Single JSON-RPC request per process (cached for respawns)
  • Caching: Persistent - tools remain even when process goes dormant

Scalability

Combined Limits:
  • No hard server limit for either transport
  • Memory-bound by total tool count
  • HTTP: Limited by network connection pool
  • stdio: Limited by system process limits
Implementation Status: Tool discovery is fully operational for both HTTP/SSE remote servers and stdio subprocess servers. The unified manager successfully coordinates discovery, merges tools, and routes execution requests to the appropriate transport.

Future Enhancements

Dynamic Capabilities

Planned Features:
  • Runtime refresh for HTTP servers without restart
  • Configuration hot-reload for both transport types
  • Health monitoring with automatic server detection
  • Tool versioning support

Advanced Features

Under Consideration:
  • Load balancing across multiple server instances
  • Circuit breakers for automatic failure recovery
  • Detailed usage and performance analytics
  • Cache persistence for faster startup (HTTP only)

Status Integration

Tool discovery integrates with the status tracking system to filter tools and enable automatic recovery. Discovery managers call status callbacks on success/failure to update instance status in real-time (per-user).
Per-User Status: Each user’s instance has independent status tracking. Tool filtering is based on the authenticated user’s OWN instance status, not other team members’ statuses.
See Status Tracking - Tool Filtering for complete details on per-user status-based tool filtering and execution blocking.

Recovery System

When offline servers recover, tool discovery is automatically triggered. The satellite preserves existing tools during re-discovery attempts to prevent tool loss on failure. See Recovery System - Recovery Detection for complete recovery logic, retry strategy, and tool preservation implementation.

Tool Metadata Events

Discovered tools are emitted to the backend with token count estimates.
Event Structure:
eventBus.emit('mcp.tools.discovered', {
  installation_id: string,
  team_id: string,
  tools: [{
    tool_path: string,
    name: string,
    description?: string,
    inputSchema: unknown,
    token_count: number  // Estimated token usage
  }]
});
Token Calculation:
  • Name + description + input schema serialized
  • Estimated using character count / 4 (approximate tokens)
  • Used for analytics and optimization
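The character-count approximation above (roughly 4 characters per token) reduces to simple arithmetic. Note this is the fallback estimate described here; the Tool Metadata Collection section describes a tokenizer-based calculation, so treat this sketch as illustrating the approximation only.

```typescript
// Rough token estimate: serialized length / 4, rounded up.
function estimateToolTokens(tool: {
  name: string;
  description?: string;
  inputSchema: unknown;
}): number {
  const serialized =
    tool.name + (tool.description ?? '') + JSON.stringify(tool.inputSchema);
  return Math.ceil(serialized.length / 4);
}
```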
See Event Emission for complete event types.

Request Logging

Tool execution is logged with full request/response data for debugging.
Logged Information:
  • Tool name and input parameters
  • Full MCP server response (captured)
  • Response time in milliseconds
  • Success/failure status and error messages
  • User attribution (who called the tool)
Privacy Control: Request logging can be disabled per-instance via settings.request_logging_enabled = false in the instance configuration. See Log Capture for buffering and storage details.

The unified tool discovery implementation provides a solid foundation for multi-transport MCP server integration while maintaining simplicity and reliability for development and production use.