MCPOmni Connect is an agent execution runtime that connects to multiple MCP servers via stdio, SSE, or streamable HTTP. It supports chat, autonomous agents, and planner-based orchestration, treating each MCP server as a tool agent to enable dynamic multi-agent workflows across LLM systems, and it can be embedded in FastAPI or other backend services.
MCPOmni Connect is a powerful, intelligent AI agent system and universal command-line interface (CLI) that goes beyond being just a gateway to the Model Context Protocol (MCP) ecosystem. It acts as an autonomous agent through its ReAct Agent Mode and Orchestrator Mode, capable of independent reasoning, decision-making, and complex task execution. It seamlessly integrates multiple MCP servers, AI models, and various transport protocols into a unified, intelligent interface that can operate autonomously or interactively.
🚀 New User? Start with the ⚙️ Configuration Guide to understand the difference between config files, transport types, and OAuth behavior. Then check out the 🧪 Testing section to get started quickly.
```
MCPOmni Connect
├── Transport Layer
│   ├── Stdio Transport
│   ├── SSE Transport
│   ├── Streamable HTTP Transport
│   └── Docker Integration
├── Session Management
│   ├── Multi-Server Orchestration
│   └── Connection Lifecycle Management
├── Tool Management
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   └── Tool Execution Engine
└── AI Integration
    ├── LLM Processing
    ├── Context Management
    └── Response Generation
```
```bash
# Install with uv (recommended)
uv add mcpomni-connect

# Or using pip
pip install mcpomni-connect

# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env

# Optional: Configure Redis connection
echo "REDIS_HOST=localhost" >> .env
echo "REDIS_PORT=6379" >> .env
echo "REDIS_DB=0" >> .env

# Configure your servers in servers_config.json
```
MCPOmni Connect uses two separate configuration files for different purposes:
#### 1. `.env` File - Environment Variables
Contains sensitive information like API keys and optional settings:
```bash
# Required: Your LLM provider API key
LLM_API_KEY=your_api_key_here

# Optional: Redis configuration (for persistent memory)
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=0
```
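MCPOmni Connect reads these variables itself on startup. If you embed the library in your own service and want to load a `.env` file without extra dependencies, a minimal stdlib sketch could look like this (`load_env` is a hypothetical helper, not part of the package; the `python-dotenv` package is the more common choice):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#' comments ignored."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Don't overwrite variables already set in the real environment
        os.environ.setdefault(key.strip(), value.strip())

# Usage: load_env(), then read os.environ["LLM_API_KEY"]
```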
#### 2. `servers_config.json` - Server & Agent Configuration
Contains application settings, LLM configuration, and MCP server connections:
```json
{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 1000,
    "total_tokens_limit": 100000
  },
  "LLM": {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "temperature": 0.5,
    "max_tokens": 5000,
    "top_p": 0.7
  },
  "mcpServers": {
    "your-server-name": {
      "transport_type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-package"]
    }
  }
}
```
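Before launching, you can sanity-check a config file's shape. A hedged sketch (the `check_config` helper is illustrative, not part of MCPOmni Connect; the key requirements are inferred from the examples in this README):

```python
import json

REQUIRED_SECTIONS = {"AgentConfig", "LLM", "mcpServers"}

def check_config(path: str = "servers_config.json") -> list:
    """Return a list of obvious problems with a servers_config.json file."""
    with open(path) as f:
        cfg = json.load(f)
    problems = [f"missing section: {key}" for key in sorted(REQUIRED_SECTIONS - cfg.keys())]
    for name, server in cfg.get("mcpServers", {}).items():
        transport = server.get("transport_type")
        if transport == "stdio" and "command" not in server:
            problems.append(f"{name}: stdio transport requires a 'command'")
        elif transport in ("sse", "streamable_http") and "url" not in server:
            problems.append(f"{name}: {transport} transport requires a 'url'")
    return problems
```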
MCPOmni Connect supports multiple ways to connect to MCP servers:
Use when: Connecting to local MCP servers that run as separate processes
```json
{
  "server-name": {
    "transport_type": "stdio",
    "command": "uvx",
    "args": ["mcp-server-package"]
  }
}
```
Use when: Connecting to HTTP-based MCP servers using Server-Sent Events
```json
{
  "server-name": {
    "transport_type": "sse",
    "url": "http://your-server.com:4010/sse",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60,
    "sse_read_timeout": 120
  }
}
```
Use when: Connecting to HTTP-based MCP servers with or without OAuth
Without OAuth (Bearer Token):
```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "url": "http://your-server.com:4010/mcp",
    "headers": {
      "Authorization": "Bearer your-token"
    },
    "timeout": 60
  }
}
```
With OAuth:
```json
{
  "server-name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server.com:4010/mcp"
  }
}
```
**Important:** When using OAuth authentication, MCPOmni Connect automatically starts an OAuth callback server:

```
🖥️ Started callback server on http://localhost:3000
```

- The callback server address `http://localhost:3000` is hardcoded and cannot be changed
- It starts only when a server configuration includes `"auth": {"method": "oauth"}`
- If your server does not use OAuth, remove the `"auth"` section from your server configuration and use `"headers"` with `"Authorization": "Bearer token"` instead

**Possible Causes & Solutions:**
**Wrong Transport Type**
- Problem: Your server expects `stdio` but you configured `streamable_http`
- Solution: Check your server's documentation for the correct transport type

**OAuth Configuration Mismatch**
- Problem: Your server doesn't support OAuth but you have `"auth": {"method": "oauth"}`
- Solution: Remove the `"auth"` section entirely and use headers instead:

```json
"headers": {
  "Authorization": "Bearer your-token"
}
```

**Server Not Running**
- Problem: The MCP server at the specified URL is not running
- Solution: Start your MCP server first, then connect with MCPOmni Connect

**Wrong URL or Port**
- Problem: The URL in your config doesn't match where your server is running
- Solution: Verify the server's actual address and port
Yes, this is completely normal whenever you use `"auth": {"method": "oauth"}` in any server configuration. If you don't want the OAuth callback server to start, remove `"auth": {"method": "oauth"}` from all server configurations.

```json
{
  "mcpServers": {
    "local-tools": {
      "transport_type": "stdio",
      "command": "uvx",
      "args": ["mcp-server-tools"]
    }
  }
}
```
```json
{
  "mcpServers": {
    "remote-api": {
      "transport_type": "streamable_http",
      "url": "http://api.example.com:8080/mcp",
      "headers": {
        "Authorization": "Bearer abc123token"
      }
    }
  }
}
```
```json
{
  "mcpServers": {
    "oauth-server": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://oauth-server.com:8080/mcp"
    }
  }
}
```
```bash
# Start the CLI (ensure your API key is exported or set in .env)
mcpomni_connect
```
```bash
# Run all tests with verbose output
pytest tests/ -v

# Run specific test file
pytest tests/test_specific_file.py -v

# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
```
```
tests/
├── unit/          # Unit tests for individual components
```
**Installation**

```bash
# Clone the repository
git clone https://github.com/Abiorh001/mcp_omni_connect.git
cd mcp_omni_connect

# Create and activate virtual environment
uv venv
source .venv/bin/activate

# Install dependencies
uv sync
```

**Configuration**

```bash
# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env

# Configure your servers in servers_config.json
```

**Start Client**

```bash
# Start the client
uv run run.py
# or
python run.py
```
You can run the basic CLI example to interact with MCPOmni Connect directly from the terminal.
Using uv (recommended):

```bash
uv run examples/basic.py
```

Or using Python directly:

```bash
python examples/basic.py
```
You can also run MCPOmni Connect as a FastAPI server for web or API-based interaction.
Using uv:

```bash
uv run examples/fast_api_iml.py
```

Or using Python directly:

```bash
python examples/fast_api_iml.py
```
A simple web client is provided in `examples/index.html`. The FastAPI server listens on `http://localhost:8000` by default and provides a chat interface; open `examples/index.html` in a browser to use it.

**Endpoint:** `/chat/agent_chat` (POST)

Request body:

```json
{
  "query": "Your question here",
  "chat_id": "unique-chat-id"
}
```
Response body:

```json
{
  "message_id": "...",
  "usid": "...",
  "role": "assistant",
  "content": "Response text",
  "meta": [],
  "likeordislike": null,
  "time": "2024-06-10 12:34:56"
}
```
MCPOmni Connect is not just a CLI tool; it is also a Python library that you can use to build your own backend services, custom clients, or API servers. You can import it directly in your Python project; see `examples/fast_api_iml.py` for a full-featured example.
Minimal Example:
```python
from mcpomni_connect.client import Configuration, MCPClient
from mcpomni_connect.llm import LLMConnection
from mcpomni_connect.agents.react_agent import ReactAgent
from mcpomni_connect.agents.orchestrator import OrchestratorAgent

config = Configuration()
client = MCPClient(config)
llm_connection = LLMConnection(config)

# Choose agent mode
agent = ReactAgent(...)  # or OrchestratorAgent(...)

# Use in your API endpoint
response = await agent.run(
    query="Your user query",
    sessions=client.sessions,
    llm_connection=llm_connection,
    # ...other arguments...
)
```
You can easily expose your MCP client as an API using FastAPI; see the FastAPI example in `examples/fast_api_iml.py` for a complete, developer-ready implementation.
A complete `servers_config.json` example combining OAuth, SSE, and streamable HTTP servers:

```json
{
  "AgentConfig": {
    "tool_call_timeout": 30,
    "max_steps": 15,
    "request_limit": 1000,
    "total_tokens_limit": 100000
  },
  "LLM": {
    "provider": "openai",
    "model": "gpt-4",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 30000,
    "top_p": 0
  },
  "mcpServers": {
    "ev_assistant": {
      "transport_type": "streamable_http",
      "auth": {
        "method": "oauth"
      },
      "url": "http://localhost:8000/mcp"
    },
    "sse-server": {
      "transport_type": "sse",
      "url": "http://localhost:3000/sse",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    },
    "streamable_http-server": {
      "transport_type": "streamable_http",
      "url": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer token"
      },
      "timeout": 60,
      "sse_read_timeout": 120
    }
  }
}
```
**Anthropic:**

```json
{
  "LLM": {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-20241022",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}
```

**Groq:**

```json
{
  "LLM": {
    "provider": "groq",
    "model": "llama-3.1-8b-instant",
    "temperature": 0.5,
    "max_tokens": 2000,
    "max_context_length": 8000,
    "top_p": 0.9
  }
}
```

**Azure OpenAI:**

```json
{
  "LLM": {
    "provider": "azureopenai",
    "model": "gpt-4",
    "temperature": 0.7,
    "max_tokens": 2000,
    "max_context_length": 100000,
    "top_p": 0.95,
    "azure_endpoint": "https://your-resource.openai.azure.com",
    "azure_api_version": "2024-02-01",
    "azure_deployment": "your-deployment-name"
  }
}
```

**Ollama:**

```json
{
  "LLM": {
    "provider": "ollama",
    "model": "llama3.1:8b",
    "temperature": 0.5,
    "max_tokens": 5000,
    "max_context_length": 100000,
    "top_p": 0.7,
    "ollama_host": "http://localhost:11434"
  }
}
```

**OpenRouter:**

```json
{
  "LLM": {
    "provider": "openrouter",
    "model": "anthropic/claude-3.5-sonnet",
    "temperature": 0.7,
    "max_tokens": 4000,
    "max_context_length": 200000,
    "top_p": 0.95
  }
}
```
MCPOmni Connect supports multiple authentication methods for secure server connections:
**OAuth:**

```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://your-server/mcp"
  }
}
```

**Bearer Token:**

```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "Authorization": "Bearer your-token-here"
    },
    "url": "http://your-server/mcp"
  }
}
```

**Custom Headers:**

```json
{
  "server_name": {
    "transport_type": "streamable_http",
    "headers": {
      "X-Custom-Header": "value",
      "Authorization": "Custom-Auth-Scheme token"
    },
    "url": "http://your-server/mcp"
  }
}
```
MCPOmni Connect supports dynamic server configuration through commands:
```bash
# Add one or more servers from a configuration file
/add_servers:path/to/config.json
```
The configuration file can include multiple servers with different authentication methods:
```json
{
  "new-server": {
    "transport_type": "streamable_http",
    "auth": {
      "method": "oauth"
    },
    "url": "http://localhost:8000/mcp"
  },
  "another-server": {
    "transport_type": "sse",
    "headers": {
      "Authorization": "Bearer token"
    },
    "url": "http://localhost:3000/sse"
  }
}
```
```bash
# Remove a server by its name
/remove_server:server_name
```
- `/tools` - List all available tools across servers
- `/prompts` - View available prompts
- `/prompt:<name>/<args>` - Execute a prompt with arguments
- `/resources` - List available resources
- `/resource:<uri>` - Access and analyze a resource
- `/debug` - Toggle debug mode
- `/refresh` - Update server capabilities
- `/memory` - Toggle Redis memory persistence (on/off)
- `/mode:auto` - Switch to autonomous agentic mode
- `/mode:chat` - Switch back to interactive chat mode
- `/add_servers:<config.json>` - Add one or more servers from a configuration file
- `/remove_server:<server_name>` - Remove a server by its name

```bash
# Enable Redis memory persistence
/memory

# Check memory status
Memory persistence is now ENABLED using Redis

# Disable memory persistence
/memory

# Check memory status
Memory persistence is now DISABLED
```
```bash
# Switch to autonomous mode
/mode:auto

# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.

# Switch back to chat mode
/mode:chat

# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
```
- **Chat Mode** (default)
- **Autonomous Mode**
- **Orchestrator Mode**
```bash
# List all available prompts
/prompts

# Basic prompt usage
/prompt:weather/location=tokyo

# Prompt with multiple arguments (depends on the server prompt's argument requirements)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25

# JSON format for complex arguments
/prompt:analyze-data/{
  "dataset": "sales_2024",
  "metrics": ["revenue", "growth"],
  "filters": {
    "region": "europe",
    "period": "q1"
  }
}

# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
  "price_range": {"min": 500, "max": 1000},
  "features": ["5G", "wireless-charging"],
  "markets": ["US", "EU", "Asia"]
}
```
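The two basic forms (key=value pairs and a JSON body) can be sketched as a small parser. This is an illustration of the command syntax only, not MCPOmni Connect's actual parser (`parse_prompt_command` is a hypothetical name, and the mixed nested form shown last would need extra handling):

```python
import json

def parse_prompt_command(command: str):
    """Parse '/prompt:<name>/<k1>=<v1>/...' or '/prompt:<name>/{json}' into (name, args)."""
    body = command[len("/prompt:"):]
    name, sep, rest = body.partition("/")
    if not sep:
        return name, {}
    if rest.lstrip().startswith("{"):
        # Complex arguments supplied as a JSON object
        return name, json.loads(rest)
    args = {}
    for pair in rest.split("/"):
        key, _, value = pair.partition("=")
        args[key] = value
    return name, args
```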
The client intelligently handles each of these argument formats.
MCPOmni Connect now provides advanced controls and visibility over your API usage and resource limits.
Use the `/api_stats` command to see your current usage:

```bash
/api_stats
```
This displays your current usage against the configured limits.
You can set limits to automatically stop execution when thresholds are reached:
You can configure these in your `servers_config.json` under the `AgentConfig` section:
"AgentConfig": {
"tool_call_timeout": 30, // Tool call timeout in seconds
"max_steps": 15, // Max number of steps before termination
"request_limit": 1000, // Max number of requests allowed
"total_tokens_limit": 100000 // Max number of tokens allowed
}
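Conceptually, limit enforcement behaves like a budget tracker that halts execution once a threshold is crossed. The following is an illustrative sketch only, not MCPOmni Connect's actual implementation:

```python
class UsageLimiter:
    """Track request and token budgets; raise once a configured limit is exceeded."""

    def __init__(self, request_limit: int, total_tokens_limit: int):
        self.request_limit = request_limit
        self.total_tokens_limit = total_tokens_limit
        self.requests = 0
        self.tokens = 0

    def record(self, tokens_used: int) -> None:
        """Call once per LLM/tool request with the tokens it consumed."""
        self.requests += 1
        self.tokens += tokens_used
        if self.requests > self.request_limit:
            raise RuntimeError("request limit exceeded")
        if self.tokens > self.total_tokens_limit:
            raise RuntimeError("total tokens limit exceeded")
```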
```bash
# Check your current API usage and limits
/api_stats

# Set a new request limit
# (Edit servers_config.json; CLI commands for this may come in a future release)
```
```
# Example of automatic tool chaining (when the tools are available on connected servers)
User: "Find charging stations near Silicon Valley and check their current status"

# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
```
```
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"

# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
```
📖 For comprehensive configuration help, see the ⚙️ Configuration Guide section above, which covers:
- Config file differences (`.env` vs `servers_config.json`)
- Transport type selection and authentication
- OAuth server behavior explanation
- Common connection issues and solutions
**Connection Issues**

```
Error: Could not connect to MCP server
```

- Verify the server entry in `servers_config.json` (transport type, URL, command) and make sure the server is running

**API Key Issues**

```
Error: Invalid API key
```

- Check that `LLM_API_KEY` is set correctly in your `.env` file

**Redis Connection**

```
Error: Could not connect to Redis
```

- Confirm the Redis settings in `.env` and that the Redis server is reachable

**Tool Execution Failures**

```
Error: Tool execution failed
```

- Enable debug mode for detailed logging:

```
/debug
```
For additional support, please open an issue on the GitHub repository.
We welcome contributions! See our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ by the MCPOmni Connect Team
{ "mcpServers": { "mcpomniconnect": { "command": "uvx", "args": [ "mcp-server-package" ] } } }
Related projects feature coming soon
Will recommend related projects based on sub-categories