Advanced Claude streaming interface with interleaved thinking, dynamic tool discovery, and MCP integration. Watch Claude think through problems in real-time while executing tools with live progress updates.
A Python demonstration project showcasing Claude's advanced capabilities through interleaved thinking, fine-grained tool streaming, and dynamic tool discovery with MCP (Model Context Protocol) integration.
Created by Martin Bowling • GitHub • Twitter/X
ThinkChain demonstrates the power of Claude's streaming interface with advanced features like:
The system combines local Python tools with MCP servers to create a unified, extensible tool ecosystem that works seamlessly with Claude's streaming capabilities.
**Option 1: `uv run` (Recommended)**

```bash
# Clone the repository
git clone https://github.com/martinbowling/ThinkChain.git
cd ThinkChain

# Set up your API key
echo "ANTHROPIC_API_KEY=your_api_key_here" > .env

# Run immediately - uv handles all dependencies automatically!
uv run thinkchain.py      # Enhanced UI with rich formatting
uv run thinkchain_cli.py  # Minimal CLI version
uv run run.py             # Smart launcher (auto-detects best UI)
```
**Option 2: Traditional installation**

```bash
# Clone and set up
git clone https://github.com/martinbowling/ThinkChain.git
cd ThinkChain

# Install dependencies
uv pip install -r requirements.txt
# or: pip install -r requirements.txt

# Set up your API key
echo "ANTHROPIC_API_KEY=your_api_key_here" > .env

# Run the application
python run.py            # Smart launcher
python thinkchain.py     # Enhanced UI
python thinkchain_cli.py # CLI version
```
The core innovation of ThinkChain is how tool execution results are injected back into Claude's thinking stream, so every tool call informs the rest of the response. Supporting features include:

- Drop-in tools in the `/tools` directory, discovered automatically
- A `/refresh` command to reload tools during development
- `uv run` support - no virtual environments or dependency installation needed

The key technical innovation is the tool result injection mechanism:
```python
# Tool results are injected back into Claude's thinking process.
# Sketch of the streaming loop; the beta API surface may differ slightly.
from anthropic import AsyncAnthropic

client = AsyncAnthropic()

async def stream_once(messages, tools):
    # Start streaming with interleaved thinking enabled
    async with client.beta.messages.stream(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        messages=messages,
        tools=tools,
        betas=["interleaved-thinking-2025-05-14", "fine-grained-tool-streaming-2025-05-14"],
        thinking={"type": "enabled", "budget_tokens": 1024},
    ) as stream:
        async for event in stream:
            if event.type == "tool_use":
                # Execute the tool and inject the result back into the stream
                result = await execute_tool(event.name, event.input)
                # This result becomes part of Claude's thinking context
                # for the remainder of the response
                yield {"type": "tool_result", "content": result}
```
This creates a feedback loop: each tool result becomes part of Claude's ongoing thinking and shapes the remainder of the response.
```text
Local Tools (/tools/*.py)  → Validation → Registry
                                             ↓
MCP Servers (config.json) → Connection → Registry → Unified Tool List → Claude API
```
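The merge step above can be sketched as follows. `build_registry`, `to_api_schema`, and the dict shapes are illustrative stand-ins, not ThinkChain's actual internals:

```python
# Hypothetical sketch of the unified-registry idea: local tool classes and
# MCP-provided descriptors are normalized into one tool list for the Claude API.

def to_api_schema(name, description, input_schema):
    # Claude's `tools` parameter expects exactly these three fields per tool.
    return {"name": name, "description": description, "input_schema": input_schema}

def build_registry(local_tools, mcp_tools):
    registry = {}
    for tool in local_tools:  # dicts describing validated /tools classes
        registry[tool["name"]] = to_api_schema(
            tool["name"], tool["description"], tool["input_schema"])
    for tool in mcp_tools:    # descriptors listed by connected MCP servers
        # Local tools win on a name clash; MCP tools fill in the rest.
        registry.setdefault(tool["name"], to_api_schema(
            tool["name"], tool["description"], tool["input_schema"]))
    return list(registry.values())
```

Keying by name gives a single flat list to send with every API call, regardless of where each tool came from.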
Each tool must implement the `BaseTool` interface:
```python
from typing import Any, Dict

class BaseTool:
    @property
    def name(self) -> str: ...

    @property
    def description(self) -> str: ...

    @property
    def input_schema(self) -> Dict[str, Any]: ...

    def execute(self, **kwargs) -> str: ...
```
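Discovery might validate candidate classes against this interface roughly like so. `is_valid_tool` is a hypothetical helper, not the project's actual code:

```python
# Attributes a candidate class must expose to be registered as a tool.
REQUIRED_ATTRS = ("name", "description", "input_schema", "execute")

def is_valid_tool(cls):
    """A class qualifies if it exposes the whole BaseTool interface."""
    return (all(hasattr(cls, attr) for attr in REQUIRED_ATTRS)
            and callable(getattr(cls, "execute", None)))
```

Classes that fail this check would simply be skipped during discovery rather than crashing the registry.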
```text
User Input → Claude API → Thinking Stream → Tool Detection → Tool Execution
    ↑                                                             ↓
Response ← Thinking Integration ← Tool Result Injection ← Tool Output
```
Local tools (in the `/tools` directory):

**Web & Data Tools**

**File & Development Tools**

**Development & Package Management**
Configure in `mcp_config.json`:
While chatting with Claude, you can use these slash commands:

- `/refresh` or `/reload` - Refresh tool discovery (picks up new tools)
- `/tools` - Browse all available tools by category
- `/status` - Show comprehensive system status
- `/clear` - Clear screen while preserving status bar
- `/config` - Show current configuration
- `/config model <model_name>` - Switch between Claude models (sonnet/opus)
- `/config thinking <1024-16000>` - Adjust thinking token budget
- `/help` - Show help information
- `/exit` or `/quit` - End the conversation

Legacy support: all commands work without the `/` prefix for backward compatibility.
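A minimal sketch of how such a dispatcher could parse input, including the legacy no-slash form. `parse_command`, `KNOWN_COMMANDS`, and the alias table are assumptions, not the actual implementation:

```python
KNOWN_COMMANDS = {"refresh", "reload", "tools", "status", "clear",
                  "config", "help", "exit", "quit"}
ALIASES = {"reload": "refresh", "quit": "exit"}  # collapse synonyms

def parse_command(line):
    """Parse a slash command; the '/' prefix is optional (legacy support).

    Returns (command, args), or None when the line is ordinary chat input.
    """
    text = line.strip()
    if text.startswith("/"):
        text = text[1:]
    parts = text.split()
    if not parts or parts[0].lower() not in KNOWN_COMMANDS:
        return None  # treat as a normal message to Claude
    cmd = parts[0].lower()
    return ALIASES.get(cmd, cmd), parts[1:]
```

Anything that does not parse as a command falls through to Claude as ordinary conversation.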
Create a `.env` file:

```bash
ANTHROPIC_API_KEY=your_api_key_here
```
Edit `mcp_config.json`:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "./database.db"],
      "enabled": true
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"],
      "enabled": false
    }
  }
}
```
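Loading this file and honoring the `enabled` flags could look like the following sketch (`enabled_servers` is a hypothetical helper, not ThinkChain's actual loader):

```python
import json

def enabled_servers(config_text):
    """Return only the MCP servers marked enabled in mcp_config.json."""
    cfg = json.loads(config_text)
    return {name: spec
            for name, spec in cfg.get("mcpServers", {}).items()
            if spec.get("enabled", True)}  # default to enabled if the flag is absent
```

With the example config above, only the `sqlite` server would be connected; `puppeteer` stays configured but dormant until its flag is flipped.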
The system supports both Claude models with configurable settings:

**Available models:**

- `claude-sonnet-4-20250514` (default) - Fast and efficient
- `claude-opus-4-20250514` - Most capable, slower

**Configurable settings:**

- Beta features: `interleaved-thinking-2025-05-14`, `fine-grained-tool-streaming-2025-05-14`

**Runtime configuration:**
```bash
# Change model during conversation
/config model claude-opus-4-20250514

# Increase thinking budget for complex problems
/config thinking 8192
```
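Enforcing the documented 1024-16000 budget range could be as simple as this hypothetical helper:

```python
def clamp_thinking_budget(value, lo=1024, hi=16000):
    """Clamp a requested /config thinking value into the supported range."""
    return max(lo, min(hi, int(value)))
```

Out-of-range requests are silently pulled back to the nearest bound rather than rejected.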
Creating a new tool is straightforward - just follow these steps:
Create a new Python file in the `/tools/` directory (e.g., `/tools/mytool.py`):
```python
from tools.base import BaseTool

class MyTool(BaseTool):
    name = "mytool"
    description = """
    A detailed description of what your tool does.

    Use this tool when users ask about:
    - Specific use case 1
    - Specific use case 2
    - "Keywords that should trigger this tool"
    """

    input_schema = {
        "type": "object",
        "properties": {
            "input_param": {
                "type": "string",
                "description": "Description of this parameter"
            },
            "optional_param": {
                "type": "integer",
                "description": "Optional parameter with default",
                "default": 10
            }
        },
        "required": ["input_param"]
    }

    def execute(self, **kwargs) -> str:
        input_param = kwargs.get("input_param")
        optional_param = kwargs.get("optional_param", 10)

        # Your tool logic here
        result = f"Processed: {input_param} with {optional_param}"
        return result
```
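Before `execute()` runs, the `required` list in `input_schema` can be checked with a small guard. `validate_input` is a hypothetical helper for illustration, not part of the project:

```python
def validate_input(input_schema, kwargs):
    """Raise early if required parameters are missing from a tool call."""
    missing = [key for key in input_schema.get("required", []) if key not in kwargs]
    if missing:
        raise ValueError(f"Missing required parameter(s): {', '.join(missing)}")

# The schema from MyTool above, abbreviated:
schema = {
    "type": "object",
    "properties": {"input_param": {"type": "string"}},
    "required": ["input_param"],
}
```

Failing fast here turns a confusing mid-tool crash into a clear error that can be streamed back to Claude.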
Requirements:

- The class name should match the file name (e.g., `MyTool` for `mytool.py`)
- Import `BaseTool` from `tools.base`
- Define `name`, `description`, `input_schema`, and an `execute()` method
- The `execute()` method must return a string result
- Use the `/refresh` command to reload tools without restarting
- Use the `/tools` command to see your new tool listed

MCP allows integration with external servers that provide additional tools:
Most MCP servers can be installed via `uvx` or `npx`:
```bash
# Install SQLite MCP server
uv tool install mcp-server-sqlite

# Install Puppeteer MCP server (requires Node.js)
npm install -g @modelcontextprotocol/server-puppeteer

# Install Brave Search MCP server
npm install -g @modelcontextprotocol/server-brave-search
```
Edit `mcp_config.json` to add your server:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "uvx",
      "args": ["my-mcp-server", "--custom-arg", "value"],
      "description": "Description of what this server provides",
      "enabled": true,
      "env": {
        "API_KEY": "your_api_key_if_needed"
      }
    }
  }
}
```
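The per-server `env` block would typically be overlaid on the parent process environment when the server is spawned. A sketch, with `server_env` as a hypothetical helper:

```python
import os

def server_env(spec):
    """Overlay a server's `env` block from mcp_config.json on the parent environment."""
    env = dict(os.environ)          # inherit PATH, HOME, etc.
    env.update(spec.get("env", {})) # server-specific secrets win on conflict
    return env
```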
Popular MCP servers you can integrate:

**Data & Storage:**

- `mcp-server-sqlite` - Database operations
- `mcp-server-postgres` - PostgreSQL integration
- `mcp-server-redis` - Redis cache operations

**Web & Automation:**

- `@modelcontextprotocol/server-puppeteer` - Browser automation
- `@modelcontextprotocol/server-brave-search` - Web search
- `@modelcontextprotocol/server-filesystem` - File operations

**APIs & Services:**

- `@modelcontextprotocol/server-github` - GitHub integration
- `@modelcontextprotocol/server-slack` - Slack integration
- `mcp-server-aws` - AWS operations

After adding a server, test it:
```bash
# Test MCP functionality
python test_mcp.py

# Start ThinkChain and check tools
python thinkchain.py
/tools  # Should show both local and MCP tools
```
```bash
# Create tool
vim tools/newtool.py

# Test tool
python thinkchain.py
/refresh                 # Reload tools
"Use my new tool for X"  # Test with Claude

# Iterate and improve
vim tools/newtool.py
/refresh                 # Reload again
```
Tip: use `print()` statements in your `execute()` method - their output shows in the console.

```bash
# Traditional Python execution
python run.py            # Smart launcher
python thinkchain.py     # Full-featured UI
python thinkchain_cli.py # Minimal dependencies

# Using uv run (handles dependencies automatically)
uv run run.py            # Smart launcher
uv run thinkchain.py     # Full-featured UI
uv run thinkchain_cli.py # Minimal dependencies
```
- `anthropic>=0.25.0` - Claude API client
- `sseclient-py` - Server-sent events handling
- `pydantic` - Data validation and schemas
- `python-dotenv` - Environment variable management
- `rich` - Terminal formatting and colors
- `prompt_toolkit` - Interactive command-line features
- `mcp` - Model Context Protocol client
- MCP servers (installed via `uvx` or `npx`)

Some tools require additional packages that are installed automatically:
**Web tools** (weathertool, duckduckgotool, webscrapertool):

- `requests` - HTTP requests
- `beautifulsoup4` - HTML parsing

Note: missing dependencies are handled gracefully - tools that can't import are skipped during discovery with informative error messages.
All main scripts include inline dependency declarations that make them compatible with `uv run`:
```python
#!/usr/bin/env python3
# /// script
# dependencies = [
#     "anthropic>=0.25.0",
#     "sseclient-py",
#     "pydantic",
#     "python-dotenv",
#     "rich>=13.0.0",
#     "requests",
#     "beautifulsoup4",
#     "mcp",
#     "httpx",
# ]
# ///
```
Benefits of `uv run`:

```bash
# All of these work immediately after cloning:
uv run thinkchain.py     # Enhanced UI with all features
uv run thinkchain_cli.py # Minimal CLI version
uv run run.py            # Smart launcher
uv run test_mcp.py       # Test MCP integration
```
This project is designed to be forked and extended! Here are some ideas:
The process is straightforward with `uv run`:
```bash
# Fork and clone
git clone https://github.com/yourusername/your-thinkchain-fork.git
cd your-thinkchain-fork

# Set up API key
echo "ANTHROPIC_API_KEY=your_key" > .env

# Create your first tool
vim tools/yourtool.py

# Test immediately with uv run (no setup needed!)
uv run thinkchain.py
/refresh                 # Loads your new tool
"Use my new tool for X"  # Test with Claude
```
MIT License - see the LICENSE file for details.
Ready to think differently about AI tool integration? Fork ThinkChain and start building!
{ "mcpServers": { "thinkchain": { "command": "uvx", "args": [ "mcp-server-sqlite", "--db-path", "./database.db" ] } } }
Related projects feature coming soon
Will recommend related projects based on sub-categories