MCP Titan is a memory engine that combines real-time learning, persistent three-tier context awareness, and plug-and-play LLM integration, so your AI's knowledge keeps building up across interactions.
You are connected to the @henryhawke/mcp-titan MCP server. Use the tools exactly as documented in docs/llm-system-prompt.md. No human intervention is required except for adding the mcp-titan llm-system-prompt rule to the client.
- Always use the MCP tools for all memory, training, and state operations.
- Always initialize the model with `init_model` before using any other tool.
- Use `help` to discover available tools and their parameter schemas.
- Use `save_checkpoint` and `load_checkpoint` to persist and restore memory state.
- Use `reset_gradients` if you encounter training instability or errors.
- Use `prune_memory` when memory capacity drops below 30%.
- Always check tool responses for errors (`isError: true` or `type: "error"`) and handle them as documented.
- Follow all best practices and error handling as described in docs/llm-system-prompt.md.
- Do not use any implementation details or code not exposed by the server.
- Reference docs/llm-system-prompt.md for the latest schemas and usage examples.
This prompt is copy-pastable and should be used as the system prompt for any LLM (Cursor, Claude, or other MCP-compliant clients) to ensure correct and robust operation with MCP Titan.
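For reference, the workflow those rules describe looks roughly like this from a tool-calling client. This is a minimal sketch: the `callTool` helper mirrors the usage example later in this README, and the `capacity` field on the memory state is a hypothetical name; consult docs/llm-system-prompt.md for the authoritative schemas.

```typescript
// Minimal sketch of the rules above; `callTool` is assumed to be the client's MCP tool helper.
async function bootstrapTitanMemory(
  callTool: (name: string, args: object) => Promise<any>
) {
  // 1. Always initialize the model before any other tool call.
  const init = await callTool("init_model", { inputDim: 768, memorySlots: 5000 });
  if (init.isError) {
    throw new Error(`init_model failed: ${JSON.stringify(init)}`);
  }

  // 2. Discover available tools and their parameter schemas.
  const toolHelp = await callTool("help", { showExamples: true, verbose: true });

  // 3. Check memory statistics and prune when capacity runs low.
  const state = await callTool("get_memory_state", {});
  // `capacity` is a hypothetical field name -- check the real response shape.
  if (typeof state.capacity === "number" && state.capacity < 0.3) {
    await callTool("prune_memory", { threshold: 0.5 });
  }

  return { toolHelp, state };
}
```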
git clone https://github.com/henryhawke/mcp-titan.git
cd mcp-titan
npm install
npm run build
npm start
The server will start and listen for MCP tool requests. By default, it runs on port 8080 (or as configured in your environment).
The server is then reachable at http://localhost:8080 (or your configured host/port). You can use the provided tool APIs (see below) or connect via Cursor/Claude to verify memory operations.
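If you prefer to verify the server programmatically, the sketch below uses the MCP TypeScript SDK client. It assumes a stdio entry point at `dist/index.js` and the `@modelcontextprotocol/sdk` client API; neither is confirmed by this README, so adjust the command, path, and transport to match your setup.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumption: the built server can be launched over stdio via `node dist/index.js`.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
});

const client = new Client({ name: "titan-smoke-test", version: "0.0.1" }, { capabilities: {} });
await client.connect(transport);

// List the exposed tools, then initialize the model as a basic smoke test.
const tools = await client.listTools();
console.log(tools.tools.map((t) => t.name));

const init = await client.callTool({ name: "init_model", arguments: { inputDim: 768 } });
console.log(init);
```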
A neural memory system for LLMs that can learn and predict sequences while maintaining state through a memory vector. This MCP (Model Context Protocol) server provides tools for Claude 3.7 Sonnet and other LLMs to maintain memory state across interactions.
The Titan Memory MCP server provides the following tools:
`help`

Get help about available tools.

Parameters:

- `tool` (optional): Specific tool name to get help for
- `category` (optional): Category of tools to explore
- `showExamples` (optional): Include usage examples
- `verbose` (optional): Include detailed descriptions

`init_model`

Initialize the Titan Memory model with custom configuration.

Parameters:

- `inputDim`: Input dimension size (default: 768)
- `hiddenDim`: Hidden dimension size (default: 512)
- `memoryDim`: Memory dimension size (default: 1024)
- `transformerLayers`: Number of transformer layers (default: 6)
- `numHeads`: Number of attention heads (default: 8)
- `ffDimension`: Feed-forward dimension (default: 2048)
- `dropoutRate`: Dropout rate (default: 0.1)
- `maxSequenceLength`: Maximum sequence length (default: 512)
- `memorySlots`: Number of memory slots (default: 5000)
- `similarityThreshold`: Similarity threshold (default: 0.65)
- `surpriseDecay`: Surprise decay rate (default: 0.9)
- `pruningInterval`: Pruning interval (default: 1000)
- `gradientClip`: Gradient clipping value (default: 1.0)

`forward_pass`

Perform a forward pass through the model to get predictions.

Parameters:

- `x`: Input vector or text
- `memoryState` (optional): Memory state to use

`train_step`

Execute a training step to update the model.

Parameters:

- `x_t`: Current input vector or text
- `x_next`: Next input vector or text

`get_memory_state`

Get the current memory state and statistics.

Parameters:

- `type` (optional): Optional memory type filter

`manifold_step`

Update memory along a manifold direction.

Parameters:

- `base`: Base memory state
- `velocity`: Update direction

`prune_memory`

Remove less relevant memories to free up space.

Parameters:

- `threshold`: Pruning threshold (0-1)

`save_checkpoint`

Save memory state to a file.

Parameters:

- `path`: Checkpoint file path

`load_checkpoint`

Load memory state from a file.

Parameters:

- `path`: Checkpoint file path

`reset_gradients`

Reset accumulated gradients to recover from training issues.

Parameters: None
The Titan Memory MCP server is designed to work seamlessly with Claude 3.7 Sonnet in Cursor. Here's an example of how to use it:
// Initialize the model
const result = await callTool("init_model", {
inputDim: 768,
memorySlots: 10000,
transformerLayers: 8,
});
// Perform a forward pass
const { predicted, memoryUpdate } = await callTool("forward_pass", {
x: "const x = 5;", // or vector: [0.1, 0.2, ...]
memoryState: currentMemory,
});
// Train the model
const result = await callTool("train_step", {
x_t: "function hello() {",
x_next: " console.log('world');",
});
// Get memory state
const state = await callTool("get_memory_state", {});
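The same pattern extends to checkpointing and maintenance. The paths below are placeholders and the response handling is omitted for brevity; the tool names and parameters match the reference above.

```typescript
// Persist the current memory state to a checkpoint file (placeholder path)
await callTool("save_checkpoint", { path: "./checkpoints/memory.json" });

// ...and restore it later
await callTool("load_checkpoint", { path: "./checkpoints/memory.json" });

// Drop less relevant memories when capacity runs low (threshold is 0-1)
await callTool("prune_memory", { threshold: 0.5 });

// Recover from training instability
await callTool("reset_gradients", {});
```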
The Titan Memory MCP server includes sophisticated memory management to prevent memory leaks and ensure efficient tensor operations.
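By way of illustration, tensor-level memory management in TensorFlow.js-style code typically centers on `tf.tidy` and explicit disposal. The sketch below shows that general pattern only; it is not taken from this server's source.

```typescript
import * as tf from "@tensorflow/tfjs";

// Illustrative pattern: tf.tidy disposes every intermediate tensor created inside
// the callback, keeping only the returned tensor alive.
function blendMemory(memory: tf.Tensor, update: tf.Tensor, decay: number): tf.Tensor {
  return tf.tidy(() => memory.mul(decay).add(update.mul(1 - decay)));
}

// The caller disposes the previous state once the new one replaces it, e.g.:
// oldMemory.dispose();
```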
The Titan Memory MCP server is built with a modular architecture.
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.