An open-source, Claude Code-like tool with RAG + Graph RAG + MCP integration, supporting most LLMs (incomplete but functional and usable).

Autonomous LLM agent: ChromaDB semantic RAG + Neo4j knowledge-graph sync + MCP tool-call gateway.

What if AI could not only access memories, but consciously choose what to remember, with full MCP tool access?

An AI conversation platform implementing a dual-layer memory architecture inspired by human cognition. It combines automatic background memory with conscious, deliberate memory operations controlled by the AI, and grants tool-access powers similar to Claude Desktop.
Purpose-driven autonomous execution replaces simple query generation with multi-step workflows:

- Automatic Memory (RAG): non-volitional background memory using ChromaDB vectors and Google text-embedding-004
- Conscious Memory: volitional operations via MCP tools (save, search, update, delete) with tags and importance scoring
- Knowledge Graph: Neo4j-powered relationship mapping with automatic synchronization and retry mechanisms

Memory operations are exposed as Model Context Protocol tools for natural conversation flow, with clean separation between UI, memory, and AI operations.
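To make the MCP exposure concrete, here is a sketch of what a tool descriptor for the "save memory" operation could look like. The field names (`name`, `description`, `inputSchema`) follow the standard MCP tool shape, but the tool name and parameters shown are illustrative assumptions, not the project's actual definitions.

```typescript
// Hypothetical MCP tool descriptor for a conscious-memory save operation.
// The schema mirrors the tags + 1-10 importance model described above.
const saveMemoryTool = {
  name: "save_memory", // assumed name, for illustration only
  description: "Persist a conscious memory with tags and an importance score",
  inputSchema: {
    type: "object",
    properties: {
      content: { type: "string" },
      tags: { type: "array", items: { type: "string" } },
      importance: { type: "number", minimum: 1, maximum: 10 },
    },
    required: ["content"],
  },
};
```

An MCP server would advertise a list of such descriptors in response to a tools/list request, letting the model call `save_memory` mid-conversation.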
```bash
git clone https://github.com/esinecan/skynet-agent.git
cd skynet-agent
npm install
cp .env.example .env.local
# Edit .env.local with your API keys
docker-compose up -d   # ChromaDB (8000) + Neo4j (7474, 7687)
npm run dev            # Or npm run dev:next if Neo4j gives you trouble
```
Access:

- App: http://localhost:3000
- Conscious memory UI: http://localhost:3000/conscious-memory
- Neo4j browser: http://localhost:7474 (neo4j/password123)

Provider | Best For | Model
---|---|---
Google | Multimodal & speed | gemini-2.5-flash-preview-05-20
DeepSeek | Cost-effective | deepseek-chat
OpenAI | Ecosystem | gpt-4o-mini
Anthropic | Reasoning | claude-3-5-haiku-20241022
Groq | Ultra-fast | llama-3.3-70b-versatile
Mistral | Natural language | mistral-large-latest
Ollama | Privacy | llama3.2:latest
```bash
# LLM (pick one)
GOOGLE_API_KEY=your_key
DEEPSEEK_API_KEY=your_key

# Main LLM configuration
LLM_PROVIDER=google
LLM_MODEL=gemini-2.5-flash-preview-05-20

# Motive Force (autopilot) LLM configuration (optional - defaults to main LLM)
LLM_PROVIDER_MOTIVE_FORCE=deepseek
LLM_MODEL_MOTIVE_FORCE=deepseek-chat

# Services
CHROMA_URL=http://localhost:8000
NEO4J_URI=bolt://localhost:7687
NEO4J_PASSWORD=password123

# Autopilot
MOTIVE_FORCE_ENABLED=false
MOTIVE_FORCE_MAX_CONSECUTIVE_TURNS=10
MOTIVE_FORCE_TEMPERATURE=0.8
```
Enable autopilot via the UI toggle; your next message becomes the objective, for example:

> Using timestamps and normal querying, organize today's memories into 5-10 groups.
> Delete redundant items, consolidate similar ones, and add insights.

Check in with autopilot periodically; daily maintenance cultivates a curated memory over time. Configure it via the gear icon: turn delays, limits, memory integration, and aggressiveness modes.
```bash
# Development
npm run dev           # Full stack + KG sync
npm run dev:debug     # With Node debugging
npm run dev:next      # Frontend only
npm run dev:kg        # KG sync only

# Knowledge Graph
npm run kg:sync       # One-time sync
npm run kg:sync:full  # Complete resync
npm run kg:sync:queue # Process retry queue

# Testing
npm run test          # All tests
npm run test:rag      # RAG system
npm run test:neo4j    # Neo4j integration
```
```
skynet-agent/
├── src/
│   ├── app/                      # Next.js routes
│   ├── components/               # React components
│   ├── lib/                      # Core libraries
│   │   ├── motive-force-graph.ts # LangGraph workflow
│   │   ├── conscious-memory.ts   # Volitional memory
│   │   ├── rag.ts                # Automatic memory
│   │   └── knowledge-graph-*.ts  # Neo4j integration
│   └── types/                    # TypeScript definitions
├── docker-compose.yml            # Services setup
└── motive-force-prompt.md        # Autopilot personality
```
```typescript
interface Memory {
  id: string;
  text: string;
  embedding: number[];  // Google text-embedding-004
  metadata: {
    sender: 'user' | 'assistant';
    timestamp: string;
    summary?: string;   // Auto-summarized if over limit
  };
}
```
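Automatic retrieval ranks stored embeddings against the query embedding. ChromaDB performs this search internally; the sketch below is only illustrative of the underlying cosine-similarity ranking, and `topK` is a hypothetical helper, not a function from this codebase.

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k memories most similar to the query embedding.
function topK(
  query: number[],
  memories: { id: string; embedding: number[] }[],
  k: number
) {
  return [...memories]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```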
```typescript
interface ConsciousMemory {
  id: string;
  content: string;
  tags: string[];
  importance: number;  // 1-10
  source: 'explicit' | 'suggested' | 'derived';
  metadata: {
    accessCount: number;
    lastAccessed: string;
  };
}
```
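One way search results over conscious memories might be ordered is to weigh the 1-10 importance score against recency of access. The weighting below is an assumption for illustration, not the project's actual formula.

```typescript
interface ScoredMemory {
  id: string;
  importance: number;   // 1-10, as in ConsciousMemory
  lastAccessed: string; // ISO timestamp, as in ConsciousMemory.metadata
}

// Rank memories by normalized importance minus a small recency penalty.
// The 0.01/day penalty is an arbitrary illustrative choice.
function rankMemories(memories: ScoredMemory[], now: Date = new Date()): ScoredMemory[] {
  const score = (m: ScoredMemory) => {
    const ageDays =
      (now.getTime() - new Date(m.lastAccessed).getTime()) / 86_400_000;
    return m.importance / 10 - 0.01 * ageDays;
  };
  return [...memories].sort((a, b) => score(b) - score(a));
}
```

With this weighting, a recently used medium-importance memory can outrank a high-importance memory that has not been touched for months.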
```typescript
interface MotiveForceGraphState {
  messages: BaseMessage[];
  currentPurpose: string;
  subgoals: SubGoal[];
  executionPlan: ExecutionStep[];
  toolResults: ToolResult[];
  reflections: Reflection[];
  overallProgress: number;
  blockers: string[];
  needsUserInput: boolean;
}
```
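`overallProgress` could plausibly be derived from subgoal completion. The README does not show `SubGoal`'s fields, so the `done` flag below is an assumption used only to sketch the idea.

```typescript
// Assumed minimal SubGoal shape; the real type likely carries more fields.
interface SubGoal {
  description: string;
  done: boolean;
}

// Fraction of subgoals completed, in [0, 1]; 0 when there are no subgoals yet.
function computeProgress(subgoals: SubGoal[]): number {
  if (subgoals.length === 0) return 0;
  return subgoals.filter((g) => g.done).length / subgoals.length;
}
```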
```
POST /api/conscious-memory
{
  "action": "save|search|update|delete|stats|tags",
  "content": "string",
  "tags": ["array"],
  "importance": 7
}
```
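A client call against this endpoint might look like the following. `buildSaveRequest` is a hypothetical helper that assembles the JSON body shown above; only the endpoint path and field names come from this README.

```typescript
// Assemble a "save" request body for POST /api/conscious-memory.
function buildSaveRequest(content: string, tags: string[], importance: number) {
  return {
    action: "save" as const,
    content,
    tags,
    importance, // 1-10 importance score
  };
}

// Usage (assumes the dev server is running on port 3000):
// await fetch("http://localhost:3000/api/conscious-memory", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(
//     buildSaveRequest("User prefers dark mode", ["preferences"], 7)),
// });
```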
```
POST /api/motive-force
{
  "action": "generate|generateStreaming|saveConfig|getState",
  "sessionId": "string",
  "data": {}
}
```
- "Embeddings service unavailable": falls back to hash-based embeddings; check your Google API key.
- "ChromaDB connection failed": ensure `docker-compose up -d` has run and port 8000 is available.
- "Neo4j sync errors": check credentials and run `npm run kg:sync:queue` to process retries.
- "Actually Looks Very Ugly": I suck at UI design.
Inspired by cognitive science:
Technical innovations:
Fork, improve, and open a PR. Areas: memory algorithms, UI/UX, MCP tools, autopilot intelligence, testing, performance.
MIT - Lok Tar Ogar!
Thanks to the ChromaDB, Google AI, Anthropic MCP, Next.js, and Neo4j teams, the open-source MCP server authors, and the Ollama and Vercel AI SDK projects.