# llmbasedos

Minimal Linux OS with a Model Context Protocol (MCP) gateway to expose local capabilities to LLMs.

llmbasedos is not just a framework or a set of plugins. It is a cognitive operating system designed to transform your computer from a passive executor into an autonomous partner: one capable of perceiving, reasoning, and acting across both local and cloud contexts.
It does this by exposing all system capabilities (files, mail, APIs, agents) to any intelligent model, LLM or not, via the Model Context Protocol (MCP): a simple, powerful JSON-RPC layer running over UNIX sockets and WebSockets.
The vision is to make personal agentivity real: empowering AI agents to perform meaningful tasks on your behalf with minimal plumbing, friction, or boilerplate.
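Since MCP is plain JSON-RPC, a request is easy to frame by hand. Below is a minimal sketch; the newline-delimited framing is an assumption for illustration, not the project's documented wire format:

```python
import json

def make_mcp_request(method, params, req_id=1):
    """Frame a JSON-RPC 2.0 request as one line of JSON, ready to
    send over the gateway's UNIX socket or WebSocket."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }) + "\n"

# Example: read a file through the fs server.
frame = make_mcp_request("mcp.fs.read", ["/outreach/contact_history.json"])
```

Any client that can open the gateway's socket and write one of these frames can participate: no SDK required.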
`mcp.llm.chat` requests are routed via your preferred backend.

> "The true power of AI is not in the model, but in its ability to act contextually."
Where most projects focus on "the agent," llmbasedos focuses on the substrate: a runtime and interface that lets agents, whether LLM-driven or human-written, perform intelligent tasks, access context, and automate real workflows.
Just like Unix abstracted away hardware with file descriptors, llmbasedos abstracts cognitive capabilities with the MCP.
Under the hood: `supervisord` managing microservices, and `luca-shell`, a REPL for exploring and scripting against your MCP system.

Old approach: YAML workflows (rigid, hard to debug, logic hell).
New approach: Python scripts using `mcp_call()` for everything.
Example:

```python
import json

# Load the existing contact history from the virtual filesystem.
history = json.loads(
    mcp_call("mcp.fs.read", ["/outreach/contact_history.json"])
    .get("result", {}).get("content", "[]")
)

# Ask the LLM for prospects not already in the history.
prompt = f"Find 5 new agencies not in: {json.dumps(history)}"
llm_response = mcp_call(
    "mcp.llm.chat",
    [[{"role": "user", "content": prompt}], {"model": "gemini-1.5-pro"}],
)
new_prospects = json.loads(
    llm_response.get("result", {}).get("choices", [{}])[0]
    .get("message", {}).get("content", "[]")
)

# Persist the merged history back through the fs server.
if new_prospects:
    updated = history + new_prospects
    mcp_call("mcp.fs.write", ["/outreach/contact_history.json", json.dumps(updated, indent=2), "text"])
```
That's it. You just built an LLM-powered outreach agent with 3 calls and zero boilerplate.
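For illustration, `mcp_call()` itself can be as small as one request/response round-trip against the gateway. The sketch below assumes a UNIX-socket transport with newline-delimited JSON-RPC; the socket path and framing are assumptions, and the shipped client may differ:

```python
import json
import socket
import uuid

GATEWAY_SOCK = "/run/mcp/gateway.sock"  # assumed path; check your install

def mcp_call(method, params, sock_path=GATEWAY_SOCK):
    """Send one JSON-RPC 2.0 request to the MCP gateway and return the
    parsed response dict ({"result": ...} or {"error": ...})."""
    request = json.dumps({
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": method,
        "params": params,
    }) + "\n"
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(request.encode())
        buf = b""
        # Read until the server terminates the response with a newline.
        while not buf.endswith(b"\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
    return json.loads(buf)
```

The point is the shape, not the specifics: one function, one socket, and every capability of the system becomes a Python expression.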
Architecture:

- `gateway/` (FastAPI): the MCP gateway, including licensing (`lic.key`, `licence_tiers.yaml`).
- `servers/fs/`: virtualized file system + FAISS semantic search.
- `servers/mail/`: IMAP email parsing + draft handling.
- `servers/sync/`: rclone for file sync ops.
- `servers/agent/`: (legacy) YAML workflow engine (to be deprecated).
Getting started: place your `.env`, `lic.key`, `mail_accounts.yaml`, and user files under `llmbasedos_src/`, then:

```shell
docker compose build
docker compose up
```

Drop into `luca-shell` and start issuing `mcp.*` calls.

Next milestone: `orchestrator_server`. It listens to natural-language intentions ("Find 5 leads & draft intro emails"), auto-generates Python scripts to execute the plan, then optionally runs them. The OS becomes a compiler for intention.
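To make that milestone concrete, here is a hedged sketch of what such a loop could look like. `compile_intention`, its prompt, and the dependency-injected `mcp_call` are illustrative inventions, not the orchestrator's actual design:

```python
def compile_intention(intention, mcp_call, run=False):
    """Hypothetical orchestrator step: ask the LLM (via mcp.llm.chat)
    to turn a natural-language intention into a Python script built on
    mcp_call(method, params) primitives, then optionally execute it."""
    prompt = (
        "Write a Python script using mcp_call(method, params) that does: "
        f"{intention}\nReturn only code."
    )
    resp = mcp_call(
        "mcp.llm.chat",
        [[{"role": "user", "content": prompt}], {"model": "gemini-1.5-pro"}],
    )
    script = (
        resp.get("result", {}).get("choices", [{}])[0]
        .get("message", {}).get("content", "")
    )
    if run:
        # Trust boundary: review generated code before enabling execution.
        exec(script, {"mcp_call": mcp_call})
    return script
```

The interesting design question is the trust boundary: generated scripts are plain Python against the same `mcp_call()` surface, so they can be reviewed, diffed, and version-controlled before they are ever run.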
Security notes: user data stays inside its mount (`/mnt/user_data`); secrets are `.env`-only.

Stars, forks, PRs and radical experiments welcome.
llmbasedos is what happens when you stop asking "how can I call GPT" and start asking "what if GPT could call everything else?"