Digimon Engine — Multi-Agent, Multi-Player Framework for AI-Native Games and Agentic Metaverse
🐦 Twitter | 📢 Telegram | 🌐 Website | 📙 Documentation
English | 简体中文 | 繁體中文 | 한국어 | 日本語 | Deutsch | Français | Português | Italiano | Español | Русский | Türkçe | Polski
Digimon Engine is an open-source gaming platform, comparable to Unreal Engine but built for AI gaming. It supports social and financial AI agents, enabling immersive AI-native gameplay. We are preparing to onboard new games featuring AI-agent NPCs. Our aim is to create an AI agent framework for building a Westworld-like environment.
The engine integrates seamlessly with external clients, LLMs, and AI agents, combining architectures from the MCP protocol, the DAMN.FUN SDK, and Digimon Engine itself. This includes webhooks and new REST API endpoints for external game/agent creation, ownership, and wallet connectivity.
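As a rough illustration of what such an endpoint could look like, the sketch below registers an externally created agent and links it to an owner wallet. The route path, request fields, and `registerAgent` helper are hypothetical, not the engine's actual API.

```typescript
import express from "express";
import { randomUUID } from "crypto";

// Placeholder for the engine's internal registration logic (hypothetical).
async function registerAgent(input: { name: string; description?: string; owner: string }) {
  return { id: randomUUID(), ...input };
}

const app = express();
app.use(express.json());

// Hypothetical endpoint: an external client creates a new AI agent
// and links it to an owner wallet. Field names are illustrative only.
app.post("/api/agents", async (req, res) => {
  const { name, description, walletAddress } = req.body;
  if (!name || !walletAddress) {
    res.status(400).json({ error: "name and walletAddress are required" });
    return;
  }
  const agent = await registerAgent({ name, description, owner: walletAddress });
  res.status(201).json({ id: agent.id });
});

app.listen(3000);
```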
Agents: Each monster/agent has a unique identity and motivations, roaming the world, talking, and forming relationships. In the future, agents will reference prior interactions—extracted from a vector database (Pinecone) of memory embeddings—so every conversation and decision is informed by past encounters (persistent memory).
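A minimal sketch of how an agent with an identity, motivations, and persistent memory references could be modeled; the type names and fields are illustrative assumptions rather than the engine's actual schema.

```typescript
// Illustrative agent shape: identity, motivations, and pointers to stored memories.
interface AgentIdentity {
  id: string;
  name: string;
  description: string;    // personality / backstory shown to the LLM
  motivations: string[];  // e.g. ["find rare items", "befriend other monsters"]
}

interface MemoryRef {
  embeddingId: string;    // id of the vector stored in Pinecone
  summary: string;        // short natural-language summary of the encounter
  createdAt: number;      // unix timestamp, useful for recency weighting
}

interface Agent extends AgentIdentity {
  position: { x: number; y: number };
  memories: MemoryRef[];  // past conversations and decisions
}
```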
Game Engine: The orchestration system schedules agent activities, handles “Run Agent Batch” tasks, and manages collisions. Whenever two monsters’ paths are predicted to cross, the engine groups them and triggers a conversation sequence. After tasks wrap up, agents become available again for scheduling, ensuring continuous world activity without manual intervention.
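The sketch below illustrates the scheduling idea under simplifying assumptions (paths sampled at fixed time steps, a fixed collision radius); the function names and threshold are hypothetical, not the engine's actual implementation.

```typescript
interface PathPoint { x: number; y: number; t: number }

// Two agents "collide" if they are within `radius` of each other at the same time step.
function pathsCross(a: PathPoint[], b: PathPoint[], radius = 1.5): boolean {
  return a.some((pa) =>
    b.some((pb) => pa.t === pb.t && Math.hypot(pa.x - pb.x, pa.y - pb.y) <= radius)
  );
}

// Group agents whose paths are predicted to cross, then trigger a conversation.
function scheduleConversations(
  agents: { id: string; path: PathPoint[]; busy: boolean }[],
  startConversation: (a: string, b: string) => void
) {
  for (let i = 0; i < agents.length; i++) {
    for (let j = i + 1; j < agents.length; j++) {
      const a = agents[i], b = agents[j];
      if (!a.busy && !b.busy && pathsCross(a.path, b.path)) {
        a.busy = b.busy = true;          // reserve both agents for the conversation task
        startConversation(a.id, b.id);   // once it wraps up, the engine frees them again
      }
    }
  }
}
```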
Event Logs: An append-only record tracks everything—agents’ paths, conversation timestamps, and who spoke to whom. Before starting a new path, monsters consult their event logs to predict future collisions. If they haven’t chatted with an intersecting agent recently, they initiate dialogue. The event log also stores all conversation transcripts and coordinates for accurate context recall and memory embedding.
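A minimal sketch of an append-only event log with a recency check before initiating dialogue; the event shapes and the cooldown window are illustrative assumptions.

```typescript
type GameEvent =
  | { kind: "moved"; agentId: string; x: number; y: number; at: number }
  | { kind: "conversation"; participants: [string, string]; transcript: string; at: number };

class EventLog {
  private events: GameEvent[] = [];   // append-only: events are never mutated or removed

  append(event: GameEvent) {
    this.events.push(event);
  }

  // Has this pair of agents spoken within the last `windowMs` milliseconds?
  chattedRecently(a: string, b: string, windowMs = 10 * 60 * 1000, now = Date.now()): boolean {
    return this.events.some(
      (e) =>
        e.kind === "conversation" &&
        e.participants.includes(a) &&
        e.participants.includes(b) &&
        now - e.at < windowMs
    );
  }
}
```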
Memory & Vector Database: After conversations or reflective moments, agents summarize their experiences and store them as vector embeddings (mxbai-embed-large). These embeddings can be retrieved later and filtered for relevance, injecting past context directly into the prompt for the next conversation.
One of the core challenges in game engine design is keeping latency low while scaling to more players and agents. That’s why DAMN introduces a compressed state (HistoryObject) to efficiently track and replay movement. Each engine tick (~60/sec) logs numeric fields (like position); at the end of each step (1/sec), the engine stores a compressed “history buffer.” The client fetches both the current values and this replayable buffer, rendering smooth animations without any jumps. Impact: for players and agents, this design delivers fluid gameplay—no stutters or choppy animations. Behind the scenes, it’s a streamlined approach that keeps performance high, stays reliable, and scales seamlessly to more AI-driven characters.
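A simplified sketch of the idea: numeric fields are sampled every tick and packed into a compact, replayable buffer once per step, which the client interpolates for smooth rendering. The buffer layout here is an illustrative assumption, not the engine's exact format.

```typescript
// Simplified history buffer: sample a numeric field each tick (~60/sec),
// then pack all samples into one compact Float32Array at the end of a step (1/sec).
class HistoryObject {
  private samples: { t: number; value: number }[] = [];

  record(t: number, value: number) {
    this.samples.push({ t, value });
  }

  // Called once per step: emit a replayable buffer and reset for the next step.
  pack(): Float32Array {
    const buffer = new Float32Array(this.samples.length * 2);
    this.samples.forEach((s, i) => {
      buffer[i * 2] = s.t;
      buffer[i * 2 + 1] = s.value;
    });
    this.samples = [];
    return buffer;
  }

  // Client side: interpolate between samples for smooth rendering at time t.
  static sample(buffer: Float32Array, t: number): number {
    for (let i = 0; i + 3 < buffer.length; i += 2) {
      const [t0, v0, t1, v1] = [buffer[i], buffer[i + 1], buffer[i + 2], buffer[i + 3]];
      if (t >= t0 && t <= t1) return v0 + ((t - t0) / (t1 - t0)) * (v1 - v0);
    }
    return buffer.length ? buffer[buffer.length - 1] : 0;
  }
}
```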
Instead of relying on an existing game engine (e.g., Unity or Godot), DAMN uses a custom AI-native game engine built from scratch in TypeScript. AI agents and human players are treated identically—no second-class NPCs. Every tick, the engine updates the entire world in memory, giving AI the same power to move, interact, and engage as humans. This leads to more organic, dynamic worlds where AI isn’t just following scripts but genuinely participating in gameplay.
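A minimal sketch of such a unified tick loop, where human players and AI agents share one entity type and the same update path; the names and structure are illustrative.

```typescript
// Every entity, whether controlled by a human or an AI agent, shares one shape
// and goes through the same update path each tick.
interface Entity {
  id: string;
  kind: "human" | "agent";
  position: { x: number; y: number };
  nextInput(): { dx: number; dy: number };   // human: device input; agent: LLM-driven plan
}

const TICK_MS = 1000 / 60;

function tick(world: { entities: Entity[] }, dt: number) {
  for (const e of world.entities) {
    const input = e.nextInput();             // no special-casing of AI vs. human
    e.position.x += input.dx * dt;
    e.position.y += input.dy * dt;
  }
}

function run(world: { entities: Entity[] }) {
  setInterval(() => tick(world, TICK_MS / 1000), TICK_MS);
}
```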
Design Overview:
Further details can be found in the Architecture Overview.