Nexent is a platform for building agents with zero code — no orchestration, no complex drag-and-drop. Built on the MCP tool ecosystem, Nexent also provides powerful model integration, data processing, and knowledge-base management capabilities.
Nexent is an open-source agent platform that turns process-level natural language into complete multimodal agents — no diagrams, no wiring. Built on the MCP tool ecosystem, Nexent provides model integration, data processing, knowledge-base management, and zero-code agent development. Our goal is simple: to bring data, models, and tools together in one smart hub, making daily workflows smarter and more connected.
One prompt. Endless reach.
https://github.com/user-attachments/assets/0758629c-3477-4cd4-a737-0aab330d53a7
If you want to go fast, go alone; if you want to go far, go together.
We're still in our very first open-source phase and aiming for Nexent v1-stable in June 2025. Until then we'll keep shipping core features rapidly — and we'd love your help:
Rome wasn't built in a day.
Many of our key capabilities are still under active development, but if our vision speaks to you, jump in via the Contribution Guide and shape Nexent with us.
Early contributors won't go unnoticed: from special badges and swag to other tangible rewards, we're committed to thanking the pioneers who help bring Nexent to life.
Most of all, we need visibility. Star ⭐ and watch the repo, share it with friends, and help more developers discover Nexent — your click brings new hands to the project and keeps the momentum growing.
| Resource | Minimum |
|---|---|
| CPU | 2 cores |
| RAM | 6 GiB |
| Software | Docker & Docker Compose installed |
```bash
git clone https://github.com/ModelEngine-Group/nexent.git
cd nexent/docker
cp .env.example .env  # fill in only the necessary configs
bash deploy.sh
```
When the containers are running, open http://localhost:3000 in your browser and follow the setup wizard.
We recommend the following model providers:
| Model Type | Provider | Notes |
|---|---|---|
| LLM & VLLM | Silicon Flow | Free tier available |
| LLM & VLLM | Alibaba Bailian | Free tier available |
| Embedding | Jina | Free tier available |
| TTS & STT | Volcengine Voice | Free for personal use |
| Search | EXA | Free tier available |
You'll need to enter the following information on the model configuration page:
The following configurations need to be added to your `.env` file (we'll make these configurable through the frontend soon):
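As a sketch, the provider credentials might look like the lines below. The variable names here are placeholders — copy the real key names from `.env.example`:

```shell
# Hypothetical key names — check .env.example for the actual ones
JINA_API_KEY=your-jina-api-key    # embedding provider (see FAQ for setup)
EXA_API_KEY=your-exa-api-key      # web search provider
```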
ℹ️ Due to ongoing core feature development, we currently support only the Jina embedding model. Support for other embedding models will be added in future releases. For Jina API key setup, please refer to our FAQ.
Want to build from source or add new features? Check the Contribution Guide for step-by-step instructions.
Prefer to run Nexent from source code? Follow our Developer Guide for detailed setup instructions and customization options.
1. **Smart agent prompt generation**
   Turn plain language into runnable prompts. Nexent automatically chooses the right tools and plans the best action path for every request.
2. **Scalable data processing engine**
   Process 20+ data formats with fast OCR and table structure extraction, scaling smoothly from a single process to large-batch pipelines.
3. **Personal-grade knowledge base**
   Import files in real time, auto-summarise them, and let agents instantly access both personal and global knowledge, while knowing what each knowledge base can provide.
4. **Internet knowledge search**
   Connect to 5+ web search providers so agents can mix fresh internet facts with your private data.
5. **Knowledge-level traceability**
   Serve answers with precise citations from web and knowledge-base sources, making every fact verifiable.
6. **Multimodal understanding & dialogue**
   Speak, type, drop in files, or show images. Nexent understands voice, text, and pictures, and can even generate new images on demand.
7. **MCP tool ecosystem**
   Drop in or build Python plug-ins that follow the MCP spec; swap models, tools, and chains without touching core code.
🔄 Knowledge Base Refresh Delays
We are aware that the knowledge base refresh mechanism currently has some delays. We plan to refactor this part soon, but please note that this is only a task management logic issue - the actual data processing speed is not affected.
🤖 Limited Model Provider Support
We currently have limited support for different model providers, including voice and multimodal models. We will be rapidly updating this support in the coming weeks - stay tuned for updates!
📦 Large Docker Image Size
We are aware that our current Docker image is quite large (around 10GB) as it includes a powerful, extensible data processing engine, algorithms, and models. We will soon release a Lite version to reduce the image size and make deployment faster and lighter.
We welcome all kinds of contributions! Whether you're fixing bugs, adding features, or improving documentation, your help makes Nexent better for everyone.
Join our Discord community to chat with other developers and get help!
Nexent is licensed under the MIT License with additional conditions. Please read the LICENSE file for details.