A GUI Agent application based on UI-TARS (Vision-Language Model) that allows you to control your computer using natural language.
[2025-06-25] We released a technical preview of the CLI - introducing Agent TARS Beta, a multimodal AI agent that explores a way of working closer to how humans complete tasks, through rich multimodal capabilities (such as GUI Agent and vision) and seamless integration with various real-world tools.
UI-TARS Desktop is a GUI Agent application based on UI-TARS (Vision-Language Model) that allows you to control your computer using natural language.
📑 Paper | 🤗 Hugging Face Models | 🫨 Discord | 🤖 ModelScope
🖥️ Desktop Application | 👓 Midscene (use in browser)
| Instruction | Local Operator | Remote Operator |
| --- | --- | --- |
| Please help me open the autosave feature of VS Code and delay AutoSave operations for 500 milliseconds in the VS Code setting. | | |
| Could you help me check the latest open issue of the UI-TARS-Desktop project on GitHub? | | |
See Quick Start.
See Deployment.
See CONTRIBUTING.md.
See @ui-tars/sdk.
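As a rough orientation, the sketch below shows how an agent might be driven programmatically from Node.js. It is a minimal illustration only: the `GUIAgent` class, the `NutJSOperator` operator, and the model configuration fields are assumptions about the SDK's shape and may not match the current API; treat the @ui-tars/sdk documentation as authoritative.

```typescript
// Minimal sketch (assumed API): run a natural-language instruction with a UI-TARS agent.
// GUIAgent, NutJSOperator, and the config fields below are illustrative and should be
// verified against the @ui-tars/sdk documentation before use.
import { GUIAgent } from '@ui-tars/sdk';
import { NutJSOperator } from '@ui-tars/operator-nut-js';

async function main() {
  const agent = new GUIAgent({
    model: {
      baseURL: 'http://localhost:8000/v1', // assumed: endpoint serving a UI-TARS model
      apiKey: 'your-api-key',              // placeholder credential
      model: 'ui-tars',                    // placeholder model name
    },
    // Operator that executes the predicted mouse/keyboard actions on the local desktop.
    operator: new NutJSOperator(),
    // Callbacks for streaming intermediate data and surfacing errors.
    onData: ({ data }) => console.log(data),
    onError: ({ error }) => console.error(error),
  });

  // Natural-language instruction, mirroring the showcase table above.
  await agent.run(
    'Please help me open the autosave feature of VS Code and delay AutoSave operations for 500 milliseconds.',
  );
}

main().catch(console.error);
```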
UI-TARS Desktop is licensed under the Apache License 2.0.
If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:
@article{qin2025ui,
title={UI-TARS: Pioneering Automated GUI Interaction with Native Agents},
author={Qin, Yujia and Ye, Yining and Fang, Junjie and Wang, Haoming and Liang, Shihao and Tian, Shizuo and Zhang, Junda and Li, Jiahao and Li, Yunxin and Huang, Shijue and others},
journal={arXiv preprint arXiv:2501.12326},
year={2025}
}