Python CLI for AI Chat with MCP support
console-chat-gpt v6
Your Ultimate CLI Companion for Chatting with AI Models
Enjoy seamless interactions with OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception or Ollama-hosted models directly from your command line.
Elevate your chat experience with efficiency and ease.
DISCLAIMER: The intention and implementation of this code are entirely unconnected and unrelated to OpenAI, MistralAI, Anthropic, xAI, Google AI, DeepSeek, Alibaba, Inception, or any other related parties. There is no affiliation or relationship with OpenAI, MistralAI, Anthropic, xAI, Google, DeepSeek, Alibaba, Inception or their subsidiaries in any form.
- Add custom models by adding a `model_name` and `base_url` to the `config.toml` file.
- :new: MCP support: copy your `claude_desktop_config.json` to the root directory and rename it to `mcp_config.json` to start using MCP with any model!
- :star: `settings` menu and `config.toml` file for complete control over how the app works. Also supported in-app via the `settings` command.

Overall, this app focuses on providing a user-friendly and customizable experience, with features that enhance personalization, control, and convenience.
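As a sketch of what `mcp_config.json` might contain (the server name, command, and path below are placeholders; the structure follows the `claude_desktop_config.json` convention of an `mcpServers` map with a `command` and `args` per server):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```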
The script works fine on Linux and macOS terminals. On Windows, it is recommended to use WSL.
1. Clone the repository: `git clone https://github.com/amidabuddha/console-chat-gpt.git`
2. Go inside the folder: `cd console-chat-gpt`
3. Install the necessary dependencies: `python3 -m pip install -r requirements.txt`
4. Get your API key from OpenAI, MistralAI, Anthropic, xAI, Google AI Studio, DeepSeek, Alibaba, or Inception, depending on your selected LLM.
5. The `config.toml.sample` file will be automatically copied into `config.toml` upon first run, with a prompt to enter your API key(s). Feel free to change any of the other defaults that are not available in the in-app `settings` menu, as per your needs.
6. Run the executable: `python3 main.py`
Pro-tip: Create an alias for the executable to run from anywhere.
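One way to set that up, assuming a bash shell and that the repository was cloned into your home directory (adjust the alias name and path to taste):

```shell
# Append an alias to your shell profile; the path is an assumption,
# adjust it to wherever you cloned the repository.
echo 'alias ccg="python3 ~/console-chat-gpt/main.py"' >> ~/.bashrc

# Reload the profile so the alias is available in the current session.
source ~/.bashrc
```

After this, typing `ccg` from any directory launches the app.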
Use the `help` command within the chat to check the available options.

Enjoy!
config.toml
| [chat.defaults] | Main properties to generate a chat completion/response. |
|---|---|
| temperature | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused. May be set for each new chat session if `adjust_temperature` in `[chat.features]` is true. |
| system_role | A system (or developer) message inserted into the model's context. Should be one of those listed in the `[chat.roles]` section. May be set for each new chat session if `role_selector` in `[chat.features]` is true. |
| model | Model ID used to generate the chat completion/response, like `gpt-4o` or `o3`. Should be listed in the `[chat.models]` section, with relevant parameters. May be set for each new chat session if `model_selector` in `[chat.features]` is true. |
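A minimal sketch of these defaults in `config.toml` (values are illustrative only; the model entry's exact field set is an assumption based on the `model_name`/`base_url` feature, and every name must match your own `[chat.models]` and `[chat.roles]` entries):

```toml
[chat.defaults]
temperature = 1.0
system_role = "assistant"
model = "gpt-4o"

# Hypothetical custom model entry; field names beyond model_name and
# base_url may differ in your config.toml.sample.
[chat.models.gpt-4o]
model_name = "gpt-4o"
base_url = "https://api.openai.com/v1"
```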
| [chat.features] | Configurable options of the chat application. Some are accessible from within a chat session via the `settings` command. |
|---|---|
| model_selector | A selection list of the models available in the `[chat.models]` section of `config.toml`. When true, this list may be modified at the beginning of each new chat session. |
| adjust_temperature | Prompt to change the temperature for each chat session. When true, the temperature value may be modified at the beginning of each new chat session. |
| role_selector | A selection list of the roles available in the `[chat.roles]` section of `config.toml`. When true, this list may be modified at the beginning of each new chat session. |
| save_chat_on_exit | When true, automatically saves the chat session upon using the `exit` command in chat. |
| continue_chat | When true, offers a list of previously saved chat sessions to be continued in a new session. The list may be modified from within a chat session via the `chats` command. |
| | Application logging - not yet implemented. |
| disable_intro_help_message | All chat commands available in `help` are printed upon chat initialization. This is targeted at new users and may be disabled by setting to false. |
| assistant_mode | Enables the OpenAI Assistants API as an available selection upon chat initialization. |
| ai_managed | Enables AI Managed mode, allowing a model to automatically select the best model according to your prompt. Detailed settings below. |
| streaming | If set to true, the model response data will be streamed to the client. |
| mcp_client | Setting this to false will prevent the default initialization of MCP servers for each chat, if not needed. |
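Put together, the features section might look like this (key names mirror the table above, but the particular true/false values below are illustrative, not the shipped defaults):

```toml
# Illustrative values only - check config.toml.sample for the real defaults.
[chat.features]
model_selector = true
adjust_temperature = true
role_selector = true
save_chat_on_exit = true
continue_chat = false
disable_intro_help_message = false
assistant_mode = false
ai_managed = false
streaming = true
mcp_client = true
```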
| [chat.managed] | Settings dedicated to the AI Managed mode. Not available to be edited from within a chat session. |
|---|---|
| assistant | The preferred model that will evaluate your prompt and select the best available model, out of the four configured below, to handle it. Should be listed in the `[chat.models]` section, with relevant parameters. |
| assistant_role | Custom instruction to the evaluation model. Change this only if you know exactly what you are doing! |
| assistant_generalist | Your preferred general-purpose model, typically the one you use the most for any type of query. Should be listed in the `[chat.models]` section, with relevant parameters. |
| assistant_fast | Used when speed is preferred over accuracy. Should be listed in the `[chat.models]` section, with relevant parameters. |
| assistant_thinker | A reasoning model for complex tasks. Should be listed in the `[chat.models]` section, with relevant parameters. |
| assistant_coder | Your preferred model to handle coding and math questions. Should be listed in the `[chat.models]` section, with relevant parameters. |
| prompt | When AI Managed mode is used frequently, the Y/N prompt may be disabled by changing this to false. |
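A sketch of this section with hypothetical model choices (every model name below is a placeholder and must exist in your `[chat.models]` section; keep the shipped `assistant_role` unless you know exactly what you are doing):

```toml
# Placeholder model names - substitute entries from your own [chat.models].
[chat.managed]
assistant = "gpt-4o"
assistant_generalist = "gpt-4o"
assistant_fast = "gpt-4o-mini"
assistant_thinker = "o3"
assistant_coder = "gpt-4o"
prompt = true
```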
Prompt example:
Markdown visualization example:
Settings and help:
You can find more examples on our Examples page.
Contributions are welcome! If you find any bugs, have feature requests, or want to contribute improvements, please open an issue or submit a pull request.