chatgpt-copilot

by feiskyer/chatgpt-copilot

ChatGPT Copilot Extension for Visual Studio Code


A VS Code ChatGPT Copilot Extension

The Most Loved Open-Source ChatGPT Extension for VS Code

ChatGPT Copilot is a powerful and telemetry-free extension for Visual Studio Code, bringing the capabilities of ChatGPT directly into your coding environment.

Features

  • 🤖 Supports GPT-4, o1, Claude, Gemini, Ollama, GitHub and other OpenAI-compatible local models with your API key from OpenAI, Azure OpenAI Service, Google, Anthropic or other providers.
  • 💥 Model Context Protocol (MCP) to bring your own tools and DeepClaude (DeepSeek R1 + Claude) mode for best AI responses.
  • 📂 Chat with your Files: Add multiple files and images to your chat using @ for seamless collaboration.
  • 📃 Streaming Answers: Receive real-time responses to your prompts in the sidebar conversation window.
  • 📖 Prompt Manager: Chat with your own prompts (use # to search).
  • 🔥 Tool calls via prompt parsing for models that don't support native tool calling.
  • 📝 Code Assistance: Create files or fix your code with one click or keyboard shortcuts.
  • ➡️ Export Conversations: Export all your conversation history at once in Markdown format.
  • 📰 Custom Prompt Prefixes: Customize what you are asking ChatGPT with ad-hoc prompt prefixes.
  • 💻 Seamless Code Integration: Copy, insert, or create new files directly from ChatGPT's code suggestions.
  • ➕ Editable Prompts: Edit and resend previous prompts.
  • 🛡️ Telemetry Free: No usage data is collected.

Recent Release Highlights

  • v4.9: Added prompt-based tool calls for models that don't support native tool calling.
  • v4.8: New logo and new models.
  • v4.7: Added Model Context Protocol (MCP) integration.
  • v4.6: Added prompt manager, DeepClaude (DeepSeek + Claude) mode, GitHub Copilot provider and chat with files.

Installation

  • Install the extension from the Visual Studio Marketplace, or search for ChatGPT Copilot in the VS Code Extensions view and click Install.
  • Reload Visual Studio Code after installation.

Supported Models & Providers

AI Providers

The extension supports major AI providers with hundreds of models:

| Provider | Models | Special Features |
|---|---|---|
| OpenAI | GPT-4o, GPT-4, GPT-3.5-turbo, o1, o3, o4-mini | Reasoning models, function calling |
| Anthropic | Claude Sonnet 4, Claude 3.5 Sonnet, Claude Opus 4 | Advanced reasoning, large context |
| Google | Gemini 2.5 Pro, Gemini 2.0 Flash, Gemini Pro | Search grounding, multimodal |
| GitHub Copilot | GPT-4o, Claude Sonnet 4, o3-mini, Gemini 2.5 Pro | Built-in VS Code authentication |
| DeepSeek | DeepSeek R1, DeepSeek Reasoner | Advanced reasoning capabilities |
| Azure OpenAI | GPT-4o, GPT-4, o1 | Enterprise-grade security |
| Azure AI | Various non-OpenAI models | Microsoft's AI model hub |
| Ollama | Llama, Qwen, CodeLlama, Mistral | Local model execution |
| Groq | Llama, Mixtral, Gemma | Ultra-fast inference |
| Perplexity | Llama, Mistral models | Web-enhanced responses |
| xAI | Grok models | Real-time information |
| Mistral | Mistral Large, Codestral | Code-specialized models |
| Together | Various open-source models | Community models |
| OpenRouter | 200+ models | Access to multiple providers |

AI Services

Configure the extension by setting your API keys and preferences in the settings.

| Configuration | Description |
|---|---|
| API Key | Required; get from OpenAI, Azure OpenAI, Anthropic or other AI services |
| API Base URL | Optional; defaults to "https://api.openai.com/v1" |
| Model | Optional; defaults to "gpt-4o" |

Refer to the following sections for more details on configuring various AI services.
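In practice these three options map to the chatgpt.gpt3.* keys listed in the Configurations section below. A minimal settings.json sketch for the default OpenAI setup (the key value is a placeholder; VS Code's settings.json accepts JSONC comments):

```json
{
  // Placeholder -- substitute your real API key
  "chatgpt.gpt3.apiKey": "your-api-key",
  "chatgpt.gpt3.model": "gpt-4o",
  "chatgpt.gpt3.apiBaseUrl": "https://api.openai.com/v1"
}
```

The same keys can also be set through the Settings UI; the JSON form is convenient for syncing or sharing a workspace configuration.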

OpenAI

Special notes for ChatGPT users: the OpenAI API is billed separately from the ChatGPT app. You need to add credits to your OpenAI account for API usage. Once you have added credits, create a new API key and it should work.

| Configuration | Example |
|---|---|
| API Key | your-api-key |
| Model | gpt-4o |
| API Base URL | https://api.openai.com/v1 (Optional) |

Ollama

Pull your model first from the Ollama library and then set up the base URL and custom model.

| Configuration | Example |
|---|---|
| API Key | ollama (Optional) |
| Model | custom |
| Custom Model | qwen2.5 |
| API Base URL | http://localhost:11434/v1/ |

DeepSeek

Ollama provider:

| Configuration | Example |
|---|---|
| API Key | ollama (Optional) |
| Model | custom |
| Custom Model | deepseek-r1 |
| API Base URL | http://localhost:11434/v1/ |

DeepSeek provider:

| Configuration | Example |
|---|---|
| API Key | your-deepseek-key |
| Model | deepseek-reasoner |
| API Base URL | https://api.deepseek.com |

SiliconFlow (SiliconCloud) provider:

| Configuration | Example |
|---|---|
| API Key | your-siliconflow-key |
| Model | custom |
| Custom Model | deepseek-ai/DeepSeek-R1 |
| API Base URL | https://api.siliconflow.cn/v1 |

Azure AI Foundry provider:

| Configuration | Example |
|---|---|
| API Key | your-azure-ai-key |
| Model | DeepSeek-R1 |
| API Base URL | https://[endpoint-name].[region].models.ai.azure.com |

Anthropic Claude
| Configuration | Example |
|---|---|
| API Key | your-api-key |
| Model | claude-3-sonnet-20240229 |
| API Base URL | https://api.anthropic.com/v1 (Optional) |

Google Gemini
| Configuration | Example |
|---|---|
| API Key | your-api-key |
| Model | gemini-2.0-flash-thinking-exp-1219 |
| API Base URL | https://generativelanguage.googleapis.com/v1beta (Optional) |

Azure OpenAI

For Azure OpenAI Service, apiBaseUrl should be set to the format https://[YOUR-ENDPOINT-NAME].openai.azure.com/openai/deployments/[YOUR-DEPLOYMENT-NAME].

| Configuration | Example |
|---|---|
| API Key | your-api-key |
| Model | gpt-4o |
| API Base URL | https://endpoint-name.openai.azure.com/openai/deployments/deployment-name |

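As a settings.json sketch (assuming the Azure provider value from the Configurations section; the endpoint name, deployment name, and key are placeholders):

```json
{
  "chatgpt.gpt3.provider": "Azure",
  // Placeholder -- substitute your real Azure OpenAI key
  "chatgpt.gpt3.apiKey": "your-api-key",
  "chatgpt.gpt3.model": "gpt-4o",
  "chatgpt.gpt3.apiBaseUrl": "https://endpoint-name.openai.azure.com/openai/deployments/deployment-name"
}
```
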
GitHub Copilot

GitHub Copilot is supported with built-in authentication (a popup will ask for your permission when using GitHub Copilot models).

Supported Models:

  • OpenAI Models: gpt-3.5-turbo, gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, gpt-4.1, gpt-4.5
  • Reasoning Models: o1-ga, o3-mini, o3, o4-mini
  • Claude Models: claude-3.5-sonnet, claude-3.7-sonnet, claude-3.7-sonnet-thought, claude-sonnet-4, claude-opus-4
  • Gemini Models: gemini-2.0-flash, gemini-2.5-pro
| Configuration | Example |
|---|---|
| Provider | GitHubCopilot |
| API Key | github |
| Model | custom |
| Custom Model | claude-sonnet-4 |

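The same setup expressed as a settings.json sketch, using the chatgpt.gpt3.* keys from the Configurations section (the custom model name is one of the supported models listed above):

```json
{
  "chatgpt.gpt3.provider": "GitHubCopilot",
  "chatgpt.gpt3.apiKey": "github",
  "chatgpt.gpt3.model": "custom",
  "chatgpt.gpt3.customModel": "claude-sonnet-4"
}
```
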
GitHub Models

For GitHub Models, get your GitHub token from here.

| Configuration | Example |
|---|---|
| API Key | your-github-token |
| Model | o1 |
| API Base URL | https://models.inference.ai.azure.com |

OpenAI-compatible Models

To use OpenAI-compatible APIs, you need to set a custom model name: set model to "custom" and then specify your custom model name.

Example for groq:

| Configuration | Example |
|---|---|
| API Key | your-groq-key |
| Model | custom |
| Custom Model | mixtral-8x7b-32768 |
| API Base URL | https://api.groq.com/openai/v1 |

DeepClaude (DeepSeek + Claude)
| Configuration | Example |
|---|---|
| API Key | your-api-key |
| Model | claude-3-sonnet-20240229 |
| API Base URL | https://api.anthropic.com/v1 (Optional) |
| Reasoning API Key | your-deepseek-api-key |
| Reasoning Model | deepseek-reasoner (or deepseek-r1, depending on your provider) |
| Reasoning API Base URL | https://api.deepseek.com (or your own base URL) |
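A settings.json sketch combining the chat and reasoning keys listed in the Configurations section (both API keys are placeholders):

```json
{
  // Chat model (Claude) -- placeholder key
  "chatgpt.gpt3.apiKey": "your-api-key",
  "chatgpt.gpt3.model": "claude-3-sonnet-20240229",
  "chatgpt.gpt3.apiBaseUrl": "https://api.anthropic.com/v1",
  // Reasoning model (DeepSeek) -- placeholder key
  "chatgpt.gpt3.reasoning.apiKey": "your-deepseek-api-key",
  "chatgpt.gpt3.reasoning.model": "deepseek-reasoner",
  "chatgpt.gpt3.reasoning.apiBaseUrl": "https://api.deepseek.com"
}
```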

Commands & Keyboard Shortcuts

The extension provides various commands accessible through the Command Palette (Ctrl+Shift+P / Cmd+Shift+P) and keyboard shortcuts.

Context Menu Commands

Right-click on selected code to access these commands.

| Command | Keyboard Shortcut | Description |
|---|---|---|
| Generate Code | Ctrl+Shift+A / Cmd+Shift+A | Generate code based on comments or requirements |
| Add Tests | Ctrl+K Ctrl+Shift+1 / Cmd+K Cmd+Shift+1 | Generate unit tests for selected code |
| Find Problems | Ctrl+K Ctrl+Shift+2 / Cmd+K Cmd+Shift+2 | Analyze code for bugs and issues |
| Optimize | Ctrl+K Ctrl+Shift+3 / Cmd+K Cmd+Shift+3 | Optimize and improve selected code |
| Explain | Ctrl+K Ctrl+Shift+4 / Cmd+K Cmd+Shift+4 | Explain how the selected code works |
| Add Comments | Ctrl+K Ctrl+Shift+5 / Cmd+K Cmd+Shift+5 | Add documentation comments to code |
| Complete Code | Ctrl+K Ctrl+Shift+6 / Cmd+K Cmd+Shift+6 | Complete partial or incomplete code |
| Ad-hoc Prompt | Ctrl+K Ctrl+Shift+7 / Cmd+K Cmd+Shift+7 | Use custom prompt with selected code |
| Custom Prompt 1 | Ctrl+K Ctrl+Shift+8 / Cmd+K Cmd+Shift+8 | Apply your first custom prompt |
| Custom Prompt 2 | Ctrl+K Ctrl+Shift+9 / Cmd+K Cmd+Shift+9 | Apply your second custom prompt |

General Commands

| Command | Description |
|---|---|
| ChatGPT: Ask anything | Open input box to ask any question |
| ChatGPT: Reset session | Clear current conversation and start fresh |
| ChatGPT: Clear conversation | Clear the conversation history |
| ChatGPT: Export conversation | Export chat history to a Markdown file |
| ChatGPT: Manage Prompts | Open the prompt management interface |
| ChatGPT: Toggle Prompt Manager | Show/hide the prompt manager panel |
| Add Current File to Chat Context | Add the currently open file to chat context |
| ChatGPT: Open MCP Servers | Manage Model Context Protocol servers |

Prompt Management

  • Use # followed by prompt name to search and apply saved prompts
  • Use @ to add files to your conversation context
  • Access the Prompt Manager through the sidebar for full prompt management

Model Context Protocol (MCP)

The extension supports the Model Context Protocol (MCP), allowing you to extend AI capabilities with custom tools and integrations.

What is MCP?

MCP enables AI models to securely connect to external data sources and tools, providing:

  • Custom Tools: Integrate your own tools and APIs
  • Data Sources: Connect to databases, file systems, APIs, and more
  • Secure Execution: Sandboxed tool execution environment
  • Multi-Step Workflows: Agent-like behavior with tool chaining

MCP Server Types

The extension supports three types of MCP servers:

| Type | Description | Use Case |
|---|---|---|
| stdio | Standard input/output communication | Local command-line tools and scripts |
| sse | Server-Sent Events over HTTP | Web-based tools and APIs |
| streamable-http | HTTP streaming communication | Real-time data sources |

How to configure MCP?

  1. Access MCP Manager: Use ChatGPT: Open MCP Servers command or click the MCP icon in the sidebar
  2. Add MCP Server: Configure your MCP servers with:
    • Name: Unique identifier for the server
    • Type: Choose from stdio, sse, or streamable-http
    • Command/URL: Executable path or HTTP endpoint
    • Arguments: Command-line arguments (for stdio)
    • Environment Variables: Custom environment settings
    • Headers: HTTP headers (for sse/streamable-http)

Example MCP Configurations

File System Access (stdio):

{
  "name": "filesystem",
  "type": "stdio",
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/directory"],
  "isEnabled": true
}

Web Search (sse):

{
  "name": "web-search",
  "type": "sse",
  "url": "https://api.example.com/mcp/search",
  "headers": {"Authorization": "Bearer your-token"},
  "isEnabled": true
}
Agent Mode

When MCP servers are enabled, the extension operates in Agent Mode:

  • Max Steps: Configure up to 15 tool execution steps
  • Tool Chaining: Automatic multi-step workflows
  • Error Handling: Robust error recovery and retry logic
  • Progress Tracking: Real-time tool execution feedback
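The step limit maps to a single setting, chatgpt.gpt3.maxSteps, listed under Agent & MCP Configuration below. A settings.json sketch lowering it from the default of 15:

```json
{
  // Cap agent-mode tool execution at 10 steps instead of the default 15
  "chatgpt.gpt3.maxSteps": 10
}
```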

Configurations

Full list of configuration options

Core Configuration

| Setting | Default | Description |
|---|---|---|
| chatgpt.gpt3.provider | Auto | AI provider: Auto, OpenAI, Azure, AzureAI, Anthropic, GitHubCopilot, Google, Mistral, xAI, Together, DeepSeek, Groq, Perplexity, OpenRouter, Ollama |
| chatgpt.gpt3.apiKey | | API key for your chosen provider |
| chatgpt.gpt3.apiBaseUrl | https://api.openai.com/v1 | API base URL for your provider |
| chatgpt.gpt3.model | gpt-4o | Model to use for conversations |
| chatgpt.gpt3.customModel | | Custom model name when using the custom model option |
| chatgpt.gpt3.organization | | Organization ID (OpenAI only) |

Model Parameters

| Setting | Default | Description |
|---|---|---|
| chatgpt.gpt3.maxTokens | 0 (unlimited) | Maximum tokens to generate in a completion |
| chatgpt.gpt3.temperature | 1 | Sampling temperature (0-2); higher = more creative |
| chatgpt.gpt3.top_p | 1 | Nucleus sampling parameter (0-1) |
| chatgpt.systemPrompt | | System prompt for the AI assistant |

DeepClaude (Reasoning + Chat) Configuration

| Setting | Default | Description |
|---|---|---|
| chatgpt.gpt3.reasoning.provider | Auto | Provider for reasoning model (Auto, OpenAI, Azure, AzureAI, Google, DeepSeek, Groq, OpenRouter, Ollama) |
| chatgpt.gpt3.reasoning.apiKey | | API key for reasoning model |
| chatgpt.gpt3.reasoning.apiBaseUrl | https://api.openai.com/v1 | API base URL for reasoning model |
| chatgpt.gpt3.reasoning.model | | Model to use for reasoning (e.g., deepseek-reasoner, o1) |
| chatgpt.gpt3.reasoning.organization | | Organization ID for reasoning model (OpenAI only) |

Agent & MCP Configuration

| Setting | Default | Description |
|---|---|---|
| chatgpt.gpt3.maxSteps | 15 | Maximum steps for agent mode when using MCP servers |

Feature Toggles

| Setting | Default | Description |
|---|---|---|
| chatgpt.gpt3.generateCode-enabled | true | Enable code generation context menu |
| chatgpt.gpt3.searchGrounding.enabled | false | Enable search grounding (Gemini models only) |
| chatgpt.gpt3.responsesAPI.enabled | false | Enable the OpenAI Responses API (OpenAI/Azure OpenAI models only) |

Prompt Prefixes & Context Menu

| Setting | Default | Description |
|---|---|---|
| chatgpt.promptPrefix.addTests | Implement tests for the following code | Prompt for generating unit tests |
| chatgpt.promptPrefix.addTests-enabled | true | Enable "Add Tests" context menu item |
| chatgpt.promptPrefix.findProblems | Find problems with the following code | Prompt for finding bugs and issues |
| chatgpt.promptPrefix.findProblems-enabled | true | Enable "Find Problems" context menu item |
| chatgpt.promptPrefix.optimize | Optimize the following code | Prompt for code optimization |
| chatgpt.promptPrefix.optimize-enabled | true | Enable "Optimize" context menu item |
| chatgpt.promptPrefix.explain | Explain the following code | Prompt for code explanation |
| chatgpt.promptPrefix.explain-enabled | true | Enable "Explain" context menu item |
| chatgpt.promptPrefix.addComments | Add comments for the following code | Prompt for adding documentation |
| chatgpt.promptPrefix.addComments-enabled | true | Enable "Add Comments" context menu item |
| chatgpt.promptPrefix.completeCode | Complete the following code | Prompt for code completion |
| chatgpt.promptPrefix.completeCode-enabled | true | Enable "Complete Code" context menu item |
| chatgpt.promptPrefix.adhoc-enabled | true | Enable "Ad-hoc Prompt" context menu item |
| chatgpt.promptPrefix.customPrompt1 | | Your first custom prompt template |
| chatgpt.promptPrefix.customPrompt1-enabled | false | Enable first custom prompt in context menu |
| chatgpt.promptPrefix.customPrompt2 | | Your second custom prompt template |
| chatgpt.promptPrefix.customPrompt2-enabled | false | Enable second custom prompt in context menu |
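For example, a first custom prompt could be wired up in settings.json like this (the prompt text is a hypothetical example; substitute your own):

```json
{
  // Hypothetical prompt template -- replace with your own wording
  "chatgpt.promptPrefix.customPrompt1": "Rewrite the following code in idiomatic TypeScript",
  "chatgpt.promptPrefix.customPrompt1-enabled": true
}
```

Once enabled, the prompt appears as "Custom Prompt 1" in the context menu with the Ctrl+K Ctrl+Shift+8 / Cmd+K Cmd+Shift+8 shortcut listed above.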

User Interface

| Setting | Default | Description |
|---|---|---|
| chatgpt.response.showNotification | false | Show notification when AI responds |
| chatgpt.response.autoScroll | true | Auto-scroll to bottom when new content is added |

How to install locally

We highly recommend installing the extension directly from the VS Code Marketplace for the easiest setup and automatic updates. However, for advanced users, building and installing locally is also an option.

  • Install vsce (the Visual Studio Code Extension Manager) if you don't have it on your machine:
    • npm install --global vsce
  • Run vsce package
  • Follow the instructions and install manually:

npm run build
npm run package
code --uninstall-extension feiskyer.chatgpt-copilot
code --install-extension chatgpt-copilot-*.vsix

Acknowledgements

Inspired by the gencay/vscode-chatgpt project and made easily accessible thanks to the intuitive client provided by the Vercel AI Toolkit, this extension continues the open-source legacy, bringing seamless and robust AI functionality directly into the editor, telemetry-free.

License

This project is released under the ISC License - see LICENSE for details. The copyright notice and the respective permission notices must appear in all copies.

