MCP Client Implementation using Python, LangGraph and Gemini

by theailanguage/mcp_client

🚀 MCP Client with Gemini AI

📢 Subscribe to The AI Language on YouTube!

Welcome! This project features multiple MCP clients integrated with Google Gemini AI to execute tasks via the Model Context Protocol (MCP), with and without LangChain.

Happy building, and don't forget to subscribe!

MCP Client Options

This repository includes four MCP client options for various use cases:

| Option | Client Script                   | LangChain | Config Support | Transport         | Tutorial         |
|--------|---------------------------------|-----------|----------------|-------------------|------------------|
| 1      | client.py                       | ❌        | ❌             | STDIO             | Legacy Client    |
| 2      | langchain_mcp_client.py         | ✅        | ❌             | STDIO             | LangChain Client |
| 3      | langchain_mcp_client_wconfig.py | ✅        | ✅             | STDIO             | Multi-Server     |
| 4      | client_sse.py                   | ❌        | ❌             | SSE (Local & Web) | SSE Client       |

If you want to add or reuse MCP Servers, check out the MCP Servers repo.


✪ Features

✅ Connects to an MCP server (STDIO or SSE)
✅ Uses Google Gemini AI to interpret user prompts
✅ Allows Gemini to call MCP tools via the server
✅ Executes tool commands and returns results
✅ (Upcoming) Maintains context and history for conversations


Running the MCP Client

Choose the appropriate command for your preferred client:

  • Legacy STDIO: uv run client.py path/to/server.py
  • LangChain STDIO: uv run langchain_mcp_client.py path/to/server.py
  • LangChain Multi-Server STDIO: uv run langchain_mcp_client_wconfig.py path/to/config.json
  • SSE Client: uv run client_sse.py sse_server_url
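For the multi-server client, the config.json passed on the command line lists the servers to launch. A minimal sketch, assuming the common MCP configuration shape (the server name and path below are placeholders, not files shipped with this repo; check the script for the exact schema it expects):

```json
{
  "mcpServers": {
    "example_server": {
      "command": "uv",
      "args": ["run", "path/to/server.py"]
    }
  }
}
```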

Project Structure

mcp-client-gemini/
├── client.py                       # Basic client (STDIO)
├── langchain_mcp_client.py         # LangChain + Gemini
├── langchain_mcp_client_wconfig.py # LangChain + config.json (multi-server)
├── client_sse.py                   # SSE transport client (local or remote)
├── .env                            # API key environment file
├── README.md                       # Project documentation
├── requirements.txt                # Dependency list
├── .gitignore                      # Git ignore rules
└── LICENSE                         # License information

How It Works

  1. You send a prompt:

    Create a file named test.txt

  2. The prompt is sent to Google Gemini AI.
  3. Gemini uses available MCP tools to determine a response.
  4. The selected tool is executed on the connected server.
  5. The AI returns results and maintains conversation context (if supported).
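The loop above can be sketched in plain Python. Everything here is a stand-in: fake_model substitutes for Gemini's function-calling response and TOOLS for the MCP server's tool list; the real clients wire these steps to the Gemini and MCP APIs instead.

```python
# Sketch of the prompt -> model -> tool -> result loop (steps 1-5 above).
# All names here are illustrative, not part of the real Gemini or MCP APIs.

def create_file_tool(name: str) -> str:
    """Stand-in MCP tool: report that a file would be created."""
    return f"created {name}"

# Stand-in for the tool list the MCP server advertises (step 3).
TOOLS = {"create_file": create_file_tool}

def fake_model(prompt: str) -> dict:
    """Stand-in for Gemini: return a structured tool call for the prompt."""
    # A real model emits a function call; we hard-code the choice here.
    filename = prompt.rsplit(" ", 1)[-1]
    return {"tool": "create_file", "args": {"name": filename}}

def handle_prompt(prompt: str) -> str:
    call = fake_model(prompt)      # steps 2-3: model picks a tool
    tool = TOOLS[call["tool"]]     # step 4: look up the tool on the server
    return tool(**call["args"])    # execute it and return the result (step 5)

result = handle_prompt("Create a file named test.txt")
# result == "created test.txt"
```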

๐Ÿค Contributing

At this time, this project does not accept external code contributions.

This is to keep licensing simple and avoid any shared copyright.

You're very welcome to:

✅ Report bugs or request features (via GitHub Issues)
✅ Fork the repo and build your own version
✅ Suggest documentation improvements

If you'd like to collaborate in another way, feel free to open a discussion!

Install

To register this client in an MCP host's configuration file:

{
  "mcpServers": {
    "mcpclient": {
      "command": "uv",
      "args": [
        "run",
        "client.py",
        "path/to/server.py"
      ]
    }
  }
}
