VictoriaMetrics MCP Server


The implementation of Model Context Protocol (MCP) server for VictoriaMetrics.

It provides access to your VictoriaMetrics instance and seamless integration with VictoriaMetrics APIs and documentation. It offers a comprehensive interface for monitoring, observability, and debugging tasks related to your VictoriaMetrics instances, and enables advanced automation and interaction capabilities for engineers and tools.

Features

This MCP server exposes almost all of the read-only APIs of VictoriaMetrics, i.e. all the functions available in VMUI:

  • Querying metrics and exploring data (even drawing graphs if your client supports it)
  • Listing and exporting available metrics, labels, label values and entire series
  • Analyzing and testing your alerting and recording rules and alerts
  • Showing parameters of your VictoriaMetrics instance
  • Exploring cardinality of your data and metrics usage statistics
  • Analyzing, tracing, prettifying and explaining your queries
  • Debugging your relabeling rules, downsampling and retention policy configurations
  • Integration with VictoriaMetrics Cloud

In addition, the MCP server contains embedded up-to-date documentation and is able to search it without online access.

More details about the exact available tools and prompts can be found in the Usage section.

You can combine tool calls and documentation search in your prompts and invent powerful usage scenarios for your VictoriaMetrics instance. Check the Dialog example section to see how this can work. Note that the quality of the MCP server and its responses depends heavily on the capabilities of your client and the quality of the model you are using.

You can also combine this MCP server with other observability- or documentation-related MCP servers for even more powerful results.

Try without installation

There is a publicly available instance of the VictoriaMetrics MCP Server that you can use to test the features without installing it:

https://play-mcp.victoriametrics.com/mcp

It's available in Streamable HTTP mode and is configured to work with the public VictoriaMetrics Playground.

Here is an example configuration for Claude Desktop:

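A minimal sketch of such a configuration, assuming you bridge Claude Desktop's stdio transport to the remote endpoint with the third-party mcp-remote package (npx and mcp-remote are assumptions here, not part of this project; clients with native Streamable HTTP support can point at the URL directly):

{
  "mcpServers": {
    "victoriametrics-playground": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://play-mcp.victoriametrics.com/mcp"]
    }
  }
}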

Requirements

Installation

Go

go install github.com/VictoriaMetrics-Community/mcp-victoriametrics/cmd/mcp-victoriametrics@latest

Binaries

Just download the latest release from the Releases page and put it in your PATH.

Example for Linux x86_64 (note that other architectures and platforms are also available):

latest=$(curl -s https://api.github.com/repos/VictoriaMetrics-Community/mcp-victoriametrics/releases/latest | grep 'tag_name' | cut -d\" -f4)
wget https://github.com/VictoriaMetrics-Community/mcp-victoriametrics/releases/download/$latest/mcp-victoriametrics_Linux_x86_64.tar.gz
tar axvf mcp-victoriametrics_Linux_x86_64.tar.gz
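
For example, a minimal sketch of putting the extracted binary on your PATH (the /usr/local/bin destination and the binary name are assumptions; adjust them to your system):

# assumes /usr/local/bin is in your PATH and the archive contains a binary named mcp-victoriametrics
sudo install -m 0755 mcp-victoriametrics /usr/local/bin/mcp-victoriametrics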

Docker

You can run VictoriaMetrics MCP Server using Docker.

This is the easiest way to get started without needing to install Go or build from source.

docker run -d --name mcp-victoriametrics \
  -e MCP_SERVER_MODE=sse \
  -e VM_INSTANCE_ENTRYPOINT=https://play.victoriametrics.com \
  -e VM_INSTANCE_TYPE=cluster \
  ghcr.io/victoriametrics-community/mcp-victoriametrics

Replace the environment variable values with your own parameters.

Note that MCP_SERVER_MODE=sse enables Server-Sent Events mode, which MCP clients use to connect. Alternatively, you can use MCP_SERVER_MODE=http to enable Streamable HTTP mode. More details about server modes can be found in the Configuration section.
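
For example, a sketch of running the server in Streamable HTTP mode with the port published to the host (the -p mapping and the MCP_LISTEN_ADDR value are assumptions; the default localhost:8080 would not be reachable from outside the container):

docker run -d --name mcp-victoriametrics-http \
  -p 8080:8080 \
  -e MCP_SERVER_MODE=http \
  -e MCP_LISTEN_ADDR=0.0.0.0:8080 \
  -e VM_INSTANCE_ENTRYPOINT=https://play.victoriametrics.com \
  -e VM_INSTANCE_TYPE=cluster \
  ghcr.io/victoriametrics-community/mcp-victoriametrics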

See the available Docker images in the GitHub registry.

Also see the Using Docker instead of binary section for more details about using the Docker image with MCP clients in stdio mode.

Source Code

To build the binary from source code, you can use the following approach:

  • Clone repo:

    git clone https://github.com/VictoriaMetrics-Community/mcp-victoriametrics.git
    cd mcp-victoriametrics
    
  • Build binary from cloned source code:

    make build
    # after that you can find the mcp-victoriametrics binary and copy it to your PATH or run it in place
    
  • Build image from cloned source code:

    docker build -t mcp-victoriametrics .
    # after that you can use the mcp-victoriametrics image for running or pushing
    

Smithery

To install the VictoriaMetrics MCP Server for your client automatically via Smithery, you can use the following commands:

# Get the list of supported MCP clients
npx -y @smithery/cli list clients
#Available clients:
#  claude
#  cline
#  windsurf
#  roocode
#  witsy
#  enconvo
#  cursor
#  vscode
#  vscode-insiders
#  boltai
#  amazon-bedrock

# Install VictoriaMetrics MCP server for your client
npx -y @smithery/cli install @VictoriaMetrics-Community/mcp-victoriametrics --client <YOUR-CLIENT-NAME>
# and follow the instructions

Configuration

MCP Server for VictoriaMetrics is configured via environment variables:

| Variable | Description | Required | Default | Allowed values |
|---|---|---|---|---|
| VM_INSTANCE_ENTRYPOINT | URL to VictoriaMetrics instance | Yes (if you don't use VMC_API_KEY) | - | - |
| VM_INSTANCE_TYPE | Type of VictoriaMetrics instance | Yes (if you don't use VMC_API_KEY) | - | single, cluster |
| VM_INSTANCE_BEARER_TOKEN | Authentication token for VictoriaMetrics API | No | - | - |
| VMC_API_KEY | API key from VictoriaMetrics Cloud Console | No | - | - |
| MCP_SERVER_MODE | Server operation mode. See Modes for details. | No | stdio | stdio, sse, http |
| MCP_LISTEN_ADDR | Address for SSE or HTTP server to listen on | No | localhost:8080 | - |
| MCP_DISABLED_TOOLS | Comma-separated list of tools to disable | No | - | - |

You can use two options to connect to your VictoriaMetrics instance:

  • Using VM_INSTANCE_ENTRYPOINT + VM_INSTANCE_TYPE + VM_INSTANCE_BEARER_TOKEN (optional) environment variables to connect to any single-node or cluster instance of VictoriaMetrics.
  • Using VMC_API_KEY environment variable to work with your VictoriaMetrics Cloud instances.

Modes

MCP Server supports the following modes of operation (transports):

  • stdio - Standard input/output mode, where the server reads commands from standard input and writes responses to standard output. This is the default mode and is suitable for local servers.
  • sse - Server-Sent Events. The server will expose the /sse and /message endpoints for SSE connections.
  • http - Streamable HTTP. The server will expose the /mcp endpoint for HTTP connections.

More info about transports can be found in the MCP docs.

Configuration examples

# For a single-node instance
export VM_INSTANCE_ENTRYPOINT="http://localhost:8428"
export VM_INSTANCE_TYPE="single"
export VM_INSTANCE_BEARER_TOKEN="your-token"

# For a cluster
export VM_INSTANCE_ENTRYPOINT="https://play.victoriametrics.com"
export VM_INSTANCE_TYPE="cluster"
export MCP_DISABLED_TOOLS="export,metric_statistics,test_rules" # disable export, statistics and rules unit test tools

# For VictoriaMetrics Cloud
export VMC_API_KEY="<your-api-key>"

# Server mode
export MCP_SERVER_MODE="sse"
export MCP_LISTEN_ADDR="0.0.0.0:8080"
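
With these variables exported, a sketch of launching the server itself (assuming the binary is in your PATH):

# stdio mode needs no manual start: the MCP client launches the binary itself.
# In sse/http modes, run it yourself and it will listen on MCP_LISTEN_ADDR:
mcp-victoriametrics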

Endpoints

In SSE and HTTP modes the MCP server provides the following endpoints:

| Endpoint | Description |
|---|---|
| /sse + /message | Endpoints for messages in SSE mode (for MCP clients that support SSE) |
| /mcp | HTTP endpoint for streaming messages in HTTP mode (for MCP clients that support Streamable HTTP) |
| /metrics | Metrics in Prometheus format for monitoring the MCP server |
| /health/liveness | Liveness check endpoint to ensure the server is running |
| /health/readiness | Readiness check endpoint to ensure the server is ready to accept requests |
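
For example, with the server running in SSE or HTTP mode on localhost:8080 (the address is an assumption; use your own MCP_LISTEN_ADDR), you can check these endpoints from the shell:

curl -s http://localhost:8080/health/liveness
curl -s http://localhost:8080/health/readiness
curl -s http://localhost:8080/metrics | head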

Setup in clients

Cursor

Go to: Settings -> Cursor Settings -> MCP -> Add new global MCP server and paste the following configuration into your Cursor ~/.cursor/mcp.json file:

{
  "mcpServers": {
    "victoriametrics": {
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}

See Cursor MCP docs for more info.

Claude Desktop

Add this to your Claude Desktop claude_desktop_config.json file (you can find it via Settings -> Developer -> Edit config):

{
  "mcpServers": {
    "victoriametrics": {
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}

See Claude Desktop MCP docs for more info.

Claude Code

Run the command:

claude mcp add victoriametrics \
  -e VM_INSTANCE_ENTRYPOINT=<YOUR_VM_INSTANCE> \
  -e VM_INSTANCE_TYPE=<YOUR_VM_INSTANCE_TYPE> \
  -e VM_INSTANCE_BEARER_TOKEN=<YOUR_VM_BEARER_TOKEN> \
  -- /path/to/mcp-victoriametrics

See Claude Code MCP docs for more info.

Visual Studio Code

Add this to your VS Code MCP config file:

{
  "servers": {
    "victoriametrics": {
      "type": "stdio",
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}

See VS Code MCP docs for more info.

Zed

Add the following to your Zed config file:

  "context_servers": {
    "victoriametrics": {
      "command": {
        "path": "/path/to/mcp-victoriametrics",
        "args": [],
        "env": {
          "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
          "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
          "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
        }
      },
      "settings": {}
    }
  }

See Zed MCP docs for more info.

JetBrains IDEs

  • Open Settings -> Tools -> AI Assistant -> Model Context Protocol (MCP).
  • Click Add (+)
  • Select As JSON
  • Paste the following into the input field:
{
  "mcpServers": {
    "victoriametrics": {
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}

Windsurf

Add the following to your Windsurf MCP config file:

{
  "mcpServers": {
    "victoriametrics": {
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}

See Windsurf MCP docs for more info.

Using Docker instead of binary

You can run the VictoriaMetrics MCP server using Docker instead of a local binary.

To do this, replace the command in the configuration examples above as follows:

{
  "mcpServers": {
    "victoriametrics": {
      "command": "docker",
      "args": [
        "run",
        "-i", "--rm",
        "-e", "VM_INSTANCE_ENTRYPOINT",
        "-e", "VM_INSTANCE_TYPE",
        "-e", "VM_INSTANCE_BEARER_TOKEN",
        "ghcr.io/victoriametrics-community/mcp-victoriametrics"
      ],
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}

Usage

After installing and configuring the MCP server, you can start using it with your favorite MCP client.

You can start a dialog with the AI assistant with the phrase:

Use MCP VictoriaMetrics in the following answers

But it's not required: you can just start asking questions, and the assistant will automatically use the tools and documentation to provide the best answers. Take a look at the Dialog example section to better understand what you can do with it.

Toolset

MCP VictoriaMetrics provides numerous tools for interacting with your VictoriaMetrics instance.

Here's a list of the commonly available tools:

| Tool | Description |
|---|---|
| query | Execute instant PromQL/MetricsQL queries |
| query_range | Execute range PromQL/MetricsQL queries over a time period |
| metrics | List available metrics |
| labels | List available label names |
| label_values | List values for a specific label |
| series | List available time series |
| export | Export raw time series data to JSON or CSV |
| rules | View alerting and recording rules |
| alerts | View current alerts (firing and pending) |
| flags | View non-default flags of the VictoriaMetrics instance |
| metric_statistics | Get metrics usage (in queries) statistics |
| active_queries | View currently executing queries |
| top_queries | View most frequent or slowest queries |
| tsdb_status | View TSDB cardinality statistics |
| tenants | List available tenants in multi-tenant cluster setup |
| documentation | Search the embedded VictoriaMetrics documentation |
| metric_relabel_debug | Debug Prometheus-compatible relabeling rules |
| downsampling_filters_debug | Debug downsampling configuration |
| retention_filters_debug | Debug retention filters configuration |
| prettify_query | Prettify and format PromQL/MetricsQL queries |
| explain_query | Parse PromQL/MetricsQL queries and explain how they work |
| test_rules | Unit-test alerting and recording rules using the vmalert tool |

Here are some additional tools that are available to VictoriaMetrics Cloud users (when the VMC_API_KEY parameter is specified):

| Tool | Description |
|---|---|
| deployments | List available deployments in VictoriaMetrics Cloud |
| cloud_providers | List available cloud providers in VictoriaMetrics Cloud |
| regions | List available cloud provider regions in VictoriaMetrics Cloud |
| tiers | List available deployment tiers in VictoriaMetrics Cloud |
| access_tokens | List available deployment access tokens in VictoriaMetrics Cloud |
| rule_filenames | List available alerting and recording rule filenames in VictoriaMetrics Cloud |
| rule_file | Get the content of a specific alerting or recording rule file in VictoriaMetrics Cloud |

Prompts

The server includes pre-defined prompts for common tasks.

These are just examples for now; the prompt library will be expanded in the future:

| Prompt | Description |
|---|---|
| unused_metrics | Find metrics that are never queried and create relabel configs to stop ingesting them |
| documentation | Search VictoriaMetrics documentation for specific topics |
| rarely_used_metrics_with_high_cardinality | Identify high-cardinality metrics that are rarely queried |

Dialog example

This is just an example of a dialog with an assistant using the MCP server, to help you understand how it can work (it is, however, a real dialog using the VictoriaMetrics MCP server with Claude Desktop, configured against the public playground instance):

  • You: What tenants are available in my VM?

  • Assistant: In your VictoriaMetrics instance, the following tenants are available: 0:0 (default tenant), 1:0, 10:0, 100:0.

  • You: From now on, please use only default tenant

  • Assistant: I understand. From now on, I'll use only the default tenant (0:0) for all operations with your VictoriaMetrics instance.

  • You: Which metrics do we have?

  • Assistant: There are many metrics available in your VictoriaMetrics instance. The list includes metrics from various systems and components, such as:

    • System metrics (node_, machine_, etc.),
    • Kubernetes metrics (kube_, container_, apiserver_, etc.),
    • VictoriaMetrics-specific metrics (vm_, vmagent_, vmalert_, etc.),
    • Go runtime metrics (go_),
    • Prometheus-related metrics (scrape_, promhttp_),
    • Various application metrics (argocd_, alertmanager_).

    This is a comprehensive list of over 2000 metrics that are available for querying. Is there a specific type of metric you're interested in exploring further?

  • You: Ok, let's work with Kubernetes metrics from now on. Please show me which of these metrics are never queried?

  • Assistant: Based on the results, I can see that there are two Kubernetes metrics that have never been queried (query count is 0):

    • kube_persistentvolumeclaim_deletion_timestamp - This metric provides information about when PersistentVolumeClaims were marked for deletion.
    • kube_pod_init_container_status_last_terminated_reason - This metric shows the reason why init containers in pods were last terminated.

    These metrics might be valuable for monitoring and troubleshooting purposes:

    • The PVC deletion timestamp metric could help track deletion patterns or identify stuck PVCs.
    • The init container termination reason metric could help identify common failure patterns in your initialization containers.

    Would you like me to explain more about what these metrics track or suggest some useful queries to start monitoring them?

  • You: Yes, please. And also, can you create relabeling rules to stop ingesting these metrics?

  • Assistant: I'll create a relabel configuration to exclude these never-queried metrics from collection. This will help reduce storage and processing overhead. Here's the relabel configuration you can add to your VictoriaMetrics scrape configuration:

    relabel_configs:
    - action: drop
      source_labels: [__name__]
      regex: kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason
    

    This configuration will:

    1. Look at each metric's name (the __name__ label)
    2. Check if it matches either of the specified metrics
    3. Drop the metrics that match, preventing them from being collected and stored

    You can add this to your VictoriaMetrics configuration file or to the relevant scrape job configurations in your Prometheus configuration if you're using Prometheus as a scraper for VictoriaMetrics. If you want to test this relabel configuration before implementing it, you can use the metric relabel debug tool to see how it would affect your metrics.

  • You: Yes, please debug this relabel config on real time series to see how it would affect my metrics

  • Assistant: Perfect! The relabel config works as expected:

    • The first two test cases show that both kube_persistentvolumeclaim_deletion_timestamp and kube_pod_init_container_status_last_terminated_reason metrics are correctly dropped (resulting labels are empty {}).
    • The third test case confirms that other metrics like kube_pod_status_phase are preserved and not affected by our drop rule.

    You can confidently add this relabel configuration to your VictoriaMetrics or Prometheus configuration to stop collecting these unused metrics. This will save storage space and reduce processing overhead without affecting your current monitoring capabilities.

  • You: But I'm using the VMAgent resource of the VictoriaMetrics operator, could you please give me the part of the specification for my agent with this relabel config?

  • Assistant: You're right, since you're using the VictoriaMetrics Operator and the VMAgent custom resource, the configuration needs to be in the appropriate format for that CR. Here's how you can add the relabel configuration to your VMAgent specification:

    apiVersion: operator.victoriametrics.com/v1beta1
    kind: VMAgent
    metadata:
      name: your-vmagent-name
    spec:
      # ... your existing configuration ...
      
      # Add this section to configure global metric relabeling
      inlineRelabelConfig:
        - action: drop
          sourceLabels: ["__name__"]
          regex: kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason
      
      # ... rest of your configuration ...
    

    Key points:

    • In the VMAgent spec, you use inlineRelabelConfig for inline relabeling rules
    • The field is called sourceLabels (not source_labels as in Prometheus config)
    • This is a global relabeling that will be applied to all metrics before they're stored
  • You: I decided not to remove those metrics, but just lower the sample rate for them using downsampling. What downsampling parameters do I have now?

  • Assistant: I can see that you currently have the following downsampling configuration:

    -downsampling.period="7d:1m,30d:5m,180d:30m"
    

    This means:

    • Data newer than 7 days is kept at its original resolution
    • Data between 7-30 days old is downsampled to 1-minute resolution
    • Data between 30-180 days old is downsampled to 5-minute resolution
    • Data older than 180 days is downsampled to 30-minute resolution

    Now, if you want to add specific downsampling rules for those rarely-queried metrics, you can use the downsampling filters debug tool to test how new rules would work. Let's test a configuration where we apply more aggressive downsampling to those specific metrics:

    ...getting series and using downsampling filter debugger...

    Great! Now our test is working correctly. Based on the current configuration and our testing, here's a suggested downsampling configuration for your VMAgent CR that would apply more aggressive downsampling to those rarely-queried metrics:

    apiVersion: operator.victoriametrics.com/v1beta1
    kind: VMCluster
    metadata:
      name: monitoring-vm-cluster
    spec:
      # ... your existing configuration ...
      
      # Update the vmstorage section with the new downsampling parameters
      vmstorage:
        extraArgs:
          downsampling.period: '{__name__="kube_persistentvolumeclaim_deletion_timestamp"}:1d:10m,{__name__="kube_pod_init_container_status_last_terminated_reason"}:1d:10m,7d:1m,30d:5m,180d:30m'
      
      # ... rest of your configuration ...
    

    This configuration:

    • Applies 10-minute resolution downsampling for the rarely-queried metrics after just 1 day
    • Maintains your existing downsampling policy for all other metrics

    The order of the downsampling rules is important - more specific rules should be listed first.

During this dialog, the assistant was using the following tools:

  • tenants to get the list of available tenants
  • documentation to get information about functionality and data formats
  • metrics to get the list of available metrics
  • metric_statistics to get the information about metrics usage
  • series to get the time series for debugging
  • metric_relabel_debug to debug relabeling rules
  • flags to get the information about instance parameters
  • downsampling_filters_debug to debug downsampling configuration

But you can use any other tools and combine them in your own way.

Monitoring

In SSE and HTTP modes, the MCP server exposes metrics in Prometheus format (see Endpoints), and you can find a simple Grafana dashboard for these metrics in the repo.
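
As a sketch, a Prometheus-compatible scrape job for these metrics might look like the following (the job name and target are assumptions; point the target at your MCP_LISTEN_ADDR):

scrape_configs:
  - job_name: mcp-victoriametrics
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8080"]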

Roadmap

Disclaimer

AI services and agents, along with MCP servers like this one, cannot guarantee the accuracy, completeness, or reliability of their results. You should double-check any results obtained with AI.

The quality of the MCP Server and its responses depends very much on the capabilities of your client and the quality of the model you are using.

Contributing

Contributions to the MCP VictoriaMetrics project are welcome!

Please feel free to submit issues, feature requests, or pull requests.
