How to Decommission a vmstorage Node from a VictoriaMetrics Cluster

Problem

We need to gracefully remove a vmstorage node from a VictoriaMetrics cluster. Every vmstorage node holds its own portion of the data, so removing a vmstorage node from the cluster creates gaps in the graphs (because replication is out of scope).

Setup example

We have a VictoriaMetrics cluster with 2 vminsert, 2 vmselect and 3 vmstorage nodes. We want to gracefully remove vmstorage A from the cluster.
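
For reference, here is a minimal sketch of how such a cluster might be wired together, assuming hypothetical hostnames vmstorage-a, vmstorage-b, vmstorage-c and the default cluster ports:

```sh
# Every vminsert node sends incoming data to all three vmstorage nodes
/path/to/vminsert -storageNode=vmstorage-a:8400,vmstorage-b:8400,vmstorage-c:8400

# Every vmselect node reads data from all three vmstorage nodes
/path/to/vmselect -storageNode=vmstorage-a:8401,vmstorage-b:8401,vmstorage-c:8401
```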

Solution One

  1. Remove vmstorage A from the vminsert list
  2. Wait for the retention period to pass
  3. Remove vmstorage A from the cluster

Note: please expect higher resource usage on the existing vmstorage nodes (vmstorage B and vmstorage C), as they now need to handle all the incoming data.
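
In practice, step 1 amounts to restarting the vminsert nodes with vmstorage A dropped from the -storageNode list, while the vmselect nodes keep reading from it until its data expires. A sketch, using the same hypothetical hostnames as above:

```sh
# Step 1: restart every vminsert without vmstorage A; new data goes only to B and C
/path/to/vminsert -storageNode=vmstorage-b:8400,vmstorage-c:8400

# Keep vmstorage A in the vmselect list so its historical data stays queryable
# while it ages out
/path/to/vmselect -storageNode=vmstorage-a:8401,vmstorage-b:8401,vmstorage-c:8401

# Step 2: wait for the retention period (-retentionPeriod on vmstorage) to pass.
# Step 3: drop vmstorage A from the vmselect list and shut the node down.
```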

Pros: Simple implementation

Cons: You may need to wait for a long time (up to the full retention period)

Solution Two

  1. Remove vmstorage A from the vminsert list (same as in Solution One).
  2. Set up a dedicated vmselect node that knows only about the vmstorage node we want to remove (vmstorage A). We need this vmselect node to migrate data from vmstorage A to the other vmstorage nodes in the cluster.
  3. Use vmctl in native import/export mode to read data from that dedicated vmselect and write it back through the vminsert nodes (see the sketch after this list). This process creates duplicates.
  4. Turn on deduplication on the vmselect nodes.
  5. Remove vmstorage A from the cluster.

Note: Please expect higher resource usage on the existing nodes (vmstorage B and vmstorage C), as they now need to handle all the incoming data.
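
A minimal sketch of steps 2–4, assuming hypothetical hostnames, tenant 0 and the default cluster ports; the exact vmctl flags will likely need tuning for your data volume and time range:

```sh
# Step 2: a dedicated vmselect that sees only vmstorage A, used just for the migration
/path/to/vmselect -storageNode=vmstorage-a:8401 -httpListenAddr=:8481

# Step 3: vmctl in native mode reads all series from that dedicated vmselect
# and writes them back through the regular vminsert nodes (which no longer
# include vmstorage A), so the data lands on vmstorage B and C
/path/to/vmctl vm-native \
  --vm-native-src-addr=http://dedicated-vmselect:8481/select/0/prometheus \
  --vm-native-dst-addr=http://vminsert:8480/insert/0/prometheus \
  --vm-native-filter-match='{__name__!=""}' \
  --vm-native-filter-time-start=2023-01-01T00:00:00Z

# Step 4: the migration produces duplicate samples, so enable deduplication
# on the regular vmselect nodes
/path/to/vmselect -dedup.minScrapeInterval=30s \
  -storageNode=vmstorage-a:8401,vmstorage-b:8401,vmstorage-c:8401
```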

Pros: Faster way to decommission a vmstorage node.

Cons: The process is more complex compared to Solution One. The vmctl import/export process may require tuning if you migrate hundreds of GB of data (or more).

Hint: downsampling reduces the amount of data in a cluster; after downsampling, the vmctl migration has less data to transfer and takes less time.
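
Downsampling is configured via the -downsampling.period flag (a VictoriaMetrics Enterprise feature); for example, a hypothetical policy that keeps one sample per 5 minutes for data older than 30 days:

```sh
# Keep at most one sample per 5 minutes for data older than 30 days
/path/to/vmstorage -downsampling.period=30d:5m
```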

We trust that this is helpful!

Please let us know how you get on or if you have any questions by submitting a comment below.

