vmanomaly Deep Dive: Smarter Alerting with AI (Tech Talk Companion)

I was thrilled to host our latest tech talk, where we got to do a deep dive into vmanomaly with the best possible guests: Fred Navruzov, the team lead for the product, and my co-host, Matthias Palmersheim.
We covered a ton of ground, from high-level concepts to the nitty-gritty of configuration. For everyone who couldn’t make it, I wanted to share my personal recap of the most important technical takeaways from our conversation.
A topic that always comes up is alert fatigue. We’ve all seen those alerting rule sets that become pure “spaghetti code” — so complex and interconnected that nobody wants to touch them. The core of the problem is that traditional static thresholds just don’t have enough context.
As Fred explained, these rules fail when faced with real-world patterns like daily and weekly seasonality, gradual trends, and shifting baselines, where any fixed threshold is wrong for part of the day or week.
This is the problem vmanomaly was built to solve. It uses ML to learn what “normal” looks like for your systems, including all their seasonal quirks.
I love this distinction. vmanomaly doesn’t replace your alerting engine; it supercharges it.
Think of it this way: vmanomaly reads your raw metrics, learns their normal behavior, and writes a single new metric back to your TSDB: `anomaly_score`. This means your complex, hard-to-maintain alerting rules can be replaced with one beautifully simple expression in vmalert: `anomaly_score > 1`. That’s it. Now you’re alerting on a true deviation from the norm, not just an arbitrary number.
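To make that concrete, here is a minimal vmalert rule file built around that expression. The group name, alert name, `for` duration, and annotation text are my own illustrative choices, not from the talk:

```yaml
groups:
  - name: vmanomaly-alerts
    rules:
      - alert: AnomalyDetected
        # anomaly_score is the metric vmanomaly writes back;
        # a value above 1 means the observed data deviates
        # from the learned norm
        expr: anomaly_score > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Anomaly on {{ $labels.instance }} (score {{ $value }})"
```

The `for: 5m` clause means the alert only fires on a sustained deviation, which filters out single-point blips without any extra rule logic.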
Fred walked us through some recent architectural enhancements that make vmanomaly ready for serious production workloads.
While the models are smart, the real power comes when you apply your own business logic. We had a great discussion about how you, the engineer, can fine-tune the output.
Your main toolkit is a small set of model parameters, such as restricting the detection direction (so that only drops, or only spikes, count as anomalies) and requiring a minimum deviation from the expected value before a point is scored at all.
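As a sketch of where those knobs live, here is a pared-down vmanomaly config. The field names `detection_direction` and `min_dev_from_expected` follow the public docs, but the model choice, query, URLs, and values are illustrative assumptions; check them against the docs for your vmanomaly version:

```yaml
schedulers:
  periodic:
    class: periodic
    fit_every: 2h      # how often to retrain
    fit_window: 14d    # history used to learn "normal"
    infer_every: 1m
models:
  prophet:
    class: prophet
    # only score drops below the expected value as anomalies,
    # e.g. for request-rate metrics where spikes are fine
    detection_direction: below_expected
    # ignore deviations smaller than this, even if the model
    # finds them statistically unusual
    min_dev_from_expected: 0.05
reader:
  class: vm
  datasource_url: "http://victoriametrics:8428"
  sampling_period: 1m
  queries:
    rps: "sum(rate(vm_http_requests_total[2m]))"
writer:
  class: vm
  datasource_url: "http://victoriametrics:8428"
```

This is where the business logic lives: the same model can be strict for one metric and lenient for another just by tuning these per-model parameters.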
We got a fantastic question about how to handle gaps in data — for instance, if a device goes offline. The consensus was a two-part strategy rather than a single rule.
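The exact details are best taken from the talk itself, but as my own assumption of how such a two-part setup could look in vmalert, one common pattern pairs an explicit liveness alert with the anomaly rule, so that "no data" never masquerades as "normal data" (the metric name `device_temperature_celsius` is hypothetical):

```yaml
groups:
  - name: gap-handling
    rules:
      # Part 1: alert on silence explicitly, instead of asking
      # the model to reason about data that is not there
      - alert: DeviceOffline
        expr: absent_over_time(device_temperature_celsius[10m])
        labels:
          severity: critical
      # Part 2: the anomaly rule only evaluates points that exist,
      # so it resumes cleanly once the device comes back online
      - alert: DeviceAnomalous
        expr: anomaly_score > 1
        for: 5m
```

Separating the two concerns keeps each rule trivial to read, which is the whole point of replacing the old "spaghetti" rule sets.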
This was one of my favorite talks to host so far. It’s clear that vmanomaly is an incredibly powerful tool for adding an intelligent layer to your monitoring strategy.
To get started, I highly recommend checking out the official docs, especially the pages on the self-monitoring dashboard and the Grafana dashboard presets.
Thanks so much to Fred, Matthias, and everyone who joined us live. We’ll see you at the end of August for the next one!