Not All Telemetry Requires Premium Pricing

Observability in software is often framed as a choice between self-hosted and SaaS: manage it yourself, or pay a vendor to handle your data.
Both approaches have their merits, but assuming you must pick one exclusively leads to poor trade-offs: either overcommitting to an all-in-one SaaS despite spiraling costs, or fully self-hosting when it’s unnecessary.
Observability shouldn’t be a binary choice. A hybrid observability strategy can let you keep the signals that matter most in your favorite SaaS while running a self-hosted backend for high‑volume telemetry that doesn’t need to live in a premium-priced pipeline. The tricky part is knowing where to draw the line, and this article will help you figure out exactly that.
A 2026 analysis of 47 companies using managed SaaS observability reported that initial estimates underestimate total cost once host count, ingestion, retention, and product add-ons are included.
As we onboard more systems, increase log verbosity, and add labels, observability expenses creep up until they rival the cost of the infrastructure being monitored. Cardinality compounds quietly: a single histogram with 10 buckets, scraped from 20 pods across 50 services, is already 10,000 time series.
Spiraling costs force difficult choices on us: should we log less data or shorten retention? Do we really need to retain telemetry for services and non-critical business functions that don’t drive revenue?
The problem is that nobody knows; it’s impossible to predict what you’ll need on a bad day. A good reminder of the dangers of flying blind is GitLab’s 18-hour outage in 2017. Only after accidentally deleting a primary database did GitLab’s engineers discover that the backup system had been failing silently for a long time. There were no dashboards or metrics tracking backup success rates, completion times, or data integrity, only email alerts that nobody saw.
“While notifications are enabled for any cronjobs that error, these notifications are sent by email. For GitLab.com we use DMARC. Unfortunately DMARC was not enabled for the cronjob emails, resulting in them being rejected by the receiver. This means we were never aware of the backups failing, until it was too late.”
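Guarding against this failure mode is cheap in telemetry terms: one freshness metric and one alert on it. As a hedged sketch, assuming a hypothetical backup_last_success_timestamp_seconds gauge that the backup job updates after each successful run, a Prometheus-style alerting rule might look like this:

```yaml
groups:
  - name: backup-monitoring
    rules:
      # Fire if the (hypothetical) backup job hasn't reported success in 24h.
      - alert: BackupTooOld
        expr: time() - backup_last_success_timestamp_seconds > 86400
        for: 1h
        labels:
          severity: critical
        annotations:
          summary: "No successful backup in over 24 hours"
```

This is exactly the kind of low-volume, high-stakes signal that’s tempting to cut when every series carries a premium price, which is why the split below matters.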
Instead of deciding which data to keep and drop, divide telemetry into two tiers: decision-tier and diagnostic-tier.
Decision-tier signals directly track customer experience or revenue: SLOs for key user journeys, business KPIs, uptime, latency, error budgets, checkout success, and payment failures.
Diagnostic-tier signals support running and debugging the system but have only indirect or episodic business impact; they’re what on-call engineers and performance work rely on.
To determine which tier a signal belongs to, evaluate it across a few dimensions: who acts on it, how quickly it must be acted on, and how directly it ties to customer experience or revenue.
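A naming convention makes the tier explicit at routing time. As a minimal sketch (the slo: prefix and the checkout_requests_total metric are hypothetical), decision-tier signals could be materialized as recording rules that share a common prefix:

```yaml
groups:
  - name: decision-tier
    rules:
      # Ratio of successful checkouts over the last 5 minutes.
      - record: slo:checkout_success_ratio:rate5m
        expr: |
          sum(rate(checkout_requests_total{status="success"}[5m]))
          /
          sum(rate(checkout_requests_total[5m]))
```

With a convention like this, routing a signal later becomes a prefix match instead of a judgment call.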
Once you start thinking of signals in decision and diagnostic tiers, the next step is to plan your hybrid setup. For this, you set up a second, cost‑efficient, high‑performance observability backend, such as the VictoriaMetrics Stack, which lets you afford far more retention and detail without paying per‑metric premiums.
The VictoriaMetrics Observability Stack gives you an open‑source home for all three pillars of observability: metrics with VictoriaMetrics, logs with VictoriaLogs, and traces with VictoriaTraces.
There are two key ways to practically implement this hybrid model: Split by Signal and Centralize and Forward. Let’s see how each works next.
In the “Split by Signal” strategy, we split signal destinations at the collector layer: every exporter, agent, or collector sends decision-tier signals to your SaaS and everything else to the VictoriaMetrics Observability Stack, with routing rules steering each signal to its assigned backend.
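As a minimal sketch of such routing rules, assuming Prometheus-style remote_write (vmagent supports equivalent per-URL relabeling), the hypothetical slo: prefix from above, and a placeholder SaaS endpoint:

```yaml
remote_write:
  # Decision tier: SLOs and business KPIs go to the SaaS.
  - url: https://metrics.saas-vendor.example/api/v1/write
    write_relabel_configs:
      - source_labels: [__name__]
        regex: "slo:.*"
        action: keep
  # Diagnostic tier: everything else goes to the self-hosted backend.
  - url: http://victoriametrics:8428/api/v1/write
    write_relabel_configs:
      - source_labels: [__name__]
        regex: "slo:.*"
        action: drop
```

The keep/drop pair makes the two destinations disjoint: the SaaS sees only decision-tier series, and everything else stays self-hosted.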

“Split by Signal” can be implemented piecemeal, starting small and moving your diagnostic metrics over a few at a time. It’s a low-risk way to find out what self-hosting your observability platform feels like.
The flip side is that moving a signal between backends is not as easy as flipping a switch; it usually means updating multiple configs. And with two separate backends come two separate sets of dashboards, so you have to remember where each signal lives, which makes it harder to correlate events across systems.
With the “Centralize and Forward” approach, you send all signals to the VictoriaMetrics Observability Stack first and then forward only the decision-tier metrics to your SaaS provider of choice.
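A hedged sketch of the forwarding layer, again assuming Prometheus-style remote_write, the hypothetical slo: prefix, and a placeholder SaaS endpoint: the self-hosted stack receives everything, and only decision-tier series are duplicated to the SaaS:

```yaml
remote_write:
  # Full-fidelity copy: every signal lands in VictoriaMetrics.
  - url: http://victoriametrics:8428/api/v1/write
  # Forward only decision-tier series to the SaaS.
  - url: https://metrics.saas-vendor.example/api/v1/write
    write_relabel_configs:
      - source_labels: [__name__]
        regex: "slo:.*"
        action: keep
```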

“Centralize and Forward” takes more work up front to set up; you can read a step-by-step guide by Samor Isa. But once running, it pays off: signal routing is controlled in one place, all telemetry can be queried and correlated in a single backend, and you always retain a full-fidelity copy of your data regardless of what the SaaS keeps.
To get started with the VictoriaMetrics Observability Stack, the quickest route is to run the components locally and point an agent at them.
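As a minimal local sketch (image tags and the scrape-config path are illustrative, not prescriptive), a docker-compose file for a single-node setup might look like this:

```yaml
services:
  victoriametrics:
    image: victoriametrics/victoria-metrics:latest
    command: ["-retentionPeriod=12"]   # keep 12 months of metrics
    ports: ["8428:8428"]
  vmagent:
    image: victoriametrics/vmagent:latest
    command:
      - "-promscrape.config=/etc/prometheus.yml"
      - "-remoteWrite.url=http://victoriametrics:8428/api/v1/write"
    volumes:
      - ./prometheus.yml:/etc/prometheus.yml:ro
```

From there, list your scrape targets in prometheus.yml and query the results with PromQL-compatible queries through Grafana or the built-in vmui.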
You can always find help on our docs page, in the community Slack, or on Telegram. Get in touch; we love to help.
The goal isn’t to rip and replace your favorite tool; it’s to keep it focused on what truly matters while letting a lean backend bear the weight of the rest. Considering how expensive SaaS observability platforms can be, a hybrid setup can be a great way to manage costs without sacrificing valuable data.