Performance optimization techniques in time series databases: function caching


This blog post is also available as a recorded talk with slides.


Relabeling is an important feature that allows users to modify metadata (labels) of scraped metrics before they ever make it to the database.

As an example, some of your scrape targets may generate metric labels with underscores (_), and some of your targets may generate labels with hyphens (-). Relabeling allows you to make this consistent, making database queries easier to write:

An example of a relabeling rule that replaces hyphens with underscores. You can play with VictoriaMetrics' relabeling functionality in our playground: https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/relabeling?config=-+action%3A+labelmap_all%0A++regex%3A+%22-%22%0A++replacement%3A+%22_%22&labels=%7B__name__%3D%22metric%22%2C+foo-bar-baz%3D%22qux%22%7D
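Decoded from that playground link, the relabeling rule looks like this:

- action: labelmap_all
  regex: "-"
  replacement: "_"

Applied to the sample metric metric{foo-bar-baz="qux"} from the same link, it renames the label foo-bar-baz to foo_bar_baz.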

Relabeling, if defined, happens every time vmagent scrapes metrics from your targets, but as we’ve seen before, vmagent is likely to see the same metric label many times. That means if we once saw foo-bar-baz and changed it to foo_bar_baz, then it is very likely we’ll have to do the same transformation on the next scrape as well. In this case, caching the results of the relabeling function is likely to reduce CPU usage.

Internally, we implement caching for relabeling functions via a struct called Transformer:

type Transformer struct {
    m sync.Map
    transformFunc func(s string) string
}

Transformer contains a sync.Map for thread-safe access to cached results, and a function transformFunc that will do the actual relabeling.

Transformer implements the Transform function, which we use during relabeling:

func (t *Transformer) Transform(s string) string {
    v, ok := t.m.Load(s)
    if ok {
         // Fast path - the transformed `s` is found in the cache.
         return v.(string)
    }
    // Slow path - transform `s` and store it in the cache.
    sTransformed := t.transformFunc(s)
    t.m.Store(s, sTransformed)
    return sTransformed
}

The Transform function first checks the cache using the Load function. If a cached result is found, it is returned right away. Otherwise, Transform calls transformFunc to do the transformation, stores the result in the cache, and returns it.

As an example, here’s a Transformer that replaces any character not allowed in the Prometheus data model with an underscore:

// SanitizeName replaces unsupported by Prometheus chars
// in metric names and label names with _.
func SanitizeName(name string) string {
    return promSanitizer.Transform(name)
}

var promSanitizer = NewTransformer(func(s string) string {
    return unsupportedPromChars.ReplaceAllString(s, "_")
})

var unsupportedPromChars = regexp.MustCompile(`[^a-zA-Z0-9_:]`)

In the above example, promSanitizer is created using our Transformer constructor. This constructor creates a new sync.Map and stores a reference to the passed function. Now we can use the SanitizeName function in the code “hot path” to sanitize scraped label names.
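The NewTransformer constructor itself isn't shown above; a minimal sketch of what it could look like (the exact code in VictoriaMetrics may differ) is:

func NewTransformer(transformFunc func(s string) string) *Transformer {
    // The zero value of sync.Map is an empty, ready-to-use map,
    // so only the transform function needs to be set explicitly.
    return &Transformer{
        transformFunc: transformFunc,
    }
}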

Function result caching allows you to trade increased memory usage for reduced CPU time in certain cases. It works best when caching CPU-heavy functions that operate on a limited set of possible input values. Examples of CPU-heavy functions include those that perform string transforms or regex matching.
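As a hypothetical illustration of that trade-off, only the first call for a given name pays the regexp cost; repeated calls are answered from the cache at the price of keeping the result in memory:

fmt.Println(SanitizeName("foo-bar-baz"))         // slow path: runs the regexp, caches "foo_bar_baz"
fmt.Println(SanitizeName("foo-bar-baz"))         // fast path: the result comes straight from the sync.Map
fmt.Println(SanitizeName("http.requests.total")) // slow path again, but only once per unique input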

Summary


VictoriaMetrics uses function result caching for its relabeling feature, but doesn’t use it for caching database queries. In the case of database queries, the range of possible values is too large, and it’s likely our cache hit rate would be low. As with string interning, function result caching works best if the number of cached values is limited, so you can achieve a high cache hit rate.

Stay tuned for the next blog post in this series!

Leave a comment below or Contact Us if you have any questions!
