statsd_exporter receives StatsD-style metrics and exports them as Prometheus metrics.

Overview

With StatsD

To pipe metrics from an existing StatsD environment into Prometheus, configure StatsD's repeater backend to repeat all received metrics to a statsd_exporter process. This exporter translates StatsD metrics to Prometheus metrics via configured mapping rules.

```
+----------+                         +-------------------+                        +--------------+
|  StatsD  |---(UDP/TCP repeater)--->|  statsd_exporter  |<---(scrape /metrics)---|  Prometheus  |
+----------+                         +-------------------+                        +--------------+
```

Without StatsD

Since the StatsD exporter uses the same line protocol as StatsD itself, you can also configure your applications to send StatsD metrics directly to the exporter. In that case, you don't need to run a StatsD server anymore. We recommend this only as an intermediate solution and recommend switching to native Prometheus instrumentation in the long term.

Tagging Extensions

The exporter supports Librato, InfluxDB, and DogStatsD-style tags, which are converted into Prometheus labels.

Librato-style tags are appended to the metric name, delimited by a #:

```
metric.name#tagName=val,tag2Name=val2:0|c
```

See the statsd-librato-backend README for a more complete description.

InfluxDB-style tags are appended to the metric name, delimited by a comma:

```
metric.name,tagName=val,tag2Name=val2:0|c
```

See this InfluxDB blog post for a larger overview.

DogStatsD-style tags are appended as a |#-delimited section at the end of the metric:

```
metric.name:0|c|#tagName:val,tag2Name:val2
```

See Tags in the DogStatsD documentation for the concept description and Datagram Format for the syntax. Note that this tagging style is incompatible with the original StatsD implementation.

Be aware: if you mix tag styles (e.g., Librato/InfluxDB with DogStatsD), the exporter treats this as an error and discards the sample. Tags without values (#some_tag) are not supported and are ignored.
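Because the exporter speaks the plain StatsD line protocol, an application can emit metrics with nothing more than a UDP socket. Below is a minimal TypeScript (Node.js) sketch, assuming a statsd_exporter listening for StatsD traffic on localhost:9125 (its default); the metric name and tag values are made up for illustration, and each line sticks to a single tag style, since mixing styles within one line is rejected.

```typescript
import dgram from "node:dgram";

const socket = dgram.createSocket("udp4");

// One datagram per metric line: <name>:<value>|<type>, plus optional tags.
// The metric name and tags below are invented for this example.
const lines = [
  "requests.handled:1|c",                       // plain counter, no tags
  "requests.handled#region=eu,status=ok:1|c",   // Librato-style tags
  "requests.handled,region=eu,status=ok:1|c",   // InfluxDB-style tags
  "requests.handled:1|c|#region:eu,status:ok",  // DogStatsD-style tags
];

let pending = lines.length;
for (const line of lines) {
  socket.send(line, 9125, "127.0.0.1", () => {
    if (--pending === 0) socket.close(); // close once every datagram is sent
  });
}
```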
A utility for managing Jsonnet dashboards against the Grafana API.
k6 is a modern load testing tool, building on our years of experience in the load and performance testing industry. It provides a clean, approachable scripting API, local and cloud execution, and flexible configuration.
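As a rough illustration of the scripting API, here is a minimal k6 script, run with `k6 run script.js`; the target URL and load profile are placeholders, not recommendations.

```typescript
import http from "k6/http";
import { check, sleep } from "k6";

// Load profile: 10 concurrent virtual users for 30 seconds (placeholder values).
export const options = {
  vus: 10,
  duration: "30s",
};

export default function () {
  const res = http.get("https://test.k6.io/"); // k6's public demo site
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```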
Loki: like Prometheus, but for logs. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Compared to other log aggregation systems, Loki:

- does not do full-text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
- indexes and groups log streams using the same labels you're already using with Prometheus, enabling you to seamlessly switch between metrics and logs.
- is an especially good fit for storing Kubernetes Pod logs; metadata such as Pod labels is automatically scraped and indexed.
- has native support in Grafana (requires Grafana v6.0 or later).

A Loki-based logging stack consists of three components:

- promtail is the agent, responsible for gathering logs and sending them to Loki.
- loki is the main server, responsible for storing logs and processing queries.
- Grafana for querying and displaying the logs.

Loki is like Prometheus, but for logs: we prefer a multidimensional, label-based approach to indexing, and want a single-binary, easy-to-operate system with no dependencies. Loki differs from Prometheus by focusing on logs instead of metrics, and by delivering logs via push instead of pull.
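For a sense of the push path, here is a sketch of sending one log line to Loki over its HTTP push API, assuming a local instance at localhost:3100; the {job="demo"} labels and the message are invented for illustration.

```typescript
// POST one log entry to Loki's push endpoint. Timestamps are nanoseconds
// since the epoch, encoded as strings. Labels and message are made up.
const payload = {
  streams: [
    {
      stream: { job: "demo", host: "web-1" }, // labels that index this stream
      values: [[`${Date.now()}000000`, "hello from the push API"]],
    },
  ],
};

const res = await fetch("http://localhost:3100/loki/api/v1/push", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});
console.log(res.status); // 204 on success
```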
If you are running on Grafana Cloud, use:

```
$ export GRAFANA_ADDR=https://logs-us-west1.grafana.net
$ export GRAFANA_USERNAME=<username>
$ export GRAFANA_PASSWORD=<password>
```

Otherwise, you can point LogCLI to a local instance directly, without needing a username and password:

```
$ export GRAFANA_ADDR=http://localhost:3100
```

Note: If you are running Loki behind a proxy server and you have authentication configured, you will also have to pass in GRAFANA_USERNAME and GRAFANA_PASSWORD accordingly.

For example, listing the values of the job label:

```
$ logcli labels job
https://logs-dev-ops-tools1.grafana.net/api/prom/label/job/values
cortex-ops/consul
cortex-ops/cortex-gw
```
Promtail is an agent which ships the contents of local logs to a private Loki instance or Grafana Cloud. It is usually deployed to every machine that runs applications which need to be monitored. It primarily:

- Discovers targets
- Attaches labels to log streams
- Pushes them to the Loki instance

Currently, Promtail can tail logs from two sources: local log files and the systemd journal.
Grafana Mimir is an open source software project that provides scalable long-term storage for Prometheus. Some of the core strengths of Grafana Mimir include:

- Easy to install and maintain: Grafana Mimir's extensive documentation, tutorials, and deployment tooling make it quick to get started. Using its monolithic mode, you can get Grafana Mimir up and running with just one binary and no additional dependencies. Once deployed, the best-practice dashboards, alerts, and playbooks packaged with Grafana Mimir make it easy to monitor the health of the system.
- Massive scalability: You can run Grafana Mimir's horizontally scalable architecture across multiple machines, resulting in the ability to process orders of magnitude more time series than a single Prometheus instance. Internal testing shows that Grafana Mimir handles up to 1 billion active time series.
- Global view of metrics: Grafana Mimir enables you to run queries that aggregate series from multiple Prometheus instances, giving you a global view of your systems. Its query engine extensively parallelizes query execution, so that even the highest-cardinality queries complete with blazing speed.
- Cheap, durable metric storage: Grafana Mimir uses object storage for long-term data storage, allowing it to take advantage of this ubiquitous, cost-effective, high-durability technology. It is compatible with multiple object store implementations, including AWS S3, Google Cloud Storage, Azure Blob Storage, and OpenStack Swift, as well as any S3-compatible object storage.
- High availability: Grafana Mimir replicates incoming metrics, ensuring that no data is lost in the event of machine failure. Its horizontally scalable architecture also means that it can be restarted, upgraded, or downgraded with zero downtime, so there are no interruptions to metrics ingestion or querying.
- Natively multi-tenant: Grafana Mimir's multi-tenant architecture enables you to isolate data and queries from independent teams or business units, making it possible for these groups to share the same cluster. Advanced limits and quality-of-service controls ensure that capacity is shared fairly among tenants.
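Because Mimir exposes a Prometheus-compatible query API, existing Prometheus clients and dashboards work against it unchanged. A minimal sketch, assuming a monolithic-mode Mimir reachable at localhost:9009 (as in the getting-started tutorials; adjust to your deployment) with the default /prometheus API prefix; the tenant ID "demo" is invented for illustration:

```typescript
// Run an instant PromQL query against Mimir's Prometheus-compatible API.
const params = new URLSearchParams({ query: "up" });
const res = await fetch(
  `http://localhost:9009/prometheus/api/v1/query?${params}`,
  { headers: { "X-Scope-OrgID": "demo" } } // tenant header for multi-tenant setups
);
console.log(await res.json()); // standard Prometheus query response shape
```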
Additional TSDB management tools for Mimir.
- Collect and analyze alerts from multiple monitoring systems
- On-call rotations based on schedules
- Automatic escalations
- Phone calls, SMS, Slack, and Telegram notifications
This Grafana plugin for Performance Co-Pilot includes datasources for scalable time series (via pmseries(1) and Redis), live PCP metrics, and bpftrace scripts (via pmdabpftrace(1)), as well as several dashboards.