Prometheus exporter for PostgreSQL server metrics.

Flags:
- help: Show context-sensitive help (also try --help-long and --help-man).
- web.listen-address: Address to listen on for web interface and telemetry. Default is :9187.
- web.telemetry-path: Path under which to expose metrics. Default is /metrics.
- disable-default-metrics: Use only metrics supplied from queries.yaml via --extend.query-path.
- disable-settings-metrics: Use this flag if you don't want to scrape pg_settings.
- auto-discover-databases: Whether to discover the databases on a server dynamically.
- extend.query-path: Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format.
- dumpmaps: Do not run; print the internal representation of the metric maps. Useful when debugging a custom queries file.
- constantLabels: Labels to set in all metrics. A list of label=value pairs, separated by commas.
- version: Show application version.
- exclude-databases: A list of databases to remove when autoDiscoverDatabases is enabled.
- log.level: Set logging level: one of debug, info, warn, error, fatal.
- log.format: Set the log output target and format, e.g. logger:syslog?appname=bob&local=7 or logger:stdout?json=true. Defaults to logger:stderr.

Environment Variables:
The following environment variables configure the exporter:
- DATA_SOURCE_NAME: The default legacy format. Accepts URI form and key=value form arguments. The URI may contain the username and password to connect with.
- DATA_SOURCE_URI: An alternative to DATA_SOURCE_NAME which exclusively accepts the hostname without a username and password component. For example, my_pg_hostname or my_pg_hostname?sslmode=disable.
- DATA_SOURCE_URI_FILE: The same as above, but reads the URI from a file.
- DATA_SOURCE_USER: When using DATA_SOURCE_URI, this environment variable is used to specify the username.
- DATA_SOURCE_USER_FILE: The same, but reads the username from a file.
- DATA_SOURCE_PASS: When using DATA_SOURCE_URI, this environment variable is used to specify the password to connect with.
- DATA_SOURCE_PASS_FILE: The same as above, but reads the password from a file.
- PG_EXPORTER_WEB_LISTEN_ADDRESS: Address to listen on for web interface and telemetry. Default is :9187.
- PG_EXPORTER_WEB_TELEMETRY_PATH: Path under which to expose metrics. Default is /metrics.
- PG_EXPORTER_DISABLE_DEFAULT_METRICS: Use only metrics supplied from queries.yaml. Value can be true or false. Default is false.
- PG_EXPORTER_DISABLE_SETTINGS_METRICS: Use this if you don't want to scrape pg_settings. Value can be true or false. Default is false.
- PG_EXPORTER_AUTO_DISCOVER_DATABASES: Whether to discover the databases on a server dynamically. Value can be true or false. Default is false.
- PG_EXPORTER_EXTEND_QUERY_PATH: Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format.
- PG_EXPORTER_CONSTANT_LABELS: Labels to set in all metrics. A list of label=value pairs, separated by commas.
- PG_EXPORTER_EXCLUDE_DATABASES: A comma-separated list of databases to remove when autoDiscoverDatabases is enabled. Default is empty string.

Settings set by environment variables starting with PG_ will be overwritten by the corresponding CLI flag if given.
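For example, a minimal sketch of running the exporter (the connection string, credentials, and queries.yaml path are placeholders for your own environment, not canonical values):

$ export DATA_SOURCE_NAME="postgresql://exporter_user:secret@localhost:5432/postgres?sslmode=disable"
$ ./postgres_exporter --extend.query-path=queries.yaml
$ curl -s http://localhost:9187/metrics | head

The final curl simply confirms that metrics are being served on the default :9187 listen address.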
Prometheus exporter for RabbitMQ metrics.
Prometheus Exporter for Redis Metrics. Supports Valkey and Redis 2.x, 3.x, 4.x, 5.x, and 6.x.
Prometheus exporter for Squid.
statsd_exporter receives StatsD-style metrics and exports them as Prometheus metrics.

Overview

With StatsD

To pipe metrics from an existing StatsD environment into Prometheus, configure StatsD's repeater backend to repeat all received metrics to a statsd_exporter process. This exporter translates StatsD metrics to Prometheus metrics via configured mapping rules.

+----------+                         +-------------------+                        +--------------+
|  StatsD  |---(UDP/TCP repeater)--->|  statsd_exporter  |<---(scrape /metrics)---|  Prometheus  |
+----------+                         +-------------------+                        +--------------+

Without StatsD

Since the StatsD exporter uses the same line protocol as StatsD itself, you can also configure your applications to send StatsD metrics directly to the exporter. In that case, you don't need to run a StatsD server anymore. We recommend this only as an intermediate solution, and suggest switching to native Prometheus instrumentation in the long term.

Tagging Extensions

The exporter supports Librato, InfluxDB, and DogStatsD-style tags, which are converted into Prometheus labels.

Librato-style tags must be appended to the metric name with a delimiting #, like so:

metric.name#tagName=val,tag2Name=val2:0|c

See the statsd-librato-backend README for a more complete description.

InfluxDB-style tags must be appended to the metric name with a delimiting comma, like so:

metric.name,tagName=val,tag2Name=val2:0|c

See this InfluxDB blog post for a larger overview.

DogStatsD-style tags are appended as a |#-delimited section at the end of the metric, like so:

metric.name:0|c|#tagName=val,tag2Name=val2

See Tags in the DogStatsD documentation for the concept description and Datagram Format for the exact syntax. If you encounter problems, note that this tagging style is incompatible with the original statsd implementation.

Be aware: if you mix tag styles (e.g., Librato/InfluxDB with DogStatsD), the exporter will consider this an error and discard the sample. Tags without values (#some_tag) are not supported and will be ignored.
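As a quick sanity check (a sketch assuming statsd_exporter is running with its default ports, StatsD traffic on UDP :9125 and the web interface on :9102; the metric and tag names are placeholders), you can send a DogStatsD-style counter with netcat and read it back:

$ echo "metric.name:1|c|#tagName=val,tag2Name=val2" | nc -u -w1 127.0.0.1 9125
$ curl -s http://127.0.0.1:9102/metrics | grep metric_name

By default the exporter maps the dotted StatsD name to the Prometheus name metric_name and turns the tags into labels.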
A utility for managing Jsonnet dashboards against the Grafana API.
k6 is a modern load testing tool, building on our years of experience in the load and performance testing industry. It provides a clean, approachable scripting API, local and cloud execution, and flexible configuration.
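To give a flavour of that scripting API (a minimal sketch; the target URL and load settings are placeholders), a test script is plain JavaScript run through the k6 CLI:

$ cat > script.js <<'EOF'
import http from 'k6/http';

// Each virtual user repeatedly runs this default function.
export default function () {
  http.get('https://test.example.com/');
}
EOF
$ k6 run --vus 10 --duration 30s script.js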
Loki: like Prometheus, but for logs.

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Compared to other log aggregation systems, Loki:
- does not do full-text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
- indexes and groups log streams using the same labels you're already using with Prometheus, enabling you to seamlessly switch between metrics and logs.
- is an especially good fit for storing Kubernetes Pod logs. Metadata such as Pod labels is automatically scraped and indexed.
- has native support in Grafana (needs Grafana v6.0).

A Loki-based logging stack consists of three components:
- promtail is the agent, responsible for gathering logs and sending them to Loki.
- loki is the main server, responsible for storing logs and processing queries.
- Grafana for querying and displaying the logs.

Loki is like Prometheus, but for logs: we prefer a multidimensional, label-based approach to indexing, and want a single-binary, easy-to-operate system with no dependencies. Loki differs from Prometheus by focusing on logs instead of metrics, and by delivering logs via push instead of pull.
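To make the shared label model concrete (the label names below are hypothetical), the selector that filters a metric in Prometheus is the same selector that picks out the corresponding log stream in Loki:

Prometheus (PromQL):  rate(http_requests_total{job="nginx", namespace="prod"}[5m])
Loki:                 {job="nginx", namespace="prod"}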
If you are running on Grafana Cloud, use:

$ export GRAFANA_ADDR=https://logs-us-west1.grafana.net
$ export GRAFANA_USERNAME=<username>
$ export GRAFANA_PASSWORD=<password>

Otherwise you can point LogCLI to a local instance directly without needing a username and password:

$ export GRAFANA_ADDR=http://localhost:3100

Note: If you are running Loki behind a proxy server and you have authentication configured, you will also have to pass in GRAFANA_USERNAME and GRAFANA_PASSWORD accordingly.

$ logcli labels job
https://logs-dev-ops-tools1.grafana.net/api/prom/label/job/values
cortex-ops/consul
cortex-ops/cortex-gw
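Once the address is set, individual streams can be queried by label selector, for example:

$ logcli query '{job="cortex-ops/consul"}'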
Promtail is an agent which ships the contents of local logs to a private Loki instance or Grafana Cloud. It is usually deployed to every machine that runs applications which need to be monitored. It primarily:
- discovers targets,
- attaches labels to log streams, and
- pushes them to the Loki instance.

Currently, Promtail can tail logs from two sources: local log files and the systemd journal.
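A minimal sketch of such a setup (the file paths, listen port, and Loki push URL below are assumptions for a local instance, not canonical values), writing a config that tails /var/log and then running Promtail against it:

$ cat > promtail-config.yaml <<'EOF'
server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml    # where Promtail remembers how far it has read
clients:
  - url: http://localhost:3100/api/prom/push
scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs             # label attached to this log stream
          __path__: /var/log/*log  # files to tail
EOF
$ ./promtail -config.file=promtail-config.yaml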