Modified items

All recently modified items, latest first.
RPMPackage bastion-product-389ds-3.0.1-0.2.20241224.lbn36.noarch
Our Subscription Manager utilises X.509 certificates to define the product(s) a system is built upon or subscribed to. This is the subscription definition for 389-ds-base.
RPMPackage grafana-tempo-client-2.7.0-0.1.gitb0da6b4.lbn36.x86_64
Client-side tools for Tempo.
RPMPackage grafana-tempo-2.7.0-0.1.gitb0da6b4.lbn36.x86_64
Grafana Tempo is an open source, easy-to-use, high-scale distributed tracing backend. Tempo is cost-efficient, requiring only object storage to operate, and is deeply integrated with Grafana, Prometheus, and Loki. It is compatible with the open source tracing protocols Jaeger, Zipkin, OpenCensus, Kafka, and OpenTelemetry: it ingests trace batches in any of these formats, buffers them, and then writes them to Azure, GCS, S3, or local disk, which keeps it robust, cheap, and easy to operate. Tempo supports key/value lookup only and is designed to work in concert with logs and metrics (exemplars) for trace discovery.
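As a rough sketch of the single-binary deployment described above, a minimal Tempo configuration could look like the following (the port, paths, and the choice of an OTLP gRPC receiver are assumptions for illustration, not the packaged defaults):
  server:
    http_listen_port: 3200          # HTTP port for Tempo's API (assumed)
  distributor:
    receivers:
      otlp:
        protocols:
          grpc:                     # accept traces over OTLP/gRPC
  storage:
    trace:
      backend: local                # object storage (S3/GCS/Azure) in production; local disk here
      wal:
        path: /var/tempo/wal
      local:
        path: /var/tempo/blocks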
RPMPackage grafana-pcp-3.2.0-3.fc36.x86_64
This Grafana plugin for Performance Co-Pilot includes datasources for scalable time series from pmseries(1) and Redis, live PCP metrics and bpftrace scripts from pmdabpftrace(1), as well as several dashboards.
RPMPackage grafana-oncall-1.14.3-1.lbn36.noarch
Grafana OnCall provides:
- collection and analysis of alerts from multiple monitoring systems
- on-call rotations based on schedules
- automatic escalations
- notifications via phone calls, SMS, Slack, and Telegram
RPMPackage grafana-mimir-tools-2.15.0-0.1.git24e4281.lbn36.x86_64
Additional TSDB management tools for Mimir.
RPMPackage grafana-mimir-2.15.0-0.1.git24e4281.lbn36.x86_64
Grafana Mimir is an open source software project that provides scalable long-term storage for Prometheus. Some of the core strengths of Grafana Mimir include:
- Easy to install and maintain: Grafana Mimir's extensive documentation, tutorials, and deployment tooling make it quick to get started. Using its monolithic mode, you can get Grafana Mimir up and running with just one binary and no additional dependencies. Once deployed, the best-practice dashboards, alerts, and playbooks packaged with Grafana Mimir make it easy to monitor the health of the system.
- Massive scalability: You can run Grafana Mimir's horizontally-scalable architecture across multiple machines, resulting in the ability to process orders of magnitude more time series than a single Prometheus instance. Internal testing shows that Grafana Mimir handles up to 1 billion active time series.
- Global view of metrics: Grafana Mimir enables you to run queries that aggregate series from multiple Prometheus instances, giving you a global view of your systems. Its query engine extensively parallelizes query execution, so that even the highest-cardinality queries complete with blazing speed.
- Cheap, durable metric storage: Grafana Mimir uses object storage for long-term data storage, allowing it to take advantage of this ubiquitous, cost-effective, high-durability technology. It is compatible with multiple object store implementations, including AWS S3, Google Cloud Storage, Azure Blob Storage, OpenStack Swift, as well as any S3-compatible object storage.
- High availability: Grafana Mimir replicates incoming metrics, ensuring that no data is lost in the event of machine failure. Its horizontally scalable architecture also means that it can be restarted, upgraded, or downgraded with zero downtime, so there are no interruptions to metrics ingestion or querying.
- Natively multi-tenant: Grafana Mimir's multi-tenant architecture enables you to isolate data and queries from independent teams or business units, making it possible for these groups to share the same cluster. Advanced limits and quality-of-service controls ensure that capacity is shared fairly among tenants.
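To feed Mimir, an existing Prometheus server is normally pointed at its push endpoint via remote_write; a minimal sketch (the hostname and port are assumptions for the example, not packaged defaults) is:
  remote_write:
    - url: http://mimir.example.local:9009/api/v1/push   # Mimir's Prometheus remote-write endpoint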
RPMPackage grafana-loki-promtail-3.3.2-0.1.git23b5fc2.lbn36.x86_64
Promtail is an agent that ships the contents of local logs to a private Loki instance or Grafana Cloud. It is usually deployed to every machine running applications that need to be monitored. It primarily:
- discovers targets
- attaches labels to log streams
- pushes them to the Loki instance
Currently, Promtail can tail logs from two sources: local log files and the systemd journal.
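A minimal Promtail configuration covering the local-file case might look like the sketch below (the Loki URL, labels, and log path are illustrative assumptions):
  server:
    http_listen_port: 9080
  positions:
    filename: /tmp/positions.yaml   # where Promtail records how far it has read
  clients:
    - url: http://localhost:3100/loki/api/v1/push
  scrape_configs:
    - job_name: system
      static_configs:
        - targets: [localhost]
          labels:
            job: varlogs
            __path__: /var/log/*log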
RPMPackage grafana-loki-logcli-3.3.2-0.1.git23b5fc2.lbn36.x86_64
LogCLI is Loki's command-line query client. If you are running on Grafana Cloud, use:
  $ export GRAFANA_ADDR=https://logs-us-west1.grafana.net
  $ export GRAFANA_USERNAME=<username>
  $ export GRAFANA_PASSWORD=<password>
Otherwise you can point LogCLI to a local instance directly, without needing a username and password:
  $ export GRAFANA_ADDR=http://localhost:3100
Note: if you are running Loki behind a proxy server and you have authentication configured, you will also have to pass in GRAFANA_USERNAME and GRAFANA_PASSWORD accordingly.
  $ logcli labels job
  https://logs-dev-ops-tools1.grafana.net/api/prom/label/job/values
  cortex-ops/consul
  cortex-ops/cortex-gw
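Once the address is exported, a log query against a labelled stream might look like this sketch (the label selector and time range are invented for the example):
  $ logcli query '{job="varlogs"}' --since=1h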
RPMPackage grafana-loki-3.3.2-0.1.git23b5fc2.lbn36.x86_64
Loki: like Prometheus, but for logs. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream. Compared to other log aggregation systems, Loki:
- does not do full text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
- indexes and groups log streams using the same labels you are already using with Prometheus, enabling you to seamlessly switch between metrics and logs.
- is an especially good fit for storing Kubernetes Pod logs; metadata such as Pod labels is automatically scraped and indexed.
- has native support in Grafana (requires Grafana v6.0 or later).
A Loki-based logging stack consists of three components:
- promtail, the agent responsible for gathering logs and sending them to Loki
- loki, the main server responsible for storing logs and processing queries
- Grafana, for querying and displaying the logs
Loki is like Prometheus, but for logs: it prefers a multidimensional, label-based approach to indexing and aims to be a single-binary, easy-to-operate system with no dependencies. Loki differs from Prometheus by focusing on logs instead of metrics, and by delivering logs via push instead of pull.
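The label-based model above is queried with LogQL; for instance, a query of the following shape (the label names are illustrative assumptions) selects the streams carrying those labels and filters for lines containing "error":
  {app="myapp", namespace="prod"} |= "error"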
RPMPackage grafana-k6-0.56.0-0.1.git50afb99.lbn36.x86_64
k6 is a modern load testing tool, building on our years of experience in the load and performance testing industry. It provides a clean, approachable scripting API, local and cloud execution, and flexible configuration.
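For local execution, a test script is typically run from the command line along these lines (the script name and load parameters are placeholders for the example):
  $ k6 run --vus 10 --duration 30s script.js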
RPMPackage grafana-grizzly-0.7.1-1.lbn36.x86_64
A utility for managing Jsonnet dashboards against the Grafana API.
RPMPackage prometheus-statsd-exporter-0.26.0-1.lbn36.x86_64
statsd_exporter receives StatsD-style metrics and exports them as Prometheus metrics.
Overview
With StatsD: To pipe metrics from an existing StatsD environment into Prometheus, configure StatsD's repeater backend to repeat all received metrics to a statsd_exporter process. This exporter translates StatsD metrics to Prometheus metrics via configured mapping rules.
  +----------+                         +-------------------+                        +--------------+
  |  StatsD  |---(UDP/TCP repeater)--->|  statsd_exporter  |<---(scrape /metrics)---|  Prometheus  |
  +----------+                         +-------------------+                        +--------------+
Without StatsD: Since the StatsD exporter uses the same line protocol as StatsD itself, you can also configure your applications to send StatsD metrics directly to the exporter. In that case, you don't need to run a StatsD server anymore. We recommend this only as an intermediate solution and recommend switching to native Prometheus instrumentation in the long term.
Tagging Extensions: The exporter supports Librato, InfluxDB, and DogStatsD-style tags, which will be converted into Prometheus labels.
- Librato-style tags must be appended to the metric name with a delimiting #, as so: metric.name#tagName=val,tag2Name=val2:0|c (see the statsd-librato-backend README for a more complete description).
- InfluxDB-style tags must be appended to the metric name with a delimiting comma, as so: metric.name,tagName=val,tag2Name=val2:0|c (see the InfluxDB blog post on the subject for a larger overview).
- DogStatsD-style tags are appended as a |# delimited section at the end of the metric, as so: metric.name:0|c|#tagName=val,tag2Name=val2 (see Tags in the DogStatsD documentation for the concept description and datagram format). If you encounter problems, note that this tagging style is incompatible with the original statsd implementation.
Be aware: if you mix tag styles (e.g., Librato/InfluxDB with DogStatsD), the exporter will consider this an error and the sample will be discarded. Also, tags without values (#some_tag) are not supported and will be ignored.
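A small sketch of a mapping rule and the flag used to load it (the metric name and label here are invented for illustration):
  mappings:
    - match: "myapp.request.*.duration"
      name: "myapp_request_duration"
      labels:
        endpoint: "$1"               # first wildcard becomes the endpoint label
The exporter would then be started with something like: statsd_exporter --statsd.mapping-config=mapping.yml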
RPMPackage prometheus-squid-exporter-1.10.3-0.1.git7693646.lbn36.x86_64
Prometheus exporter for Squid.
RPMPackage prometheus-redis-exporter-1.44.0-0.1.git9f7b036.lbn36.x86_64
Prometheus Exporter for Redis Metrics. Supports Redis 2.x, 3.x, 4.x, 5.x and 6.x
RPMPackage prometheus-rabbitmq-exporter-1.0.0-0.4.git40d9c32.lbn36.x86_64
Prometheus exporter for RabbitMQ metrics.
RPMPackage prometheus-postgresql-exporter-0.16.0-0.1.gita324fe3.lbn36.x86_64
Prometheus exporter for PostgreSQL server metrics.
Flags:
- help: Show context-sensitive help (also try --help-long and --help-man).
- web.listen-address: Address to listen on for web interface and telemetry. Default is :9187.
- web.telemetry-path: Path under which to expose metrics. Default is /metrics.
- disable-default-metrics: Use only metrics supplied from queries.yaml via --extend.query-path.
- disable-settings-metrics: Use this flag if you don't want to scrape pg_settings.
- auto-discover-databases: Whether to discover the databases on a server dynamically.
- extend.query-path: Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format.
- dumpmaps: Do not run; print the internal representation of the metric maps. Useful when debugging a custom queries file.
- constantLabels: Labels to set in all metrics. A list of label=value pairs, separated by commas.
- version: Show application version.
- exclude-databases: A list of databases to remove when autoDiscoverDatabases is enabled.
- log.level: Set logging level: one of debug, info, warn, error, fatal.
- log.format: Set the log output target and format, e.g. logger:syslog?appname=bob&local=7 or logger:stdout?json=true. Defaults to logger:stderr.
Environment variables:
- DATA_SOURCE_NAME: The default legacy format. Accepts URI form and key=value form arguments. The URI may contain the username and password to connect with.
- DATA_SOURCE_URI: An alternative to DATA_SOURCE_NAME that exclusively accepts the hostname without a username and password component. For example, my_pg_hostname or my_pg_hostname?sslmode=disable.
- DATA_SOURCE_URI_FILE: The same as above, but reads the URI from a file.
- DATA_SOURCE_USER: When using DATA_SOURCE_URI, this environment variable is used to specify the username.
- DATA_SOURCE_USER_FILE: The same, but reads the username from a file.
- DATA_SOURCE_PASS: When using DATA_SOURCE_URI, this environment variable is used to specify the password to connect with.
- DATA_SOURCE_PASS_FILE: The same as above, but reads the password from a file.
- PG_EXPORTER_WEB_LISTEN_ADDRESS: Address to listen on for web interface and telemetry. Default is :9187.
- PG_EXPORTER_WEB_TELEMETRY_PATH: Path under which to expose metrics. Default is /metrics.
- PG_EXPORTER_DISABLE_DEFAULT_METRICS: Use only metrics supplied from queries.yaml. Value can be true or false. Default is false.
- PG_EXPORTER_DISABLE_SETTINGS_METRICS: Use this if you don't want to scrape pg_settings. Value can be true or false. Default is false.
- PG_EXPORTER_AUTO_DISCOVER_DATABASES: Whether to discover the databases on a server dynamically. Value can be true or false. Default is false.
- PG_EXPORTER_EXTEND_QUERY_PATH: Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format.
- PG_EXPORTER_CONSTANT_LABELS: Labels to set in all metrics. A list of label=value pairs, separated by commas.
- PG_EXPORTER_EXCLUDE_DATABASES: A comma-separated list of databases to remove when autoDiscoverDatabases is enabled. Default is an empty string.
Settings set by environment variables starting with PG_ will be overwritten by the corresponding CLI flag if given.
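A typical invocation using the legacy connection string could look like this sketch (the host, credentials, database, and the upstream binary name are placeholders/assumptions):
  $ export DATA_SOURCE_NAME="postgresql://postgres:secret@localhost:5432/postgres?sslmode=disable"
  $ postgres_exporter --web.listen-address=:9187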
RPMPackage prometheus-node-exporter-1.7.0-1.lbn36.x86_64
Prometheus exporter for machine metrics, written in Go with pluggable metric collectors.
Collectors: There is varying support for collectors on each operating system. The lists below cover all existing collectors and the supported systems. Which collectors are used is controlled by the --collectors.enabled flag.
Enabled by default:
- conntrack: Shows conntrack statistics (does nothing if no /proc/sys/net/netfilter/ present). [Linux]
- cpu: Exposes CPU statistics. [FreeBSD]
- diskstats: Exposes disk I/O statistics from /proc/diskstats. [Linux]
- entropy: Exposes available entropy. [Linux]
- filefd: Exposes file descriptor statistics from /proc/sys/fs/file-nr. [Linux]
- filesystem: Exposes filesystem statistics, such as disk space used. [FreeBSD, Dragonfly, Linux, OpenBSD]
- loadavg: Exposes load average. [Darwin, Dragonfly, FreeBSD, Linux, NetBSD, OpenBSD, Solaris]
- mdadm: Exposes statistics about devices in /proc/mdstat (does nothing if no /proc/mdstat present). [Linux]
- meminfo: Exposes memory statistics. [Dragonfly, FreeBSD, Linux]
- netdev: Exposes network interface statistics such as bytes transferred. [Dragonfly, FreeBSD, Linux, OpenBSD]
- netstat: Exposes network statistics from /proc/net/netstat. This is the same information as netstat -s. [Linux]
- stat: Exposes various statistics from /proc/stat. This includes CPU usage, boot time, forks and interrupts. [Linux]
- textfile: Exposes statistics read from local disk. The --collector.textfile.directory flag must be set. [any]
- time: Exposes the current system time. [any]
- vmstat: Exposes statistics from /proc/vmstat. [Linux]
Disabled by default:
- bonding: Exposes the number of configured and active slaves of Linux bonding interfaces. [Linux]
- devstat: Exposes device statistics. [FreeBSD]
- gmond: Exposes statistics from Ganglia. [any]
- interrupts: Exposes detailed interrupts statistics. [Linux, OpenBSD]
- ipvs: Exposes IPVS status from /proc/net/ip_vs and stats from /proc/net/ip_vs_stats. [Linux]
- ksmd: Exposes kernel and system statistics from /sys/kernel/mm/ksm. [Linux]
- logind: Exposes session counts from logind. [Linux]
- megacli: Exposes RAID statistics from MegaCLI. [Linux]
- meminfo_numa: Exposes memory statistics from /proc/meminfo_numa. [Linux]
- ntp: Exposes time drift from an NTP server. [any]
- runit: Exposes service status from runit. [any]
- supervisord: Exposes service status from supervisord. [any]
- systemd: Exposes service and system status from systemd. [Linux]
- tcpstat: Exposes TCP connection status information from /proc/net/tcp and /proc/net/tcp6. (Warning: the current version has potential performance issues in high-load situations.) [Linux]
Textfile Collector: The textfile collector is similar to the Pushgateway, in that it allows exporting of statistics from batch jobs. It can also be used to export static metrics, such as what role a machine has. The Pushgateway should be used for service-level metrics; the textfile module is for metrics that are tied to a machine. To use it, set the --collector.textfile.directory flag on the node exporter. The collector will parse all files in that directory matching the glob *.prom using the text format.
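To make the textfile workflow concrete, a sketch (the directory and metric name are invented for the example) might be:
  $ echo 'node_role{role="webserver"} 1' > /var/lib/node_exporter/textfile/role.prom
  $ node_exporter --collector.textfile.directory=/var/lib/node_exporter/textfile
Prometheus then picks the metric up on the next scrape of the node exporter.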
RPMPackage prometheus-iperf-exporter-0.1.3-1.lbn36.x86_64
The iPerf3 exporter is configured via command-line flags. To view all available command-line flags, run iperf3_exporter -h. The timeout of each probe is automatically determined from the scrape_timeout in the Prometheus config. This can also be limited by the iperf3.timeout command-line flag. If neither is specified, it defaults to 30 seconds.
Prometheus Configuration: The iPerf3 exporter needs to be passed the target as a parameter; this can be done with relabelling. Optionally, pass the port that the target iperf3 server is listening on as the "port" parameter. Example config:
  scrape_configs:
    - job_name: 'iperf3'
      metrics_path: /probe
      static_configs:
        - targets:
            - foo.server
            - bar.server
      params:
        port: ['5201']
      relabel_configs:
        - source_labels: [__address__]
          target_label: __param_target
        - source_labels: [__param_target]
          target_label: instance
        - target_label: __address__
          replacement: 127.0.0.1:5201    # The iPerf3 exporter's real hostname:port.
RPMPackage prometheus-flower-exporter-1.0.0-1.lbn36.noarch
Exporter for Celery/Flower metrics, inspired by https://github.com/vooydzig/flower-prometheus-exporter