Development tools for Alloy: linting and component listing.
A utility for managing Jsonnet dashboards against the Grafana API.
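Dashboards managed this way ultimately reach Grafana through its HTTP dashboard API. The sketch below (TypeScript) is not this utility's own interface, only an illustration of the endpoint such a tool drives; the host, token, and dashboard contents are placeholders:

// Sketch: upload an already-rendered dashboard JSON to Grafana's dashboard API.
// A Jsonnet-based workflow would evaluate the Jsonnet source to JSON first,
// then upload the result the same way.
const GRAFANA_URL = "http://localhost:3000";   // placeholder
const GRAFANA_TOKEN = "<api-token>";           // placeholder

async function uploadDashboard(): Promise<void> {
  const dashboard = {
    uid: "example",
    title: "Example dashboard",
    panels: [],
  };
  const res = await fetch(`${GRAFANA_URL}/api/dashboards/db`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${GRAFANA_TOKEN}`,
    },
    // overwrite: true replaces an existing dashboard with the same uid
    body: JSON.stringify({ dashboard, overwrite: true }),
  });
  if (!res.ok) {
    throw new Error(`upload failed: ${res.status}`);
  }
}

uploadDashboard();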
Rendering images requires a lot of memory, mainly because Grafana creates browser instances in the background for the actual rendering. We recommend a minimum of 16GB of free memory on the system rendering images; rendering multiple images in parallel requires an even larger memory footprint. You can use the remote rendering service to render images on a remote system, so your local system resources are not affected.

Configuration
-------------

Install this package and edit the rendering section in your Grafana config:

[rendering]
server_url = http://localhost:8081/render
callback_url = http://localhost:3000/
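For remote rendering, the renderer typically runs as a separate service and Grafana points at it. One common way to run it (an assumption here, not part of this package's own instructions) is the official Docker image:

$ docker run -d --name=renderer --network=host grafana/grafana-image-renderer:latest

With the renderer listening on port 8081, the server_url above stays the same; adjust callback_url so the renderer can reach your Grafana instance.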
k6 is a modern load testing tool, building on our years of experience in the load and performance testing industry. It provides a clean, approachable scripting API, local and cloud execution, and flexible configuration.
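For illustration, a minimal sketch of a k6 test script; the target URL, virtual-user count, and duration are placeholders to adjust for your own test:

// Minimal k6 script sketch.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent virtual users
  duration: '30s',  // total test duration
};

export default function () {
  const res = http.get('https://test.k6.io');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);         // pause between iterations for each virtual user
}

Run it locally with:

$ k6 run script.js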
Kiosk utility for Grafana.
Like Prometheus, but for logs.
Loki: like Prometheus, but for logs.

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Compared to other log aggregation systems, Loki:

- does not do full-text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
- indexes and groups log streams using the same labels you are already using with Prometheus, enabling you to seamlessly switch between metrics and logs.
- is an especially good fit for storing Kubernetes Pod logs. Metadata such as Pod labels is automatically scraped and indexed.
- has native support in Grafana (requires Grafana v6.0 or newer).

A Loki-based logging stack consists of three components:

- promtail is the agent, responsible for gathering logs and sending them to Loki.
- loki is the main server, responsible for storing logs and processing queries.
- Grafana, for querying and displaying the logs.

Loki is like Prometheus, but for logs: we prefer a multidimensional, label-based approach to indexing and want a single-binary, easy-to-operate system with no dependencies. Loki differs from Prometheus by focusing on logs instead of metrics, and by delivering logs via push instead of pull.
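As a sketch of the label-plus-push model described above (promtail normally does this for you; the labels, host, and log line here are placeholders), a single entry can be pushed to Loki's HTTP API like so (TypeScript):

// Sketch only: push one log line to a local Loki instance.
// The push body pairs a label set (the "stream") with timestamped lines;
// timestamps are nanoseconds since the epoch, encoded as strings.
async function pushOneLine(): Promise<void> {
  const body = {
    streams: [
      {
        stream: { job: "demo", app: "example" },  // labels, Prometheus-style
        values: [[`${BigInt(Date.now()) * 1_000_000n}`, "hello from a log stream"]],
      },
    ],
  };
  const res = await fetch("http://localhost:3100/loki/api/v1/push", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`push failed: ${res.status}`);
  }
}

pushOneLine();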
If you are running on Grafana Cloud, use:

$ export GRAFANA_ADDR=https://logs-us-west1.grafana.net
$ export GRAFANA_USERNAME=<username>
$ export GRAFANA_PASSWORD=<password>

Otherwise, you can point LogCLI to a local instance directly, without needing a username and password:

$ export GRAFANA_ADDR=http://localhost:3100

Note: If you are running Loki behind a proxy server with authentication configured, you will also have to set GRAFANA_USERNAME and GRAFANA_PASSWORD accordingly.

$ logcli labels job
https://logs-dev-ops-tools1.grafana.net/api/prom/label/job/values
cortex-ops/consul
cortex-ops/cortex-gw
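Once the address is set, logs are queried with label selectors in the same style; for example, using one of the job values listed above:

$ logcli query '{job="cortex-ops/consul"}'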