
Turn JMeter Test Results into Real‑Time Grafana Dashboards with InfluxDB & Prometheus

This article walks through the most common performance‑monitoring stack—JMeter, node_exporter, Prometheus, InfluxDB, and Grafana—explaining how to configure backend listeners, send metrics, store them, and build real‑time dashboards while highlighting code snippets and query examples.


JMeter + InfluxDB + Grafana Data Display Logic

Performance monitoring typically covers OS, application servers, middleware, queues, caches, databases, networks, front‑end, load balancers, web servers, storage, and code. This article focuses on the most frequently used points and explains the logic behind visualising JMeter data with Grafana, InfluxDB, and Exporters.

When running a JMeter test, results are usually viewed in the console, via plugins, or by generating HTML reports. However, these approaches waste time, are impractical for high concurrency, consume excessive memory for long runs, and make post‑run analysis cumbersome.

Using JMeter's Backend Listener to send metrics asynchronously to InfluxDB (or Graphite) solves these problems. The Backend Listener has been available since JMeter 2.13 (Graphite) and gained a native InfluxDB client in JMeter 3.2. The metrics include TPS, response time, thread count, and error rate, and are stored in InfluxDB for later visualisation in Grafana.
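The listener's parameters live in the test plan itself. A minimal configuration for the InfluxDB client might look like the following (parameter names as documented for recent JMeter versions; the host, database, and application values are placeholders):

<code>influxdbMetricsSender  org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
influxdbUrl            http://influxdb-host:8086/write?db=jmeter
application            my-app
measurement            jmeter
summaryOnly            false
samplersRegex          .*
percentiles            90;95;99
testTitle              Load Test
</code>

The <code>application</code> value becomes a tag on every data point, which is what lets one InfluxDB database hold results from several test plans side by side.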

JMeter + InfluxDB + Grafana architecture

In the architecture, JMeter sends data every 30 seconds (configurable via <code>summariser.interval</code>) to InfluxDB, where two measurements—<code>events</code> and <code>jmeter</code>—store test-level events and per-transaction statistics respectively. Grafana then queries these measurements to render real-time charts such as throughput (TPS) and 95th-percentile response time. The TPS panel, for example, divides the latest sample count by the send interval:

<code>SELECT last("count") / $send_interval FROM "$measurement_name" WHERE ("transaction" =~ /^$transaction$/ AND "statut" = 'ok') AND $timeFilter GROUP BY time($__interval)</code>
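The 95th-percentile panel works the same way. Assuming the listener's <code>percentiles</code> parameter includes 95, the statistic is stored in a field named <code>pct95.0</code>, so a query in the same style (a sketch, not copied from the original dashboard) would be:

<code>SELECT mean("pct95.0") FROM "$measurement_name" WHERE ("transaction" =~ /^$transaction$/ AND "statut" = 'ok') AND $timeFilter GROUP BY time($__interval)</code>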

The resulting Grafana dashboard shows total TPS, response‑time curves, and per‑transaction statistics without needing to keep HTML reports.

JMeter Backend Listener Configuration

The Backend Listener is added to the test plan and configured with the InfluxDB write URL and an application name, which is recorded as a tag on every data point.

JMeter Backend Listener settings

Key Java code that adds metrics:

<code>private void addMetrics(String transaction, SamplerMetric metric) {
    // FOR ALL STATUS
    addMetric(transaction, metric.getTotal(), metric.getSentBytes(), metric.getReceivedBytes(), TAG_ALL,
        metric.getAllMean(), metric.getAllMinTime(), metric.getAllMaxTime(),
        allPercentiles.values(), metric::getAllPercentile);
    // FOR OK STATUS
    addMetric(transaction, metric.getSuccesses(), null, null, TAG_OK,
        metric.getOkMean(), metric.getOkMinTime(), metric.getOkMaxTime(),
        okPercentiles.values(), metric::getOkPercentile);
    // FOR KO STATUS
    addMetric(transaction, metric.getFailures(), null, null, TAG_KO,
        metric.getKoMean(), metric.getKoMinTime(), metric.getKoMaxTime(),
        koPercentiles.values(), metric::getKoPercentile);
    metric.getErrors().forEach((error, count) ->
        addErrorMetric(transaction, error.getResponseCode(), error.getResponseMessage(), count));
}
</code>

Sending the collected metrics to InfluxDB:

<code>@Override
public void writeAndSendMetrics() {
    if (!copyMetrics.isEmpty()) {
        try {
            if (httpRequest == null) {
                httpRequest = createRequest(url);
            }
            StringBuilder sb = new StringBuilder(copyMetrics.size() * 35);
            for (MetricTuple metric : copyMetrics) {
                sb.append(metric.measurement)
                  .append(metric.tag)
                  .append(" ")
                  .append(metric.field)
                  .append(" ")
                  .append(metric.timestamp + "000000") // JMeter timestamps are ms; six zeros convert to the ns InfluxDB expects
                  .append("\n");
            }
            StringEntity entity = new StringEntity(sb.toString(), StandardCharsets.UTF_8);
            httpRequest.setEntity(entity);
            lastRequest = httpClient.execute(httpRequest, new FutureCallback<HttpResponse>() {
                @Override public void completed(final HttpResponse response) {
                    int code = response.getStatusLine().getStatusCode();
                    if (MetricUtils.isSuccessCode(code)) {
                        if (log.isDebugEnabled()) {
                            log.debug("Success, number of metrics written: {}", copyMetrics.size());
                        }
                    } else {
                        log.error("Error writing metrics to influxDB Url: {}, responseCode: {}, responseBody: {}", url, code, getBody(response));
                    }
                }
                @Override public void failed(final Exception ex) { log.error("failed to send data to influxDB server : {}", ex.getMessage()); }
                @Override public void cancelled() { log.warn("Request to influxDB server was cancelled"); }
            });
        } catch (Exception e) { log.error("Exception while sending metrics", e); }
    }
}
</code>
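What the loop above assembles is InfluxDB's line protocol: measurement, comma-joined tags, a space, comma-joined fields, a space, and a nanosecond timestamp. A minimal Python sketch of the same formatting (the helper and its sample values are illustrative, not part of JMeter):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ms):
    """Build one InfluxDB line-protocol record:
    measurement,tag_set field_set timestamp_ns"""
    tag_str = "".join(f",{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    # JMeter timestamps are in milliseconds; appending six zeros
    # yields the nanosecond precision InfluxDB defaults to.
    return f"{measurement}{tag_str} {field_str} {timestamp_ms}000000"

line = to_line_protocol(
    "jmeter",
    {"application": "demo", "transaction": "login", "statut": "ok"},
    {"count": 42, "avg": 123.4},
    1700000000000,
)
print(line)
```

Batching many such lines into one HTTP POST, as the Java sender does, keeps the write overhead negligible even at high sampler throughput.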

node_exporter + Prometheus + Grafana Data Display Logic

For system‑level monitoring, the stack uses node_exporter to expose OS metrics, Prometheus to scrape and store them, and Grafana to visualise them.

node_exporter + Prometheus + Grafana architecture

node_exporter is started with a simple command, e.g. <code>./node_exporter --web.listen-address=:9200 &</code>. Prometheus is then configured by adding a scrape job to <code>prometheus.yml</code>:

<code>- job_name: 's1'
  static_configs:
  - targets: ['172.17.211.143:9200']
</code>

Grafana is then pointed at the Prometheus data source and a node_exporter dashboard (e.g., ID 11074) is imported.

Grafana node_exporter dashboard
Grafana node_exporter dashboard

Example Prometheus queries for the CPU panel, one series per mode plus an overall "busy" series derived from idle time:

<code>avg(irate(node_cpu_seconds_total{instance=~"$node",mode="system"}[30m])) by (instance)
avg(irate(node_cpu_seconds_total{instance=~"$node",mode="user"}[30m])) by (instance)
avg(irate(node_cpu_seconds_total{instance=~"$node",mode="iowait"}[30m])) by (instance)
1 - avg(irate(node_cpu_seconds_total{instance=~"$node",mode="idle"}[30m])) by (instance)
</code>

These PromQL expressions pull the per-mode CPU counters collected by node_exporter, mirroring the values that tools such as <code>top</code> derive from <code>/proc</code>.
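The underlying arithmetic is simple: each mode in /proc/stat is a cumulative counter, so utilisation is the change in busy time divided by the change in total time between two samples. A minimal Python sketch (the function and sample numbers are illustrative, not node_exporter code):

```python
def cpu_busy_fraction(prev, curr):
    """Estimate CPU utilisation from two cumulative per-mode samples,
    mirroring 1 - rate(node_cpu_seconds_total{mode="idle"})."""
    delta_total = sum(curr.values()) - sum(prev.values())
    delta_idle = curr["idle"] - prev["idle"]
    return 1 - delta_idle / delta_total

# Two hypothetical samples of cumulative CPU seconds, 100s of total CPU time apart.
prev = {"user": 100.0, "system": 50.0, "iowait": 10.0, "idle": 840.0}
curr = {"user": 130.0, "system": 60.0, "iowait": 15.0, "idle": 895.0}
print(round(cpu_busy_fraction(prev, curr), 4))
```

Prometheus's <code>irate()</code> does this delta over the last two scraped samples in the window, which is why short scrape intervals give smoother CPU curves.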

Summary

Understanding the data flow—from JMeter or node_exporter, through InfluxDB or Prometheus, to Grafana—helps performance engineers interpret metrics correctly, avoid blind reliance on visual tools, and make informed decisions during testing and analysis.

Tags: DevOps · Performance Monitoring · Prometheus · JMeter · InfluxDB · Grafana · node_exporter
Written by Efficient Ops

Efficient Ops is a public account maintained by Xiaotianguo and friends, regularly publishing widely read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
