Monday, January 5, 2026

Using iperf3 and Prometheus for WAN link monitoring

I have multiple FreeBSD routers in environments across the world, each with its own WAN (Internet) connectivity, connected into a private network over WireGuard VPN.

I would like to have:

  • local monitoring of Internet connectivity on each router
  • centralized monitoring, in my datacenter, of the Internet connectivity of each router

The solution is pretty simple and I will describe it in this blog post.

Key ideas 

Here are the key ideas of the solution:

  • I have my own iperf3 server at the central location (the datacenter), but other publicly available iperf3 servers can be used as well
  • I developed a Bourne shell script (iperf3_bandwidth_exporter.sh) that uses iperf3 to test Internet bandwidth between the remote router and the iperf3 server in my datacenter. The script runs every 10 minutes (via cron) and writes the results into a node_exporter textfile
    • node_exporter textfile (/var/db/node_exporter/iperf3.prom)
  • Node Exporter metrics are available on port 9100 of each remote router
  • I run Prometheus locally on every router on port 9090; it scrapes the node_exporter data and keeps it locally for the last 30 days (30-day retention)
  • I also run Prometheus centrally in the datacenter; it scrapes node_exporter data remotely from all routers and keeps it there with 365-day retention
    • this is good enough for my small environment
    • it could potentially be improved with Prometheus Remote Write (push model) to centralized storage like Thanos, Cortex, VictoriaMetrics, or Mimir
  • I have a centralized Grafana in the datacenter to visualize data from my centralized Prometheus.
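The textfile mentioned above uses the Prometheus text exposition format. As an illustration, iperf3.prom might contain something like the following (the metric names here are assumptions, not necessarily those written by the real script; see the GitHub repository for the actual ones):

```
# HELP iperf3_download_mbps Measured download bandwidth in Mbit/s
# TYPE iperf3_download_mbps gauge
iperf3_download_mbps 94.37
# HELP iperf3_upload_mbps Measured upload bandwidth in Mbit/s
# TYPE iperf3_upload_mbps gauge
iperf3_upload_mbps 20.12
```

node_exporter picks up every *.prom file in its textfile directory and exposes the metrics alongside its built-in ones.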

That's it. Simple, isn't it?

Solution Components

Everything is based on open source. The source code of my iperf3_bandwidth_exporter script is publicly available in the GitHub repository iperf3_bandwidth_exporter.

The solution was developed and tested on FreeBSD and is composed of:

  • iperf3 Monitoring Script
  • Node Exporter
  • Prometheus
  • Grafana 

Let's dive into more details of each component.  

iperf3 Monitoring Script

The iperf3 Monitoring Script (iperf3_bandwidth_exporter.sh) was developed and tested on FreeBSD. The script uses iperf3 to monitor Internet bandwidth and creates a node_exporter textfile (/var/db/node_exporter/iperf3.prom) that Prometheus can scrape for longer data retention and visualization.
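The core of such a script can be sketched as follows. This is a hypothetical, simplified sketch: the metric name, the JSON field path, and the use of a canned value instead of a live iperf3 run are my illustrative assumptions, not necessarily what the real iperf3_bandwidth_exporter.sh does (see the GitHub repository for that).

```shell
#!/bin/sh
# Hypothetical sketch of an iperf3 textfile exporter.
# Real output path would be /var/db/node_exporter/iperf3.prom; a temp
# directory is used here so the sketch is runnable anywhere.
PROM_FILE="${TMPDIR:-/tmp}/iperf3.prom"
TMP_FILE="${PROM_FILE}.tmp"

# In the real script, something like this would produce the JSON result:
#   json=$(timeout 60 iperf3 -c iperf.example.com -t 30 --json)
#   bps=$(printf '%s' "$json" | jq '.end.sum_received.bits_per_second')
# A canned value keeps this sketch self-contained:
bps=94371840

# Convert bits/s to Mbit/s with two decimal places (the real script
# lists bc as a dependency; awk does the same arithmetic here).
mbps=$(awk -v b="$bps" 'BEGIN { printf "%.2f", b / 1000000 }')

# Write to a temporary file first and rename it, so node_exporter
# never scrapes a half-written file.
{
  printf '# HELP iperf3_download_mbps Measured download bandwidth in Mbit/s\n'
  printf '# TYPE iperf3_download_mbps gauge\n'
  printf 'iperf3_download_mbps %s\n' "$mbps"
} > "$TMP_FILE"
mv "$TMP_FILE" "$PROM_FILE"
```

The write-then-rename step matters: the rename is atomic on the same filesystem, so a scrape can never observe a partially written file.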

The monitoring script requires

  • /bin/sh - Bourne Shell (default shell in FreeBSD)
  • /usr/local/bin/iperf3 - perform network throughput tests
  • /bin/timeout - run a command with a time limit 
  • /usr/local/bin/jq - Command-line JSON processor
  • /usr/bin/bc - arbitrary-precision decimal arithmetic language and calculator

The script must be located at /usr/local/bin/iperf3_bandwidth_exporter.sh. By default it is run by the cron daemon every 10 minutes and measures Internet bandwidth for 30 seconds. These values can be changed if needed.
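The cron entry could look like this (the path and schedule match the description above; the exact line may differ from the repository's instructions):

```
# /etc/crontab entry: run the exporter every 10 minutes as root
*/10  *  *  *  *  root  /usr/local/bin/iperf3_bandwidth_exporter.sh
```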

Node Exporter

On FreeBSD, node_exporter is a lightweight system metrics exporter used with Prometheus. It runs as a user-space daemon and exposes host-level metrics (such as CPU usage, memory, disk I/O, filesystems, network interfaces, load averages, and selected kernel data via sysctl) over an HTTP endpoint, typically on port 9100, so they can be scraped and stored by Prometheus for monitoring and alerting. It is available as a prebuilt package or port, integrates cleanly with the FreeBSD service framework (rc.d), and focuses on read-only metric collection without modifying system state, making it suitable for both servers and appliances.

On FreeBSD, the command ... 

sysrc node_exporter_textfile_dir="/var/db/node_exporter" 

... uses sysrc to permanently set an rc.conf variable that tells node_exporter where its textfile collector directory is located. Specifically, it records the path /var/db/node_exporter in the system configuration, so that when node_exporter starts via the rc.d system, it reads custom, pre-generated metric files (*.prom) from that directory and exposes them alongside its built-in metrics. The setting survives reboots and service restarts.

The file /var/db/node_exporter/iperf3.prom is generated regularly by the iperf3 monitoring script /usr/local/bin/iperf3_bandwidth_exporter.sh.
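Putting it together, the relevant rc.conf lines (set via sysrc as shown above, or edited directly) would look like this; this is a sketch, so check the FreeBSD port's rc script for the exact variable names:

```
node_exporter_enable="YES"
node_exporter_textfile_dir="/var/db/node_exporter"
```

After that, service node_exporter start launches the daemon, and the metrics become available on port 9100.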

Prometheus

Prometheus is an open-source monitoring and alerting system designed for reliability and scalability. It collects time-series metrics from targets (such as servers, network devices, and applications) by periodically scraping HTTP endpoints (exporters), stores those metrics locally with efficient compression, and allows powerful querying and aggregation with its PromQL language. It is commonly used in modern infrastructure and cloud environments to observe system health, visualize trends (often via Grafana), and trigger alerts through an integrated alerting component when predefined conditions are met.

In our solution, we use Prometheus 

  • locally on each router, so it does not depend on the central solution
  • centrally in the datacenter, so we have a single datasource for the Grafana visualization tool
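As an illustration, the central Prometheus scrape configuration might look like this (the hostnames are placeholders; the retention described earlier is set separately via the --storage.tsdb.retention.time command-line flag, e.g. 365d centrally and 30d on the routers):

```yaml
# prometheus.yml on the central Prometheus (illustrative)
scrape_configs:
  - job_name: "routers"
    scrape_interval: 60s
    static_configs:
      - targets:
          - "router-eu.example.net:9100"
          - "router-us.example.net:9100"
```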

Grafana

Grafana is an open-source platform for monitoring, visualization, and analytics. It lets you turn metrics, logs, and other time-series data into interactive dashboards with graphs, tables, and alerts.  Grafana itself does not store data. Instead, it connects to data sources. I use centralized Prometheus as my Grafana datasource.
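A Grafana time-series panel over the central Prometheus datasource can then graph the measured bandwidth per router with a simple PromQL query. The metric and job names below are assumptions that must match whatever the exporter script and scrape configuration actually use:

```
iperf3_download_mbps{job="routers"}
```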

Installation and Configuration

A detailed installation procedure is described in the GitHub README file.
