

Telegraf, MQTT & InfluxDB


I’m working on exposing some temperature/humidity readings on the internet, and the first step is collecting MQTT metrics and putting them in some kind of time series database.

This post describes the part where we consume MQTT messages and store them in a database: we’ll use Telegraf to consume the MQTT messages and InfluxDB to store the data.

(Figure: the components involved in this solution.)

The following is out of scope for this entry:

  • Installing, setting up and configuring InfluxDB (you’ll need an API token with write permission to the bucket you want to store data in)
  • Installing, setting up and configuring the MQTT source (I will show the relevant bridge configuration, though)
  • Installing, setting up and updating the rrdtool graph (I still have to figure this one out myself)

In my case, I have a few sensors in the family cabin whose values are published to an MQTT broker. From this broker, I’ve set up a bridge to the machine running Telegraf/InfluxDB, which also has a local mosquitto instance. Telegraf then subscribes to all bridged topics, parses the messages and inserts them into my InfluxDB instance.
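In other words, the pipeline looks roughly like this:

sensors → mosquitto (cabin) → bridge over TLS → mosquitto (VPS) → Telegraf → InfluxDB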

Telegraf configuration #

With everything above out of the way, configuring Telegraf is relatively straightforward!

In this case, as I’m only interested in temperature & humidity readings, I’ve stripped down the default configuration and just added an InfluxDB output definition and an MQTT consumer input definition:

[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = "0s"

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "YOURTOKENGOESHERE"
  organization = "foobar"
  bucket = "baz"

[[inputs.mqtt_consumer]]
  servers = ["tcp://127.0.0.1:1883"]
  topics = [
    "home/OpenMQTTGateway_lilygo_rtl_433_ESP/RTL_433toMQTT/#"
  ]

  topic_tag = "topic"
  qos = 0
  data_format = "json"

  username = "telegraf"
  password = "notmyrealpassword"

  # Promote these JSON keys to InfluxDB tags
  tag_keys = [ "model", "id", "channel" ]

  # String fields in the payload are dropped unless listed here
  json_string_fields = []

  [[inputs.mqtt_consumer.topic_parsing]]
    # Extract tags from the topic path; "_" skips a segment
    topic = "+/+/+/+/+/+"
    tags = "site/_/_/model/channel/device_id"

The only tricky part was understanding the mqtt_consumer configuration. I was lucky: the data structure I get from my sensors is just a flat object, which the json data format can handle without issues.

For context, this is a prettified payload from my sensors:

{
    "model": "Bresser-3CH",
    "id": 130,
    "channel": 1,
    "battery_ok": 1,
    "temperature_C": 6.777781,
    "humidity": 22,
    "mic": "CHECKSUM",
    "protocol": "Bresser Thermo-/Hygro-Sensor 3CH",
    "rssi": -59,
    "duration": 1839996
}
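Given that payload and the topic parsing above, each message should end up in InfluxDB as a point roughly like this (line protocol; the topic and device_id values are illustrative, based on my topic layout):

mqtt_consumer,channel=1,device_id=130,id=130,model=Bresser-3CH,site=home,topic=home/OpenMQTTGateway_lilygo_rtl_433_ESP/RTL_433toMQTT/Bresser-3CH/1/130 battery_ok=1,duration=1839996,humidity=22,rssi=-59,temperature_C=6.777781

Note that the string fields (mic, protocol) are dropped, since json_string_fields is empty.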

mosquitto bridges #

As a bonus, this is how I’ve set up the bridging in mosquitto. We have two systems, producer (family cabin) and consumer (cloud VPS), and data is pushed from producer to consumer as described above.

You need to have ACLs and three user accounts ready.

  • producer side – one account for connecting to the local broker¹, and one account on the consumer system
  • consumer side – one account for the producer to connect to our side, and one local account for telegraf to use²

My ACLs look something like this:

# producer side
user producerlocal
topic read home/#

# consumer side
user producerremote
topic readwrite home/#

user telegraf
topic read home/#
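With the accounts and ACLs in place, it’s easy to verify that bridged messages actually arrive on the consumer side by subscribing with the telegraf account (host and password are placeholders, as before):

mosquitto_sub -h 127.0.0.1 -p 1883 -u telegraf -P 'notmyrealpassword' -t 'home/#' -v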

Parts of the mosquitto configuration on the consumer system, which requires a client certificate to connect:

listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/consumer.crt
dhparamfile /etc/mosquitto/dhparam.pem
keyfile /etc/mosquitto/certs/consumer.key
require_certificate true
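A quick way to test the TLS setup from the producer side is openssl s_client with the client certificate we’ll generate below; if the handshake succeeds, the bridge should be able to connect too:

openssl s_client -connect consumer.example.com:8883 -CAfile ca.crt -cert producer.crt -key producer.key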

Parts of the mosquitto configuration on the producer system:

connection consumer
address consumer.example.com:8883
bridge_cafile /etc/mosquitto/certs/ca.crt
bridge_certfile /etc/mosquitto/certs/producer.crt
bridge_keyfile /etc/mosquitto/certs/producer.key
keepalive_interval 59
local_password notmyrealpassword
local_username producerlocal
remote_password notmyrealpasswordeither
remote_username producerremote
topic # out 2 home/ home/
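The topic line is the dense part. The general form is:

# topic <pattern> [direction [qos [local prefix [remote prefix]]]]
topic # out 2 home/ home/

So this forwards everything matching home/# outwards (producer → consumer) at QoS 2, republishing local home/<topic> as home/<topic> on the remote side.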

To generate the certificate structure used above:

# Create a local certificate authority
openssl req -new -x509 -days 3650 -extensions v3_ca -keyout ca.key -out ca.crt

# Create a key file and a CSR, then sign the CSR with the CA you've just created

# producer
openssl genrsa -out producer.key 2048
openssl req -out producer.csr -key producer.key -new
openssl x509 -req -in producer.csr -CA ../ca/ca.crt -CAkey ../ca/ca.key -CAcreateserial -out producer.crt -days 3650

# consumer
openssl genrsa -out consumer.key 2048
openssl req -out consumer.csr -key consumer.key -new
openssl x509 -req -in consumer.csr -CA ../ca/ca.crt -CAkey ../ca/ca.key -CAcreateserial -out consumer.crt -days 3650

Then copy producer.key, producer.crt and ca.crt to the producer system, and consumer.key / consumer.crt / ca.crt to the consumer.
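Before deploying, it doesn’t hurt to double check that the signed certificates actually chain back to the CA:

openssl verify -CAfile ca.crt producer.crt consumer.crt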

Next up is understanding how to use rrdtool.


  1. This account is used by the bridge, which will just pretend to be a client and connect to the local broker, read messages and then publish them to the remote side. ↩︎

  2. This account will also pretend to be a normal client, connect to the local (consumer) broker, read the messages and write these to InfluxDB. ↩︎