Unlike general-purpose databases such as SQL Server or MySQL, InfluxDB is built specifically for time-series data—measurements tied to particular moments. It scales to handle large volumes and was created by InfluxData, a company located in the Bay Area.
In this article, we’ll explore how to write and query data in an InfluxDB database using Peakboard.
Setting up Influx
The easiest way to get started is by launching the official InfluxDB Docker image. The following command spins up a container in under a minute and persists its data in a Docker volume.
docker run -d \
  --name influxdb \
  -p 8086:8086 \
  -v influxdb_data:/var/lib/influxdb2 \
  -e DOCKER_INFLUXDB_INIT_MODE=setup \
  -e DOCKER_INFLUXDB_INIT_USERNAME=admin \
  -e DOCKER_INFLUXDB_INIT_PASSWORD=supersecret \
  -e DOCKER_INFLUXDB_INIT_ORG=LosPollosHermanos \
  -e DOCKER_INFLUXDB_INIT_BUCKET=DismantleBucket \
  influxdb:2
Once the container is running, InfluxDB listens on port 8086 at http://localhost:8086/. Log in to the web UI with the admin credentials and confirm that your organization and initial bucket are in place.
Next, generate an API token for external reads and writes via the “API Tokens” section.
With these steps complete, you’re ready to send data.
Writing data
InfluxDB exposes a straightforward HTTP API for reads and writes. To insert data, we POST to /api/v2/write?org=LosPollosHermanos&bucket=DismantleBucket&precision=s. The org and bucket parameters identify where the point will be stored, and precision=s indicates that timestamps use second resolution.
The request body uses InfluxDB’s line protocol. In the example below, the measurement is temperature, we tag the sensor as lab, and we store the field value 26.3.
temperature,sensor=lab value=26.3
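For illustration, here is a minimal Python sketch that assembles such a line-protocol string, including the escaping rules for commas, spaces, and equals signs in measurement and tag names. The helper name build_line is our own, not part of any InfluxDB client library:

```python
def build_line(measurement, tags, fields, timestamp=None):
    """Assemble one InfluxDB line-protocol point.

    Escapes commas, spaces, and equals signs as the line protocol
    requires. Numeric field values are emitted as-is (floats); note
    that InfluxDB integer fields would additionally need an "i" suffix.
    """
    def esc(s):
        return s.replace(",", r"\,").replace(" ", r"\ ").replace("=", r"\=")

    tag_part = "".join(f",{esc(k)}={esc(v)}" for k, v in tags.items())
    field_part = ",".join(
        f"{esc(k)}={v}" if isinstance(v, (int, float)) else f'{esc(k)}="{v}"'
        for k, v in fields.items()
    )
    line = f"{esc(measurement)}{tag_part} {field_part}"
    if timestamp is not None:
        line += f" {timestamp}"  # epoch seconds when precision=s
    return line

print(build_line("temperature", {"sensor": "lab"}, {"value": 26.3}))
# -> temperature,sensor=lab value=26.3
```

Several points can be written in one request by joining lines with newline characters.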
In our sample Peakboard app the user can enter a value.
The next screenshot shows the Building Block behind the submit button. We use a placeholder text to inject the user’s value into the body of the HTTP call. Besides the body, we need to add two headers:
Content-Type must be set to text/plain
Authorization must be set to Token <YourInfluxAPIToken>
After sending the request you should see the submitted value in the Influx Data Explorer.
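As a sketch of what the Building Block does under the hood, the same write can be issued with Python’s standard library. The host, organization, bucket, and token are placeholders taken from this article’s examples:

```python
import urllib.request

# Placeholders -- substitute your own instance details and API token.
URL = "http://localhost:8086/api/v2/write?org=LosPollosHermanos&bucket=DismantleBucket&precision=s"
TOKEN = "<YourInfluxAPIToken>"

body = "temperature,sensor=lab value=26.3"
req = urllib.request.Request(
    URL,
    data=body.encode("utf-8"),
    headers={
        "Content-Type": "text/plain",
        "Authorization": f"Token {TOKEN}",
    },
    method="POST",
)

# Actually sending requires a running InfluxDB instance:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)  # InfluxDB answers 204 No Content on success
```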
Querying data
Querying data is just as straightforward. A single API call returns the results, but the response comes back as annotated CSV with multiple header lines, which requires extra parsing. To avoid that hassle, use the InfluxDB Extension, which handles the formatting for you.
The screenshot below shows the extension in action. Provide the following parameters:
URL needs to be filled with the URL of the query API call, including the organization name, e.g. http://localhost:8086/api/v2/query?org=LosPollosHermanos
Token is the API token
FluxQuery is the query string that describes the requested data
Our sample query is straightforward: read from DismantleBucket, limit the range to the last two hours, filter for the temperature measurement and its value field, and then return the maximum value in that window.
from(bucket: "DismantleBucket")
|> range(start: -2h)
|> filter(fn: (r) => r._measurement == "temperature")
|> filter(fn: (r) => r._field == "value")
|> max()
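If you would rather call the raw query API without the extension, the annotated-CSV response can be parsed with a few lines of Python. The parser below and the trimmed sample response are illustrative sketches, not part of any official client:

```python
import csv
import io

def parse_flux_csv(text):
    """Parse InfluxDB's annotated-CSV query response into dicts.

    Annotation rows start with '#'; the first non-annotation row of
    each table is the header, and tables are separated by blank lines.
    """
    rows = []
    header = None
    for record in csv.reader(io.StringIO(text)):
        if not record or not any(record):
            header = None          # blank line -> next table restarts
            continue
        if record[0].startswith("#"):
            continue               # skip #datatype / #group / #default
        if header is None:
            header = record
            continue
        rows.append(dict(zip(header, record)))
    return rows

# A trimmed sample of what POSTing the Flux query to
# /api/v2/query?org=LosPollosHermanos (with Content-Type
# application/vnd.flux and the Token header) returns:
sample = """\
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,double,string,string
#group,false,false,true,true,false,true,true
#default,_result,,,,,,
,result,table,_start,_stop,_value,_field,_measurement
,_result,0,2024-01-01T10:00:00Z,2024-01-01T12:00:00Z,26.3,value,temperature
"""

for row in parse_flux_csv(sample):
    print(row["_value"], row["_measurement"])
# -> 26.3 temperature
```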
The screenshot shows the result set. Besides the two timestamps (start and end of the query period), the actual value appears in the _value column. Our sample outputs a single row, but additional fields or sensors would produce multiple rows.
Result
We’ve seen how easy it is to write to and read from InfluxDB. It scales to huge sizes and is simple to use, but it’s best reserved for time-based measurements. Data without a timestamp is better stored in a different database.