# QuestDB Kafka connector
QuestDB ships a QuestDB Kafka connector for fast ingestion from Kafka into QuestDB.
This page has the following main sections:
- A QuestDB Kafka connector integration guide
- A configuration manual for the connector
- FAQ
## Integration guide

This guide shows the steps to use the QuestDB Kafka connector to read JSON data
from Kafka topics and write it as rows into a QuestDB table.
### Prerequisites

You will need the following:
- Kafka
- A running QuestDB instance
### Configure Kafka

Before starting Kafka, the following steps must be completed:
- Download the connector file.

The Apache Kafka distribution contains the Kafka Connect framework, but the
QuestDB-specific components need to be downloaded from the QuestDB Kafka
connector GitHub page, under the zip archive named
`kafka-questdb-connector-<version>-bin.zip`.

- Copy the files to the Kafka `libs` directory.

Once downloaded, unzip the contents of the archive and copy the required `.jar`
files to the Kafka `libs` directory:
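A sketch of this step, assuming the archive unpacks into a directory containing
the JARs (adjust the paths to your layout):

```bash
# Unpack the connector archive and copy its JARs into Kafka's libs directory
unzip kafka-questdb-connector-<version>-bin.zip
cp kafka-questdb-connector/*.jar /path/to/kafka/libs/
```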
**Tip:** You can automate downloading the latest connector package with a
single command; a sketch is shown below.
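This is only a sketch, assuming the `questdb/kafka-questdb-connector` repository
path and that `curl` and `jq` are available; check the connector's GitHub page
for the exact command:

```bash
# Download the latest -bin.zip release asset via the GitHub releases API
curl -s https://api.github.com/repos/questdb/kafka-questdb-connector/releases/latest \
  | jq -r '.assets[] | select(.name | endswith("-bin.zip")) | .browser_download_url' \
  | xargs curl -L -O
```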
- Set the configuration file.

A configuration file `/path/to/kafka/config/questdb-connector.properties` must
be created for Kafka Connect in standalone mode. The host and port of the
running QuestDB server must be defined. A topic can be specified under the
`topics={mytopic}` key.

The example below creates a configuration file. It assumes a running QuestDB
server on the ILP default port, `9009`, creates a reader from a Kafka topic,
`example-topic`, and writes into a QuestDB table, `example_table`:
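A minimal sketch of that file; the `connector.class` name should match the JAR
you downloaded, and the converter settings assume plain JSON messages without
embedded schemas:

```properties
name=questdb-sink
connector.class=io.questdb.kafka.QuestDBSinkConnector
host=localhost:9009
topics=example-topic
table=example_table
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
# Skip storing the (empty) message key in the target table
include.key=false
```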
### Start Kafka

The commands listed in this section must be run from the Kafka home directory
and in the order shown below.
- Start the Kafka Zookeeper used to coordinate the server
- Start a Kafka server
- Start the QuestDB Kafka connector
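Assuming a standard Kafka distribution layout, the three commands might look
like this sketch (run each in its own terminal; the connector properties path
is the file created above):

```bash
# 1. Start Zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties

# 2. Start the Kafka broker
bin/kafka-server-start.sh config/server.properties

# 3. Start Kafka Connect in standalone mode with the QuestDB connector config
bin/connect-standalone.sh config/connect-standalone.properties config/questdb-connector.properties
```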
### Publish messages

Messages can be published via the console producer script:
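For example, assuming the broker is listening on the default port `9092`:

```bash
bin/kafka-console-producer.sh --topic example-topic --bootstrap-server localhost:9092
```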
A greater-than symbol, `>`, indicates that a message can be published to the
example topic from the interactive session. Paste the following minified JSON as
a single line to publish the message and create the table `example_table` in the
QuestDB instance:
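The exact payload is not important for this guide; a minimal illustrative
message (the field names are hypothetical) could be:

```json
{"firstname": "Arthur", "lastname": "Dent", "age": 42}
```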
### Verify the integration

To verify that the data has been ingested into the `example_table` table, the
following request to QuestDB's `/exp` REST API endpoint can be made to export
the table contents via the curl command:
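Assuming QuestDB's REST API is listening on the default port `9000`:

```bash
curl -G --data-urlencode "query=SELECT * FROM example_table;" http://localhost:9000/exp
```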
The expected response based on the example JSON message published above will be similar to the following:
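With the illustrative message above, the exported CSV would look roughly like
this (the `timestamp` column is assigned at ingestion time, so your value will
differ):

```
"firstname","lastname","age","timestamp"
"Arthur","Dent",42,"2023-06-01T12:00:00.000000Z"
```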
If you can see the expected result then congratulations, you have successfully created and executed your first Kafka to QuestDB pipeline! 🎉
### Additional sample projects

You can find additional sample projects on the QuestDB Kafka connector GitHub
project page. It includes a sample integration with Debezium for Change Data
Capture from PostgreSQL.
## Configuration manual

This section lists configuration options as well as further information about
the Kafka Connect connector.
### Configuration Options

The connector supports the following configuration options:
Name | Type | Example | Default | Meaning |
---|---|---|---|---|
topics | string | orders | N/A | Topics to read from |
key.converter | string | org.apache.kafka.connect.storage.StringConverter | N/A | Converter for keys stored in Kafka |
value.converter | string | org.apache.kafka.connect.json.JsonConverter | N/A | Converter for values stored in Kafka |
host | string | localhost:9009 | N/A | Host and port where QuestDB server is running |
table | string | my_table | Same as Topic name | Target table in QuestDB |
key.prefix | string | from_key | key | Prefix for key fields |
value.prefix | string | from_value | N/A | Prefix for value fields |
skip.unsupported.types | boolean | false | false | Skip unsupported types |
timestamp.field.name | string | pickup_time | N/A | Designated timestamp field name |
timestamp.units | string | micros | auto | Designated timestamp field units |
include.key | boolean | false | true | Include message key in target table |
symbols | string | instrument,stock | N/A | Comma separated list of columns that should be symbol type |
doubles | string | volume,price | N/A | Comma separated list of columns that should be double type |
username | string | user1 | admin | User name for QuestDB. Used only when token is non-empty |
token | string | QgHCOyq35D5HocCMrUGJinEsjEscJlC | N/A | Token for QuestDB authentication |
tls | boolean | true | false | Use TLS for QuestDB connection |
### How does the connector work?

The connector reads data from Kafka topics and writes it to QuestDB tables via
ILP (InfluxDB Line Protocol). The connector converts each field in the Kafka
message to a column in the QuestDB table. Structures and maps are flattened
into columns.
Example: Consider the following Kafka message:
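For instance, a message shaped like this (reconstructed from the table below):

```json
{
  "firstname": "John",
  "lastname": "Doe",
  "age": 30,
  "address": {
    "street": "Main Street",
    "city": "New York"
  }
}
```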
The connector will create a table with the following columns:
| firstname (string) | lastname (string) | age (long) | address_street (string) | address_city (string) |
| --- | --- | --- | --- | --- |
| John | Doe | 30 | Main Street | New York |
### Supported serialization formats

The connector does not deserialize data independently. It relies on Kafka
Connect converters. The connector has been tested predominantly with JSON, but
it should work with any converter, including Avro. Converters can be configured
using the `key.converter` and `value.converter` options; both are included in
the Configuration Options table above.
### Designated timestamps

The connector supports designated timestamps. If the message contains a
timestamp field, the connector can use it as the timestamp for the row. The
field name must be configured using the `timestamp.field.name` option. The
field must either be an integer or a timestamp. When the field is set to an
integer, the connector will autodetect its units. This works for timestamps
after 04/26/1970, 5:46:40 PM. The units can also be configured explicitly using
the `timestamp.units` option, which supports the following values:

- `nanos`
- `micros`
- `millis`
- `auto` (default)
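For example, to use a field named `pickup_time` (as in the Configuration
Options table above) as the designated timestamp, with explicit microsecond
units:

```properties
timestamp.field.name=pickup_time
timestamp.units=micros
```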
### Symbol

QuestDB supports a special type called symbol. Use the `symbols` configuration
option to specify which columns should be created as the symbol type.
### Numeric type inference for floating point types

When a configured Kafka Connect deserializer provides a schema, the connector
uses it to determine column types. If a schema is unavailable, the connector
infers the type from the value. This might produce unexpected results for
floating point numbers, which may be interpreted as `long` initially and then
generate an error.

Consider this example:
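Suppose the first message on a topic carries a whole-number `volume`
(illustrative payload):

```json
{"instrument": "BTC-USD", "volume": 42}
```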
The Kafka Connect JSON converter deserializes the `volume` field as a `long`
value. The connector sends it to the QuestDB server as a `long` value. If the
target table does not have a `volume` column, the database creates a `long`
column. If the next message contains a floating point value for the `volume`
field, the connector sends it to QuestDB as a `double` value. This causes an
error because the existing `volume` column is of type `long`.
To avoid this problem, the connector can be configured to send selected numeric
columns as `double` regardless of the actual initial input value. Use the
`doubles` configuration option to specify which columns the connector should
always send as the `double` type.
### Target table considerations

When a target table does not exist in QuestDB, it will be created
automatically. This is the recommended approach for development and testing.

In production, it is recommended to create the table up front with the SQL
`CREATE TABLE` statement, because it gives you more control over the table
schema, allowing per-table partitioning, creating indexes, etc.
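A sketch of such a statement with a hypothetical schema (adapt the column
names, types, and partitioning to your data):

```sql
CREATE TABLE trades (
    instrument SYMBOL,
    price DOUBLE,
    volume DOUBLE,
    ts TIMESTAMP
) TIMESTAMP(ts) PARTITION BY DAY;
```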
## FAQ

**Does this connector work with Schema Registry?**
The connector works independently of the serialization strategy used. It relies
on Kafka Connect converters to deserialize data. Converters can be configured
using the `key.converter` and `value.converter` options; see the configuration
section above.
**I'm getting this error: "org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires 'schema' and 'payload' fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration."**
This error means that the connector is trying to deserialize data using a
converter that expects a schema. The connector does not require schemas, so you
need to configure the converter to not expect a schema. For example, if you are
using a JSON converter, you need to set `value.converter.schemas.enable=false`
or `key.converter.schemas.enable=false` in the connector configuration.
**Does this connector work with Debezium?**
Yes, it has been tested with Debezium as a source and a sample project is
available. Bear in mind that QuestDB is meant to be used as an append-only
database; hence, updates should be translated into new inserts. The connector
supports Debezium's `ExtractNewRecordState` transformation to extract the new
state of the record. The transformation by default drops DELETE events, so
there is no need to handle them explicitly.
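A sketch of enabling that transformation in the source connector configuration
(the `unwrap` alias is arbitrary):

```properties
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
```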
**QuestDB is a time series database, how does it fit into Change Data Capture via Debezium?**
QuestDB works great with Debezium! The recommended pattern is this:
transactional applications use a relational database to store the current state
of the data, while QuestDB stores the history of changes.

Example: Imagine you have a PostgreSQL table with the most recent stock prices.
Whenever a stock price changes, an application updates the PostgreSQL table.
Debezium captures each UPDATE/INSERT and pushes it as an event to Kafka. The
Kafka Connect QuestDB connector reads the events and inserts them into QuestDB.
This way, PostgreSQL holds the most recent stock prices and QuestDB holds the
history of changes. You can use QuestDB to build a dashboard with the most
recent stock prices and a chart with the history of changes.
**How can I select which fields to include in the target table?**

Use the `ReplaceField` transformation to remove unwanted fields. For example, if
you want to remove the `address` field, you can use the following configuration:
See the `ReplaceField` documentation for more details.
**I need to run Kafka Connect on Java 8, but the connector says it requires Java 11. What should I do?**
The Kafka Connect-specific part of the connector works with Java 8. The Java 11
requirement comes from the QuestDB client itself. The zip archive contains two
JARs: `questdb-kafka-connector-VERSION.jar` and `questdb-VERSION.jar`. You can
replace the latter with `questdb-VERSION-jdk8.jar` from Maven Central.

Please note that this setup is not officially supported, and you may encounter
issues. If you do, please report them to us.