# Capacity planning

Capacity planning should be part of the requirements for deploying QuestDB. Forecast CPU, memory, and network capacity, and combinations of these elements, based on the expected demands of the system. This page describes how to configure these system resources, with example scenarios that align with both edge cases and common setup configurations.
Most of the configuration settings referred to below, except for OS settings, are configured in QuestDB via a `server.conf` configuration file or as environment variables. For more details on applying configuration settings in QuestDB, refer to the configuration page.
To monitor various metrics of the QuestDB instances, refer to the Prometheus monitoring page or the Health monitoring page.
## Storage and filesystem

The following sections describe aspects to consider regarding the storage of data and filesystems.
### Supported filesystem

QuestDB officially supports the following filesystems:
- EXT4
- APFS
- NTFS
- OVERLAYFS (used by Docker)
Other filesystems that support the mmap feature may work with QuestDB, but they should not be used in production, as QuestDB does not run tests on them.
When an unsupported filesystem is used, QuestDB logs a warning indicating that the filesystem is unsupported.
caution
Users can't use NFS or similar distributed filesystems directly with a QuestDB database.
### Write amplification

When ingesting out-of-order (O3) data, a high disk write rate combined with high write amplification may slow down performance.

For data ingestion over PGWire, or as a further step for ILP ingestion, smaller table partitions may reduce write amplification. This applies to tables with partition directories exceeding a few hundred MBs on disk. For example, partitioning by day can be reduced to partitioning by hour, partitioning by month to partitioning by day, and so on; see the SQL sketch after the note below.
note

- In QuestDB, write amplification is calculated from the metrics: `questdb_physically_written_rows_total` / `questdb_committed_rows_total`.
- Partitions are defined when a table is created. Refer to CREATE TABLE for more information.
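As a minimal illustration of choosing a smaller partition size at table-creation time, the sketch below partitions by hour rather than by day (the table and column names are hypothetical):

```sql
-- Hypothetical example: partitioning by hour instead of by day
-- keeps each partition directory smaller on disk.
CREATE TABLE trades (
    ts TIMESTAMP,
    symbol SYMBOL,
    price DOUBLE
) TIMESTAMP(ts) PARTITION BY HOUR;
```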
## CPU and RAM configuration

This section describes configuration strategies based on the forecast behavior of the database.
### RAM size

We recommend having at least 8GB of RAM for basic workloads and 32GB for more advanced ones.

For relatively small datasets, typically a few to a few dozen GB, read-heavy workloads can benefit from maximizing use of the OS page cache. Users may consider increasing available RAM to improve the speed of read operations.
### Memory page size configuration

For frequent out-of-order (O3) writes over a high number of columns or tables, performance may be impacted by a memory page size that is too big, as this increases the demand for RAM. The memory page size, `cairo.o3.column.memory.size`, is set to 8M by default. This means that the table writer uses 16MB (2x8MB) of RAM per column when it receives O3 writes. Decreasing the value within the interval [128K, 8M], based on the number of columns used, may improve O3 write performance.
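For example, a `server.conf` entry that lowers the page size (the 256K value is an illustrative choice within the interval above, not a recommendation):

```ini
# Reduce the O3 memory page size from the 8M default;
# the table writer then uses 2x256K of RAM per column for O3 writes.
cairo.o3.column.memory.size=256K
```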
### CPU cores

By default, QuestDB attempts to use all available CPU cores. The guide on shared worker configuration details how to change the default setting. Assuming that the disk is not a bottleneck, the throughput of read-only queries scales proportionally with the number of available cores. As a result, a machine with more cores will provide better query performance.
### Shared workers

In QuestDB, there are worker pools which can help separate CPU load between subsystems.

caution

If you are configuring thread pool sizes manually, the total number of threads used by QuestDB should not exceed the number of available CPU cores.

The number of worker threads shared across the application can be configured, as can affinity to pin processes to specific CPUs by ID. Shared worker threads service SQL execution subsystems and, in the default configuration, every other subsystem. More information on these settings can be found on the shared worker configuration page.

QuestDB allocates CPU resources differently depending on how many CPU cores are available. This default can be overridden via configuration. We recommend at least 4 cores for basic workloads and 16 for advanced ones.
#### 8 CPU cores or less

QuestDB will configure a shared worker pool to handle everything except the InfluxDB line protocol (ILP) writer, which gets a dedicated CPU core. The worker count is calculated as follows:
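A reconstruction of the calculation (the exact expression is an assumption, derived from the dedicated ILP writer core described above):

```text
# one core is reserved for the ILP writer
shared workers = available cores - 1
```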
The minimal size of the shared worker pool is 2, even on a single-core machine.
#### 16 CPU cores or less

The ILP I/O worker pool is configured to use 2 CPU cores to speed up ingestion, and the ILP writer uses 1 core. The shared worker pool handles everything else and is configured using this formula:
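A reconstruction of the formula (the constants are assumptions inferred from the 16-core example below):

```text
shared workers = available cores - 2 (ILP I/O) - 1 (ILP writer) - 1
```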
For example, with 16 cores, the shared pool will have 12 threads:
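Applying the formula sketched above:

```text
16 - 2 - 1 - 1 = 12
```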
#### 17 CPU cores and more

The ILP I/O worker pool is configured to use 6 CPU cores to speed up ingestion, and the ILP writer uses 1 core. The shared worker pool handles everything else and is configured using this formula:
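A reconstruction of the formula (the constants are assumptions inferred from the 32-core example below):

```text
shared workers = available cores - 6 (ILP I/O) - 1 (ILP writer) - 2
```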
For example, with 32 cores, the shared pool will have 23 threads:
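Applying the formula sketched above:

```text
32 - 6 - 1 - 2 = 23
```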
### Writer page size

The default page size for writers is 16MB. In cases where there are a large number of small tables, using 16MB to write a maximum of 1MB of data, for example, is a waste of OS resources. To change the default value, set the `cairo.writer.data.append.page.size` value in `server.conf`:
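For instance (the 1MB value is an illustrative choice matching the example above):

```ini
# Lower the writer page size for deployments with many small tables
cairo.writer.data.append.page.size=1M
```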
### InfluxDB over TCP

We have a documentation page dedicated to capacity planning for ILP ingestion.
### InfluxDB over UDP

note

The UDP receiver is deprecated since QuestDB version 6.5.2. We recommend the TCP receiver instead.

Given a single client sending data to QuestDB via InfluxDB line protocol over UDP, the following configuration can be applied, which dedicates a thread to the UDP writer and specifies a CPU core by ID:
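A sketch of the relevant `server.conf` settings (the core ID of 1 is an arbitrary example):

```ini
# run the UDP receiver on its own thread
line.udp.own.thread=true
# pin that thread to CPU core 1
line.udp.own.thread.affinity=1
```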
### Postgres

Given clients sending data to QuestDB via the Postgres interface, the following configuration can be applied, which sets a dedicated worker and pins it with affinity to a CPU by core ID:
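A sketch of the corresponding `server.conf` settings (the single worker and core ID are example values):

```ini
# dedicate one worker thread to the PGWire interface
pg.worker.count=1
# pin it to CPU core 1
pg.worker.affinity=1
```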
## Network Configuration

For the InfluxDB line, PGWire and HTTP protocols, there is a set of configuration settings relating to the number of clients that may connect, the internal I/O capacity, and connection timeout settings. These settings are configured in the `server.conf` file in the format:
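A sketch of the key pattern (assumed from the naming of the settings listed below):

```ini
<protocol>.net.connection.<config>
```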
Where `<protocol>` is one of:

- `http` - HTTP connections
- `pg` - PGWire protocol
- `line.tcp` - InfluxDB line protocol over TCP

And `<config>` is one of the following settings:
| key     | description |
| ------- | ----------- |
| limit   | The number of simultaneous connections to the server. This value is intended to control server memory consumption. |
| timeout | Connection idle timeout in milliseconds. Connections are closed by the server when this timeout lapses. |
| hint    | Applicable only for Windows, where the TCP backlog limit is hit. For example, Windows 10 allows a maximum of 200 connections. Even if limit is set higher, without hint=true it won't be possible to connect more than 200 connections. |
| sndbuf  | Maximum send buffer size on each TCP socket. If the value is -1, the socket send buffer remains unchanged from the OS default. |
| rcvbuf  | Maximum receive buffer size on each TCP socket. If the value is -1, the socket receive buffer remains unchanged from the OS default. |
For example, this is a configuration for Linux with a relatively low number of concurrent connections:
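A sketch of such a configuration (the specific values are illustrative assumptions):

```ini
# allow a maximum of 64 simultaneous ILP connections
line.tcp.net.connection.limit=64
# close idle connections after 5 minutes
line.tcp.net.connection.timeout=300000
# keep the OS default socket buffer sizes
line.tcp.net.connection.sndbuf=-1
line.tcp.net.connection.rcvbuf=-1
```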
Let's assume you would like to configure the InfluxDB line protocol for a large number of concurrent connections on Windows:
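A sketch under that assumption (the limit of 1000 is an example value; hint=true works around the Windows TCP backlog limit described in the table above):

```ini
# allow many simultaneous ILP connections
line.tcp.net.connection.limit=1000
# required on Windows to exceed the ~200-connection TCP backlog limit
line.tcp.net.connection.hint=true
```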
For reference on the defaults of the `http` and `pg` protocols, refer to the server configuration page.
### Pooled connection

Connection pooling should be used for any production-ready use of PGWire or ILP over TCP.

The maximum number of pooled connections is configurable: `pg.connection.pool.capacity` for PGWire and `line.tcp.connection.pool.capacity` for ILP over TCP. The default number of connections for both interfaces is 64. Users should avoid using too many connections.
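For example, to lower the pool capacity for both interfaces (the value of 32 is illustrative):

```ini
# cap pooled connections for PGWire and ILP over TCP
pg.connection.pool.capacity=32
line.tcp.connection.pool.capacity=32
```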
## OS configuration

This section describes approaches for changing system settings on the host QuestDB is running on when system limits are reached due to maximum open files or virtual memory areas. QuestDB passes operating system errors to its logs unchanged, and as such, changing the following system settings should only be done in response to such OS errors.
### Maximum open files

The storage model of QuestDB has the benefit that most data structures relate closely to the file system, with columnar data being stored in its own `.d` file per partition. In edge cases with extremely large tables, frequent out-of-order ingestion, or a high number of table partitions, the number of open files may hit a user or system-wide maximum limit and can cause unpredictable behavior.
The following commands allow for checking current user and system limits for maximum number of open files:
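On Linux, these are commonly inspected as follows:

```bash
# current user limits (soft and hard) for open files
ulimit -Sn
ulimit -Hn
# system-wide limit
cat /proc/sys/fs/file-max
```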
Setting system-wide open file limit:
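For example (262144 is an illustrative value):

```bash
# set the system-wide open file limit for the running system
sysctl -w fs.file-max=262144
```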
To increase this setting and have the configuration persistent, the limit on the number of concurrently open files can be changed in `/etc/sysctl.conf`:
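For example (same illustrative value as above):

```ini
# /etc/sysctl.conf
fs.file-max=262144
```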
To confirm that this value has been correctly configured, reload `sysctl` and check the current value:
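A typical way to do this:

```bash
# reload settings from /etc/sysctl.conf
sysctl -p
# print the current value
sysctl fs.file-max
```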
### Max virtual memory areas limit

If the host machine has insufficient limits on map areas, this may result in out-of-memory exceptions. To increase this value and have the configuration persistent, mapped memory area limits can be changed in `/etc/sysctl.conf`:
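For example (262144 is an illustrative value):

```ini
# /etc/sysctl.conf
vm.max_map_count=262144
```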
Each mapped area needs kernel memory, and it is recommended to have around 128 bytes of memory available per map count.
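As with the open files limit, the change can be reloaded and verified:

```bash
sysctl -p
sysctl vm.max_map_count
```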