Compression & PostgreSQL columnstore indexes


A few weeks ago, we wrote an introduction to columnstore indexing in PostgreSQL. Columnstore indexes are a new feature of the Swarm64 DA 5.0 PostgreSQL extension, which accelerates PostgreSQL query performance by greatly reducing I/O and time required to scan and filter table data.

The columnstore index compression parameter

One of the parameters of the columnstore index is a compression setting. It controls which compression algorithm is used and how aggressively Swarm64 DA compresses the data in the columnstore index. We currently support two algorithms: zstd and lz4. For example:

CREATE INDEX my_columnstore_index
ON my_table
USING columnstore(col_a, col_b, col_c)
WITH(compression_type='zstd', compression_level=5);

The example above spells out the default settings explicitly. That is, when you create a columnstore index without the WITH clause, it will use exactly these settings.

The compression_level parameter is a numeric setting. It ranges from 1 to 9, where 1 is minimal compression and 9 is the most aggressive (maximum) compression.
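
Both parameters can be set independently. As a sketch of the two extremes (my_table and its columns are placeholders, and the two variants are shown as separate indexes purely for illustration; the syntax follows the example above):

-- Fast to build, larger on disk: lz4 at the lowest level.
CREATE INDEX my_index_fast
ON my_table
USING columnstore(col_a, col_b, col_c)
WITH(compression_type='lz4', compression_level=1);

-- Smallest on disk, slower to build: zstd at the highest level.
CREATE INDEX my_index_small
ON my_table
USING columnstore(col_a, col_b, col_c)
WITH(compression_type='zstd', compression_level=9);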

What’s the trade-off?

TL;DR – as a rule of thumb you can assume the following:

  • zstd achieves higher compression ratios than lz4.
  • zstd utilizes the CPU more for compression and decompression than lz4.
  • Higher compression levels can further reduce I/O transfer sizes, but the middle compression level is typically a good choice. (A quick way to compare the two algorithms on your own data is sketched right after this list.)
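
An easy way to find out where your own data falls on this trade-off is to build the same index under both algorithms and compare build time and size. A minimal sketch, assuming a table my_table with columns col_a and col_b (\timing and pg_relation_size are standard psql/PostgreSQL tools):

\timing on

-- Build with lz4 first; note the build time psql reports.
CREATE INDEX my_columnstore_index
ON my_table
USING columnstore(col_a, col_b)
WITH(compression_type='lz4', compression_level=5);

SELECT pg_size_pretty(pg_relation_size('my_columnstore_index'));

-- Rebuild with zstd and compare time and size.
DROP INDEX my_columnstore_index;

CREATE INDEX my_columnstore_index
ON my_table
USING columnstore(col_a, col_b)
WITH(compression_type='zstd', compression_level=5);

SELECT pg_size_pretty(pg_relation_size('my_columnstore_index'));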

In order to help you tune the compression setting for your columnstore indexes, we ran a little test that shows how low vs. medium vs. high compression settings affect the following:

  • Overall index creation and query performance benchmark run time
  • Index footprint and compression ratio
  • Disk transfer and CPU utilization

The test – TPC-H 1TB

Since we have a 1TB (SF1000) TPC-H setup in our lab, we decided to run this benchmark with variations on the columnstore indexes. We varied the compression algorithm, and for each algorithm we tested with compression levels 1, 5, and 9.

Housekeeping first. We measured all of this on an HPE ProLiant Gen9 with 2x E5-2697 v3, 256GB of RAM and 14x HPE 800GB Mixed Use SAS SSDs.

The TPC-H benchmark consists of 8 tables in total, of which the two biggest, “orders” and “lineitem”, contain 1.5 billion and 6 billion rows, respectively. The orders table is defined as follows:

CREATE TABLE orders (
    o_orderkey bigint NOT NULL
  , o_custkey int NOT NULL
  , o_orderstatus "char" NOT NULL
  , o_totalprice double precision NOT NULL
  , o_orderdate date NOT NULL
  , o_orderpriority character varying(15) NOT NULL
  , o_clerk character varying(15) NOT NULL
  , o_shippriority int NOT NULL
  , o_comment character varying(79) NOT NULL
);

The columnstore index for each of these tables is defined over the whole table, i.e., all of its columns. For example, the columnstore index for the orders table looks like this:

CREATE INDEX orders_cache ON orders USING columnstore (
    o_orderkey
  , o_custkey
  , o_orderstatus
  , o_totalprice
  , o_orderdate
  , o_orderpriority
  , o_clerk
  , o_shippriority
  , o_comment
);

Index statistics

We redefined the columnstore index six times in order to compare the two compression algorithms at three compression levels each: 1 (low), 5 (medium), and 9 (high). For reference: the raw table size is 1,237GB (1.2TB).

Compression Type – Level    zstd – 1   zstd – 5   zstd – 9   lz4 – 1   lz4 – 5   lz4 – 9
Index creation time [s]     1031.19    1737.26    3519.48    1503.85   1835.73   3069.06
Index size [GB]             274        249        234        358       338       330
Compression factor          4.5        5.0        5.3        3.5       3.7       3.8

As you can see, zstd compresses quite well and reaches a compression factor of up to 5.3x, while lz4 tops out at a noticeably lower 3.8x. However, index creation times also vary considerably and shoot through the roof at maximum compression. As usual, your mileage may vary, but the default (middle) compression level of 5 is a perfectly valid choice: it typically provides the best balance between index creation time and resulting compression factor.
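
If you want to verify the compression factor on your own tables, the built-in PostgreSQL size functions are all you need. A small sketch using the orders table and its orders_cache index from above (note that the 1,237GB and 274GB figures in our table are totals across all TPC-H tables, so per-table ratios will differ):

-- Heap size vs. columnstore index size; the ratio is the
-- per-table compression factor.
SELECT pg_size_pretty(pg_table_size('orders')) AS table_size,
       pg_size_pretty(pg_relation_size('orders_cache')) AS index_size,
       round(pg_table_size('orders')::numeric
             / pg_relation_size('orders_cache'), 1) AS compression_factor;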

Benchmark statistics

For the six index variations above, we ran the complete TPC-H benchmark and collected key metrics. These include total benchmark runtime (to execute all 22 TPC-H queries), average CPU load, and transferred data based on the average I/O throughput multiplied by benchmark runtime. Here are the results:

Compression                PG native   zstd – 1   zstd – 5   zstd – 9   lz4 – 1   lz4 – 5   lz4 – 9
Total runtime [seconds]    8779        1697       1706       1682       1556      1535      1532
Transferred data [TB]      15.6        1.2        1.1        1.1        1.5       1.5       1.4

Let’s look at the obvious result first: with zstd, less data is transferred. At the middle compression level, zstd moves about 1.4x less data than lz4 (1.1TB vs. 1.5TB) and a whopping 14x less than native PostgreSQL (1.1TB vs. 15.6TB). This is especially important when your workload runs in the cloud and I/O bandwidth is limited: the less data you have to transfer, the better.

The really big surprise, however, is the total benchmark runtime: lz4 comes out ahead. On this machine configuration, CPU resources are more valuable when spent on the query itself. In other words: lz4 is cheaper to decompress, so the CPU can be used to crunch numbers faster. It also became clear that the compression level itself does not influence the runtime much: if your storage subsystem is fast enough, it does not matter whether you have to transfer a couple of GBs more.
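
To see how this CPU vs. I/O trade-off plays out for a single query on your own system, standard PostgreSQL tooling goes a long way. A sketch against the orders table (the query is an illustrative aggregate, not one of the official TPC-H queries):

-- BUFFERS reports how many 8kB blocks the query read; together with
-- the runtime this shows whether a query is CPU- or I/O-bound.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o_orderpriority, count(*)
FROM orders
WHERE o_orderdate >= DATE '1995-01-01'
  AND o_orderdate < DATE '1995-04-01'
GROUP BY o_orderpriority;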

Which compression algorithm and level should I use?

Let us rephrase the question and ask instead: do you want to save CPU or I/O? With this question in mind, the choice of compression algorithm looks as follows.

Pick lz4 if you want to reduce CPU load, or if you are short on cores and want to spend them on the query itself. This is the typical choice when your database runs on-premises or you are not paying (much) for storage and I/O transfer: a couple of extra GBs of storage don’t hurt, and neither do a couple of hundred extra GBs transferred.

Choose zstd if you want to reduce storage and I/O transfer (cost). In our case, it provided an additional 1.4x reduction in storage (lz4: 330GB, zstd: 234GB) and a 1.4x reduction in data transfer. Keep in mind, however, that this comes at the cost of extra CPU cycles which are then unavailable for query processing.

Regarding compression levels: keep it simple and go with the middle. Level 5 is perfectly fine for the majority of cases.

Summary

The choice of compression algorithm is a useful tuning lever for the powerful new columnstore index; by contrast, the choice of compression level does not make as big a difference in our test. Our benchmarking experiment shows that the compression algorithm can have the following effects:

  • For the TPC-H case, we measured compression factors on individual tables ranging from 2.4x to 8.4x. Depending on your data, you may experience even higher compression factors.
  • We measured a runtime difference of 1.1x comparing lz4 to zstd, caused by the decompression overhead that zstd introduces on our test system. On systems with lower I/O bandwidth, the ratio will likely look very different and favor zstd.
  • Index creation runtimes grow very long at high compression levels, while query runtime barely benefits.

Make your choice between lz4 and zstd easy: pick CPU or I/O, depending on which you need to optimize more. The rule of thumb is: lz4 to save CPU, zstd to save I/O.

Your mileage might vary. We invite you to give Swarm64 DA a try and give us your feedback.