
Application Modernization • July 27, 2023

QUANTIPHIng AlloyDB’s Columnar Engine

In our first blog on AlloyDB for PostgreSQL, we discussed what sets AlloyDB apart as a database service and how businesses can reimagine their enterprise database modernization with AlloyDB. Today, we look at AlloyDB’s vectorized columnar execution engine and, through a series of tests, examine how it drives analytical acceleration and query performance.

Row vs Column: Orientation Matters

Suppose we need to look up information about a particular individual. For this, it is optimal for the database to be row-oriented and to store data in fixed-size blocks: row-oriented storage suits access patterns that request an entire row at once.

Analytical queries, however, have a different access pattern. To answer one from a row-oriented database, every column of every row must be scanned, even though most of the data is irrelevant to the query. AlloyDB instead makes it possible to analyze the values of a single column at a time, enabling faster responses to analytical queries.
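
To make the contrast concrete, here is a minimal sketch against the TPC-H-style lineitem table used in our tests below (the key values are illustrative). The first query is a transactional point lookup that row orientation serves well; the second touches a single column across every row, which is exactly the pattern a columnar layout accelerates.

-- Transactional pattern: fetch one complete row.
-- Row-oriented storage answers this from a handful of blocks.
SELECT *
FROM lineitem
WHERE l_orderkey = 12345 AND l_linenumber = 1;

-- Analytical pattern: aggregate one column across all rows.
-- A row store must still read every column of every row here,
-- while a column store reads only l_extendedprice.
SELECT AVG(l_extendedprice)
FROM lineitem;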

AlloyDB Columnar Engine

Google’s large-scale data analytics services offer fast and scalable query processing through columnar storage, cutting-edge query processing techniques, and hardware acceleration. AlloyDB delivers the best of Google’s technological advancements directly into a PostgreSQL-compatible operational database. AlloyDB blends a row-based format for transactional processing with a columnar scan and execution engine for analytical processing. 

The columnar engine is a state-of-the-art, vectorized query processing engine that efficiently processes large volumes of columnar data by optimally leveraging system caches and the vector processing instructions offered by modern CPUs. Algorithmic optimizations accelerate query processing, and column-specific metadata speeds up scans and allows aggregation operations to be performed directly on the columnar data without first materializing scan results. AlloyDB’s hybrid execution brings together both column- and row-oriented query processing techniques.

AlloyDB’s columnar engine determines the data format and metadata for each column by evaluating both the column’s content and the type of query operations run against it. That metadata is harnessed to encode data values efficiently and to accelerate query processing. Additionally, the columnar engine can apply compression techniques to make optimal use of memory and further speed up queries.
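
Columns can also be placed in the columnar store manually. The sketch below uses AlloyDB’s documented google_columnar_engine_add() helper and the g_columnar_columns view; exact signatures and view definitions may vary across AlloyDB versions.

-- Manually load a table's columns into the columnar store.
SELECT google_columnar_engine_add('lineitem');

-- Inspect which columns are currently held in columnar format.
SELECT * FROM g_columnar_columns;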

Query Performance

Over the past few months, Quantiphi conducted a series of performance tests to evaluate AlloyDB’s accelerated query processing capabilities. For these tests we used a retail dataset, focusing on discount and quantity KPIs to showcase the efficiency of query responses. Such a dataset yields tangible analytical insights, for example monthly order volumes or the minimum number of orders required to offer discounts.

For the purpose of Quantiphi’s testing, 30GB and 100GB datasets were considered. The objective was to measure the execution speed of two distinct queries, each run with and without the columnar engine enabled.

Testing and Configuration:

The TPROC-H benchmarking dataset from HammerDB was ingested for this test via the HammerDB CLI. The schema was built with scale factors of 30 and 100 and loaded into two separate AlloyDB instances.

For both the 30GB and 100GB datasets:
AlloyDB cluster with a primary instance and a read pool instance
Primary machine type: 16 vCPUs, 128GB memory
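
As a quick sanity check after ingestion (not part of the benchmark itself), standard PostgreSQL sizing functions can confirm that the loaded volume roughly matches the intended scale factor:

-- Approximate on-disk size of the database and of its largest table.
SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;
SELECT pg_size_pretty(pg_total_relation_size('lineitem')) AS lineitem_size;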

Flags (on the columnar-engine instance):

alloydb.enable_pgaudit = on
pgaudit.log = all
alloydb.enable_pg_hint_plan = on
alloydb.enable_auto_explain = on
max_connections = 2000
google_columnar_engine.enabled = on
google_columnar_engine.memory_size_in_mb = 40960
google_columnar_engine.enable_columnar_recommendation = on
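
Because instance-level flags such as google_columnar_engine.enabled take effect only after the instance restarts, it is worth confirming the running values before testing. Any of these parameters can be checked with a standard SHOW:

-- Confirm the columnar engine flags on the running instance.
SHOW google_columnar_engine.enabled;
SHOW google_columnar_engine.memory_size_in_mb;
SHOW google_columnar_engine.enable_columnar_recommendation;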


Scenario 1

The objective of the query is to find the sum of the extended price from the lineitem table and the total price from the orders table, where the discount is 1% or 2% and the ship date falls on one of two consecutive days.

Query

EXPLAIN (ANALYZE, COSTS OFF, SETTINGS, BUFFERS, TIMING, SUMMARY)
SELECT l_shipdate, l_discount, l_shipinstruct,
       TO_CHAR(SUM(l_extendedprice), '999,999,999,999.00') AS lineitemtotal,
       TO_CHAR(SUM(o_totalprice), '999,999,999,999.00') AS ordertotal
FROM lineitem, orders
WHERE l_discount IN (0.01, 0.02)
  AND (l_shipdate = '1993-11-26' OR l_shipdate = '1993-11-27')
  AND o_orderkey = l_orderkey
GROUP BY l_shipdate, l_discount, l_shipinstruct
ORDER BY lineitemtotal DESC;


Results

30GB Dataset
Without columnar engine:
Planning time: 0.292 ms
Execution time: 65350 ms
Total time: 65350.292 ms

With columnar engine:
Planning time: 2.799 ms
Execution time: 9468 ms
Total time: 9470.799 ms
We found that the execution time was ~7 times faster after enabling the columnar engine.

100GB Dataset
Without columnar engine:
Planning time: 0.318 ms
Execution time: 459693.716 ms
Total time: 459694.034 ms

With columnar engine:
Planning time: 10.295 ms
Execution time: 275.720 ms
Total time: 286.015 ms
We found that the execution time was more than 200 times faster after enabling the columnar engine.

The detailed query plans are provided in the appendix below.

Scenario 2

The objective of the query is to count the line items shipped within a particular 24-hour window.

Query

EXPLAIN (ANALYZE, COSTS OFF, SETTINGS, BUFFERS, TIMING, SUMMARY)
SELECT COUNT(*)
FROM lineitem
WHERE l_shipdate BETWEEN '1995-11-24 11:00:00' AND '1995-11-25 11:00:00';


Results

30GB Dataset
Without columnar engine:
Planning time: 0.176 ms
Execution time: 51874 ms
Total time: 51874.176 ms

With columnar engine:
Planning time: 2.559 ms
Execution time: 31 ms
Total time: 33.559 ms
We found that the execution time was more than 300 times faster after enabling the columnar engine.

100GB Dataset
Without columnar engine:
Planning time: 0.099 ms
Execution time: 424979.914 ms
Total time: 428427.207 ms

With columnar engine:
Planning time: 11.452 ms
Execution time: 87.558 ms
Total time: 99.01 ms
We found that the execution time was more than 800 times faster after enabling the columnar engine.

The detailed query plans are provided in the appendix below.

Conclusion 

During Quantiphi’s tests, we noticed that when a query runs without the columnar engine, irrelevant rows are removed by the ordinary row-store filter after being read. With the columnar engine enabled, those rows are instead eliminated during the scan itself by the columnar filter, so the scan touches only the relevant rows and returns results quickly. This filtering works hand in hand with the recommendation engine, which processes column metadata to enable intelligent filtering.
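
With the recommendation flag enabled, as in our configuration, the recommendation engine can also be consulted directly. Below is a minimal sketch using AlloyDB’s documented google_columnar_engine_recommend() function, which suggests columns to cache based on the workload it has observed:

-- Ask the recommendation engine which columns to load into the
-- columnar store, sized to the observed query workload.
SELECT google_columnar_engine_recommend();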

As illustrated in the test results above, AlloyDB’s columnar engine delivered faster execution times, ranging from a minimum of 7X to more than 800X. This clearly demonstrates the capabilities of AlloyDB’s cutting-edge columnar engine. Note, however, that performance is contingent on factors such as the dataset used, the configuration, the filters, and so on; these results should not be treated as a formal benchmark.
Get in touch with our experts to learn how you can modernize your database management with AlloyDB.

APPENDIX

Query Plan:
Scenario 1:
30GB Dataset
Without columnar engine:

QUERY PLAN
Sort (actual time=63846.339..65350.065 rows=16 loops=1)
  Sort Key: (to_char(sum(lineitem.l_extendedprice), '999,999,999,999.00'::text)) DESC
  Sort Method: quicksort  Memory: 27kB
  Buffers: shared hit=766535 read=3209138, temp read=170084 written=183964
  I/O Timings: read=138000.999
  -> Finalize GroupAggregate (actual time=63843.850..65350.042 rows=16 loops=1)
       Group Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
       Buffers: shared hit=766535 read=3209138, temp read=170084 written=183964
       I/O Timings: read=138000.999
       -> Gather Merge (actual time=63843.674..65349.959 rows=48 loops=1)
            Workers Planned: 2
            Workers Launched: 2
            Buffers: shared hit=766535 read=3209138, temp read=170084 written=183964
            I/O Timings: read=138000.999
            -> Partial GroupAggregate (actual time=63836.517..63838.938 rows=16 loops=3)
                 Group Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
                 Buffers: shared hit=766535 read=3209138, temp read=170084 written=183964
                 I/O Timings: read=138000.999
                 -> Sort (actual time=63836.338..63836.811 rows=7314 loops=3)
                      Sort Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
                      Sort Method: quicksort  Memory: 1225kB
                      Buffers: shared hit=766535 read=3209138, temp read=170084 written=183964
                      I/O Timings: read=138000.999
                      Worker 0: Sort Method: quicksort  Memory: 1233kB
                      Worker 1: Sort Method: quicksort  Memory: 1206kB
                      -> Parallel Hash Join (actual time=62157.209..63830.934 rows=7314 loops=3)
                           Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
                           Buffers: shared hit=766393 read=3209138, temp read=170084 written=183964
                           I/O Timings: read=138000.999
                           -> Parallel Seq Scan on lineitem (actual time=7.719..55364.756 rows=7315 loops=3)
                                Filter: ((l_discount = ANY ('{0.01,0.02}'::numeric[])) AND ((l_shipdate = '1993-11-26 00:00:00'::timestamp without time zone) OR (l_shipdate = '1993-11-27 00:00:00'::timestamp without time zone)))
                                Rows Removed by Filter: 49111601
                                Buffers: shared hit=79303 read=3209138
                                I/O Timings: read=138000.999
                           -> Parallel Hash (actual time=6579.330..6579.331 rows=12277667 loops=3)
                                Buckets: 131072  Batches: 1024  Memory Usage: 2880kB
                                Buffers: shared hit=687072, temp written=171680
                                -> Parallel Seq Scan on orders (actual time=0.006..2687.244 rows=12277667 loops=3)
                                     Buffers: shared hit=687072
Settings: effective_cache_size = '52767688kB'
Planning:
  Buffers: shared hit=5
Planning Time: 0.292 ms
Execution Time: 65350.161 ms

With columnar engine:

QUERY PLAN
Sort (actual time=7759.364..9467.893 rows=16 loops=1)
  Sort Key: (to_char(sum(lineitem.l_extendedprice), '999,999,999,999.00'::text)) DESC
  Sort Method: quicksort  Memory: 27kB
  Buffers: shared hit=160, temp read=219818 written=233444
  -> Finalize GroupAggregate (actual time=7756.807..9467.870 rows=16 loops=1)
       Group Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
       Buffers: shared hit=160, temp read=219818 written=233444
       -> Gather Merge (actual time=7756.590..9467.790 rows=48 loops=1)
            Workers Planned: 2
            Workers Launched: 2
            Buffers: shared hit=160, temp read=219818 written=233444
            -> Partial GroupAggregate (actual time=7749.261..7751.837 rows=16 loops=3)
                 Group Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
                 Buffers: shared hit=160, temp read=219818 written=233444
                 -> Sort (actual time=7749.071..7749.623 rows=7314 loops=3)
                      Sort Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
                      Sort Method: quicksort  Memory: 1200kB
                      Buffers: shared hit=160, temp read=219818 written=233444
                      Worker 0: Sort Method: quicksort  Memory: 1226kB
                      Worker 1: Sort Method: quicksort  Memory: 1238kB
                      -> Parallel Hash Join (actual time=6672.808..7742.854 rows=7314 loops=3)
                           Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
                           Buffers: shared hit=18, temp read=219818 written=233444
                           -> Parallel Append (actual time=0.198..28.285 rows=7315 loops=3)
                                -> Parallel Custom Scan (columnar scan) on lineitem (actual time=0.196..27.524 rows=7315 loops=3)
                                     Filter: ((l_discount = ANY ('{0.01,0.02}'::numeric[])) AND ((l_shipdate = '1993-11-26 00:00:00'::timestamp without time zone) OR (l_shipdate = '1993-11-27 00:00:00'::timestamp without time zone)))
                                     Rows Removed by Columnar Filter: 49111601
                                     Columnar projection mode: late materialization
                                     Columnar cache search mode: native
                                -> Parallel Seq Scan on lineitem (never executed)
                                     Filter: ((l_discount = ANY ('{0.01,0.02}'::numeric[])) AND ((l_shipdate = '1993-11-26 00:00:00'::timestamp without time zone) OR (l_shipdate = '1993-11-27 00:00:00'::timestamp without time zone)))
                           -> Parallel Hash (actual time=6430.564..6430.565 rows=12277667 loops=3)
                                Buckets: 65536  Batches: 1024  Memory Usage: 2944kB
                                Buffers: temp written=221160
                                -> Parallel Append (actual time=0.042..2460.903 rows=12277667 loops=3)
Settings: effective_cache_size = '52767688kB'
Planning:
  Buffers: shared hit=5
Planning Time: 2.799 ms
Execution Time: 9468.371 ms

100GB Dataset
Without columnar engine:

QUERY PLAN
Sort (actual time=459691.788..459693.418 rows=16 loops=1)
  Sort Key: (to_char(sum(lineitem.l_extendedprice), '999,999,999,999.00'::text)) DESC
  Sort Method: quicksort  Memory: 27kB
  Buffers: shared hit=592957 read=13353307, temp read=735 written=738
  I/O Timings: read=1274471.107
  -> Finalize GroupAggregate (actual time=459680.297..459693.395 rows=16 loops=1)
       Group Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
       I/O Timings: read=1274471.107
       -> Gather Merge (actual time=459679.519..459693.317 rows=48 loops=1)
            Workers Planned: 2
            Workers Launched: 2
            Buffers: shared hit=592957 read=13353307, temp read=735 written=738
            I/O Timings: read=1274471.107
            -> Partial GroupAggregate (actual time=459672.678..459683.685 rows=16 loops=3)
                 Group Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
                 Buffers: shared hit=592957 read=13353307, temp read=735 written=738
                 I/O Timings: read=1274471.107
                 -> Sort (actual time=459671.853..459676.439 rows=29978 loops=3)
                      Sort Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
                      Sort Method: external merge  Disk: 1952kB
                      Buffers: shared hit=592957 read=13353307, temp read=735 written=738
                      I/O Timings: read=1274471.107
                      Worker 0: Sort Method: external merge  Disk: 1976kB
                      Worker 1: Sort Method: external merge  Disk: 1952kB
                      -> Nested Loop (actual time=19.337..459610.968 rows=29978 loops=3)
                           Buffers: shared hit=592915 read=13353307
                           I/O Timings: read=1274471.107
                           -> Parallel Seq Scan on lineitem (actual time=19.313..459224.337 rows=29978 loops=3)
                                Filter: ((l_discount = ANY ('{0.01,0.02}'::numeric[])) AND ((l_shipdate = '1993-11-26 00:00:00'::timestamp without time zone) OR (l_shipdate = '1993-11-27 00:00:00'::timestamp without time zone)))
                                Rows Removed by Filter: 199972202
                                Buffers: shared hit=143248 read=13353307
                                I/O Timings: read=1274471.107
                           -> Index Scan using orders_pk on orders (actual time=0.009..0.009 rows=1 loops=89933)
                                Index Cond: (o_orderkey = lineitem.l_orderkey)
                                Buffers: shared hit=449667
Planning:
  Buffers: shared hit=16
Planning Time: 0.318 ms
Execution Time: 459693.716 ms

With columnar engine:

QUERY PLAN
Sort (actual time=273.533..275.325 rows=16 loops=1)
  Sort Key: (to_char(sum(lineitem.l_extendedprice), '999,999,999,999.00'::text)) DESC
  Sort Method: quicksort  Memory: 27kB
  Buffers: shared hit=449709, temp read=737 written=740
  -> Finalize GroupAggregate (actual time=262.504..275.305 rows=16 loops=1)
       Group Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
       Buffers: shared hit=449709, temp read=737 written=740
       -> Gather Merge (actual time=261.636..275.231 rows=48 loops=1)
            Workers Planned: 2
            Workers Launched: 2
            Buffers: shared hit=449709, temp read=737 written=740
            -> Partial GroupAggregate (actual time=254.853..265.781 rows=16 loops=3)
                 Group Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
                 Buffers: shared hit=449709, temp read=737 written=740
                 -> Sort (actual time=254.046..258.591 rows=29978 loops=3)
                      Sort Key: lineitem.l_shipdate, lineitem.l_discount, lineitem.l_shipinstruct
                      Sort Method: external merge  Disk: 2016kB
                      Buffers: shared hit=449709, temp read=737 written=740
                      Worker 0: Sort Method: external merge  Disk: 1920kB
                      Worker 1: Sort Method: external merge  Disk: 1960kB
                      -> Nested Loop (actual time=0.153..233.111 rows=29978 loops=3)
                           Buffers: shared hit=449667
                           -> Parallel Append (actual time=0.130..93.941 rows=29978 loops=3)
                                -> Parallel Custom Scan (columnar scan) on lineitem (actual time=0.129..91.881 rows=29978 loops=3)
                                     Filter: ((l_discount = ANY ('{0.01,0.02}'::numeric[])) AND ((l_shipdate = '1993-11-26 00:00:00'::timestamp without time zone) OR (l_shipdate = '1993-11-27 00:00:00'::timestamp without time zone)))
                                     Rows Removed by Columnar Filter: 199972202
                                     Columnar projection mode: late materialization
                                     Columnar cache search mode: native
                                -> Parallel Seq Scan on lineitem (never executed)
                                     Filter: ((l_discount = ANY ('{0.01,0.02}'::numeric[])) AND ((l_shipdate = '1993-11-26 00:00:00'::timestamp without time zone) OR (l_shipdate = '1993-11-27 00:00:00'::timestamp without time zone)))
                           -> Index Scan using orders_pk on orders (actual time=0.004..0.004 rows=1 loops=89933)
                                Index Cond: (o_orderkey = lineitem.l_orderkey)
                                Buffers: shared hit=449667
Settings: effective_cache_size = '52767688kB'
Planning:
  Buffers: shared hit=16
Planning Time: 10.295 ms
Execution Time: 275.720 ms

Scenario 2:
30GB Dataset
Without columnar engine:

QUERY PLAN
Finalize Aggregate (actual time=51871.554..51874.774 rows=1 loops=1)
  Buffers: shared hit=79399 read=3209042
  I/O Timings: read=142116.120
  -> Gather (actual time=51871.349..51874.758 rows=3 loops=1)
       Workers Planned: 2
       Workers Launched: 2
       Buffers: shared hit=79399 read=3209042
       I/O Timings: read=142116.120
       -> Partial Aggregate (actual time=51864.047..51864.049 rows=1 loops=3)
            Buffers: shared hit=79399 read=3209042
            I/O Timings: read=142116.120
            -> Parallel Seq Scan on lineitem (actual time=4.153..51849.431 rows=20306 loops=3)
                 Filter: ((l_shipdate >= '1995-11-24 11:00:00'::timestamp without time zone) AND (l_shipdate <= '1995-11-25 11:00:00'::timestamp without time zone))
                 Rows Removed by Filter: 49098610
                 Buffers: shared hit=79399 read=3209042
                 I/O Timings: read=142116.120
Settings: effective_cache_size = '52767688kB'
Planning:
  Buffers: shared hit=2
Planning Time: 0.176 ms
Execution Time: 51874.830 ms

With columnar engine:

QUERY PLAN
Finalize Aggregate (actual time=28.464..31.412 rows=1 loops=1)
  -> Gather (actual time=28.214..31.404 rows=3 loops=1)
       Workers Planned: 2
       Workers Launched: 2
       -> Partial Aggregate (actual time=21.097..21.099 rows=1 loops=3)
            -> Parallel Append (actual time=0.159..21.087 rows=20306 loops=3)
                 -> Parallel Custom Scan (columnar scan) on lineitem (actual time=0.158..21.082 rows=20306 loops=3)
                      Filter: ((l_shipdate >= '1995-11-24 11:00:00'::timestamp without time zone) AND (l_shipdate <= '1995-11-25 11:00:00'::timestamp without time zone))
                      Rows Removed by Columnar Filter: 49098610
                      Rows Aggregated by Columnar Scan: 8989
                      Columnar cache search mode: native
                 -> Parallel Seq Scan on lineitem (never executed)
                      Filter: ((l_shipdate >= '1995-11-24 11:00:00'::timestamp without time zone) AND (l_shipdate <= '1995-11-25 11:00:00'::timestamp without time zone))
Settings: effective_cache_size = '52767688kB'
Planning:
  Buffers: shared hit=2
Planning Time: 2.559 ms
Execution Time: 31.512 ms

100GB Dataset
Without columnar engine:

QUERY PLAN
Finalize Aggregate (actual time=428422.645..428425.229 rows=1 loops=1)
  Buffers: shared hit=181822 read=13314733
  I/O Timings: read=1237823.181
  -> Gather (actual time=428422.488..428425.221 rows=3 loops=1)
       Workers Planned: 2
       Workers Launched: 2
       Buffers: shared hit=181822 read=13314733
       I/O Timings: read=1237823.181
       -> Partial Aggregate (actual time=428416.072..428416.073 rows=1 loops=3)
            Buffers: shared hit=181822 read=13314733
            I/O Timings: read=1237823.181
            -> Parallel Seq Scan on lineitem (actual time=7.392..428375.550 rows=83252 loops=3)
                 Filter: ((l_shipdate >= '1995-11-24 11:00:00'::timestamp without time zone) AND (l_shipdate <= '1995-11-25 11:00:00'::timestamp without time zone))
                 Rows Removed by Filter: 199918927
                 Buffers: shared hit=181822 read=13314733
                 I/O Timings: read=1237823.181
Settings: effective_cache_size = '52767688kB'
Planning:
  Buffers: shared hit=2
Planning Time: 0.099 ms
Execution Time: 428425.306 ms

With columnar engine:

QUERY PLAN
Finalize Aggregate (actual time=84.767..87.452 rows=1 loops=1)
  -> Gather (actual time=84.569..87.446 rows=3 loops=1)
       Workers Planned: 2
       Workers Launched: 2
       -> Partial Aggregate (actual time=78.057..78.059 rows=1 loops=3)
            -> Parallel Append (actual time=0.107..78.052 rows=83252 loops=3)
                 -> Parallel Custom Scan (columnar scan) on lineitem (actual time=0.106..78.048 rows=83252 loops=3)
                      Filter: ((l_shipdate >= '1995-11-24 11:00:00'::timestamp without time zone) AND (l_shipdate <= '1995-11-25 11:00:00'::timestamp without time zone))
                      Rows Removed by Columnar Filter: 199918927
                      Rows Aggregated by Columnar Scan: 27756
                      Columnar cache search mode: native
                 -> Parallel Seq Scan on lineitem (never executed)
                      Filter: ((l_shipdate >= '1995-11-24 11:00:00'::timestamp without time zone) AND (l_shipdate <= '1995-11-25 11:00:00'::timestamp without time zone))
Settings: effective_cache_size = '52767688kB'
Planning:
  Buffers: shared hit=2
Planning Time: 11.452 ms
Execution Time: 87.558 ms

Authors

Nishant Jain, Vishv Patel and Purna Nelapala
