
Frequent Re-Connections Improved by the Connection Manager

One anti-pattern in PostgreSQL is frequent reconnection, such as opening a new connection for each statement. The same applies to YugabyteDB, which reuses the PostgreSQL code for its query layer, unless you enable the database-resident connection pool (the Connection Manager).
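To make the anti-pattern concrete, here is a minimal sketch, assuming ysqlsh (YugabyteDB's psql equivalent) in the container's postgres/bin directory: the first loop opens a new connection, and therefore a new backend process, for every statement, while the second runs all statements over a single connection.

# anti-pattern: a new connection (and a new backend process) per statement
for i in 1 2 3; do
  $PWD/postgres/bin/ysqlsh -h "$(hostname)" -c "select $i"
done

# better: a single connection for all statements
$PWD/postgres/bin/ysqlsh -h "$(hostname)" <<'SQL'
select 1;
select 2;
select 3;
SQL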

Static Connection

Here is an example without the YugabyteDB Connection Manager enabled, where each connection gets its own backend process.

I ran a single static connection in a Docker container to set a baseline.

# start a single-node YugabyteDB cluster in a container
docker run --rm -it yugabytedb/yugabyte:2.23.0.0-b710 bash
yugabyted start

# ysql_bench is YugabyteDB's port of pgbench
alias pgbench="$PWD/postgres/bin/ysql_bench -h $(hostname)"

# check whether this session goes through the Connection Manager
yugabyted connect ysql <<<'show yb_is_client_ysqlconnmgr'

# initialize: drop and create the tables, primary and foreign keys, generate data
PGOPTIONS="-c client_min_messages=error" pgbench -iIdtpfg

# baseline: one client, select-only, 60 seconds, progress report every 10 seconds
pgbench -c 1 -nS -T 60 -P 10


My small lab processes about 1,500 transactions per second with a single client that stays connected.

[root@cbc7d018a6ea yugabyte]# pgbench -c 1 -nS -T 60 -P 10
progress: 10.0 s, 1582.5 tps, lat 0.631 ms stddev 0.314
progress: 20.0 s, 1602.1 tps, lat 0.624 ms stddev 0.057
progress: 30.0 s, 1548.8 tps, lat 0.646 ms stddev 0.054
progress: 40.0 s, 1530.6 tps, lat 0.653 ms stddev 0.048
progress: 50.0 s, 1546.6 tps, lat 0.647 ms stddev 0.046
progress: 60.0 s, 1548.3 tps, lat 0.646 ms stddev 0.053
transaction type: <builtin: select only>
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
batch size: 1024
duration: 60 s
number of transactions actually processed: 93590
maximum number of tries: 1
latency average = 0.641 ms
latency stddev = 0.138 ms
tps = 1559.814204 (including connections establishing)
tps = 1560.329182 (excluding connections establishing)

Dynamic (re)Connections

With --connect (-C), pgbench establishes a new connection for each transaction:

pgbench -C -c 1 -nS -T 60 -P 10

In this lab, with a reconnection for each transaction, throughput dropped from about 1,500 to 20 transactions per second.

[root@cbc7d018a6ea yugabyte]# pgbench -C -c 1 -nS -T 60 -P 10
progress: 10.0 s, 19.7 tps, lat 33.632 ms stddev 1.838
progress: 20.0 s, 19.7 tps, lat 33.639 ms stddev 1.776
progress: 30.0 s, 19.8 tps, lat 33.591 ms stddev 1.884
progress: 40.0 s, 19.5 tps, lat 34.125 ms stddev 1.576
progress: 50.0 s, 19.8 tps, lat 33.727 ms stddev 1.974
progress: 60.0 s, 19.8 tps, lat 33.584 ms stddev 1.877
transaction type: <builtin: select only>
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
batch size: 1024
duration: 60 s
number of transactions actually processed: 1184
maximum number of tries: 1
latency average = 33.715 ms
latency stddev = 1.835 ms
tps = 19.717604 (including connections establishing)
tps = 29.626280 (excluding connections establishing)
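The per-transaction latency tells the story: the query itself takes about 0.6 ms, as in the baseline, so nearly all of the 33 ms goes into establishing the connection, which involves forking a new PostgreSQL backend and initializing its caches. You can get a rough feel for this cost, assuming the same container layout as above, by timing a single connect-and-query:

# a rough measure of connect + query cost for a single statement
time $PWD/postgres/bin/ysqlsh -h "$(hostname)" -c 'select 1'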

The solution is to avoid this anti-pattern and keep connections static. However, that is not always easy, especially with microservices, where each service has its own application-side connection pool that must grow and shrink dynamically to avoid holding too many idle connections.
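Idle connections are not free either: without a database-resident pool, each one keeps a dedicated backend process alive. A quick way to see them in this lab is to count idle sessions in pg_stat_activity:

# each idle connection holds a backend process; count the idle sessions
yugabyted connect ysql <<<"select count(*) from pg_stat_activity where state = 'idle'"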

Connection Manager

A database-resident connection pool reduces the impact by reusing existing physical connections for logical reconnections. I restarted YugabyteDB with enable_ysql_conn_mgr set to true and ran the same pgbench -C.

# restart the cluster with the Connection Manager (a preview flag in this version)
yugabyted stop
yugabyted start --tserver_flags=enable_ysql_conn_mgr=true,allowed_preview_flags_csv={enable_ysql_conn_mgr}

# same workload: a reconnection for each transaction
pgbench -C -c 1 -nS -T 60 -P 10

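Rerunning the check from the setup confirms whether sessions now go through the Connection Manager:

# this should now indicate the session is served by the Connection Manager
yugabyted connect ysql <<<'show yb_is_client_ysqlconnmgr'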

The throughput cannot match the static-connection baseline because each reconnection still pays the protocol round-trips. However, in this small lab, throughput increased more than tenfold thanks to the connection pool.

[root@cbc7d018a6ea yugabyte]# pgbench -C -c 1 -nS -T 60 -P 10
progress: 10.0 s, 318.4 tps, lat 0.854 ms stddev 0.056
progress: 20.0 s, 317.0 tps, lat 0.864 ms stddev 0.055
progress: 30.0 s, 316.6 tps, lat 0.851 ms stddev 0.054
progress: 40.0 s, 316.6 tps, lat 0.866 ms stddev 0.060
progress: 50.0 s, 262.1 tps, lat 0.857 ms stddev 0.058
progress: 60.0 s, 209.3 tps, lat 0.857 ms stddev 0.063
transaction type: <builtin: select only>
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
batch size: 1024
duration: 60 s
number of transactions actually processed: 17402
maximum number of tries: 1
latency average = 0.858 ms
latency stddev = 0.058 ms
tps = 290.011636 (including connections establishing)
tps = 1119.051187 (excluding connections establishing)


The results are similar with prepared statements (-M prepared):

[root@cbc7d018a6ea yugabyte]# pgbench -M prepared -C -c 1 -nS -T 60 -P 10
progress: 10.0 s, 299.9 tps, lat 1.072 ms stddev 0.116
progress: 20.0 s, 296.6 tps, lat 1.084 ms stddev 0.108
progress: 30.0 s, 294.5 tps, lat 1.091 ms stddev 0.102
progress: 40.0 s, 297.3 tps, lat 1.085 ms stddev 0.133
progress: 50.0 s, 270.7 tps, lat 1.117 ms stddev 0.110
progress: 60.0 s, 206.4 tps, lat 1.079 ms stddev 0.122
transaction type: <builtin: select only>
scaling factor: 1
query mode: prepared
number of clients: 1
number of threads: 1
batch size: 1024
duration: 60 s
number of transactions actually processed: 16655
maximum number of tries: 1
latency average = 1.088 ms
latency stddev = 0.116 ms
tps = 277.577689 (including connections establishing)
tps = 890.956835 (excluding connections establishing)

The YugabyteDB Connection Manager solves two issues by embedding a connection pool in the database: it reduces the resources consumed by idle connections, and it minimizes connection overhead, since logical connections can pick up a physical connection from the pool.
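A simple way to observe the pooling while the Connection Manager is enabled: successive logical connections can be served by the same physical backend, which you can check by comparing their backend PIDs.

# with the Connection Manager, two separate logical connections
# may report the same physical backend process
yugabyted connect ysql <<<'select pg_backend_pid()'
yugabyted connect ysql <<<'select pg_backend_pid()'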
