In theory, in a relational database, you should be able to run any DML on a table through a view. But implementation details, limitations, or bugs may change the behavior. This means that, if you are doing tricky things through a view, you should test them carefully. Be aware that unit tests are not sufficient: you must also validate the behavior under concurrent sessions, with their isolation levels and implicit and explicit locking.
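For example, a simple single-table view is auto-updatable in PostgreSQL, and DML through it behaves exactly like DML on the base table (a minimal sketch; the names are illustrative):
create table accounts (id int primary key, balance numeric);
create view accounts_v as select id, balance from accounts;
insert into accounts_v values (1, 100);                     -- inserts into accounts
update accounts_v set balance = balance - 10 where id = 1;  -- updates accounts
delete from accounts_v where id = 1;                        -- deletes from accounts
It is when the view query gets more complex, like the UNION ALL below, that surprises can show up.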
I'm taking an example that can be reproduced in the latest PostgreSQL. I've also tested it on Amazon Aurora with PostgreSQL 14.6 compatibility, which is where the output for this blog post was taken.
Connect from psql
I connect to it with all connection information in environment variables:
$ PGHOST=pg.cluster-...eu-west-1.rds.amazonaws.com psql
psql (14.5, server 14.6)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=> select inet_server_addr();
inet_server_addr
------------------
172.31.42.200
(1 row)
postgres=> \! psql -c "select inet_server_addr()"
inet_server_addr
------------------
172.31.42.200
(1 row)
The point of the above connection test is to verify that I can connect a second session to the same database through \! psql, as the environment variables are propagated to the child process.
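For example (a sketch; the values are placeholders), exporting the connection variables before starting psql makes them visible to any child process launched with \!:
$ export PGHOST=pg.cluster-...eu-west-1.rds.amazonaws.com
$ export PGUSER=postgres PGDATABASE=postgres
$ psql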
Create small tables for the test
I create a demo table and a VIEW on it with a UNION ALL:
create table demo_table
as select generate_series(1,3) as id , 0 as value;
vacuum analyze demo_table;
CREATE OR REPLACE VIEW demo_view AS
select id,value from demo_table where mod(id,2)=0
union all
select id,value from demo_table where mod(id,2)=1
;
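As a sanity check, the view returns the three rows, with the even id coming from the first branch of the UNION ALL:
postgres=> select * from demo_view;
 id | value
----+-------
  2 |     0
  1 |     0
  3 |     0
(3 rows)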
Simple test case with two sessions
I'll run a transaction with two SELECT FOR UPDATE on the demo table, but through the VIEW. This is implicit locking, used when we want to read the latest state (rather than an MVCC snapshot). It blocks any concurrent changes on those rows, so as to guarantee that the state that was read is still the latest.
While the transaction is still ongoing, I start another session (calling psql with \!, as my connection credentials are all in the environment) to run an update of those rows. This is a very simple way to test with two sessions. I expect that it waits for the completion of the ongoing transaction (pessimistic locking):
begin transaction;
select * from demo_view for update;
\! psql -ec "UPDATE demo_table SET value = 1"
select * from demo_view for update;
rollback;
However, here is the result:
postgres=# select version();
version
-----------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 13.7 on aarch64-unknown-linux-gnu, compiled by aarch64-unknown-linux-gnu-gcc (GCC) 7.4.0, 64-bit
(1 row)
postgres=# begin transaction;
BEGIN
postgres=*# select * from demo_view for update;
id | value
----+-------
2 | 0
1 | 0
3 | 0
(3 rows)
postgres=*# \! psql -ec "UPDATE demo_table SET value = 1"
Pager usage is off.
UPDATE demo_table SET value = 1
UPDATE 3
postgres=*#
postgres=*# select * from demo_view for update;
id | value
----+-------
2 | 1
1 | 1
3 | 1
(3 rows)
postgres=*# rollback;
ROLLBACK
The UPDATE didn't wait for the rows locked by the SELECT FOR UPDATE, and the two SELECT in the same transaction do not show the same state. Obviously, this is a bug. I filed BUG #17770.
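For comparison, here is the same scenario directly on the table, where pessimistic locking behaves as expected (a sketch of the expected behavior, with a lock_timeout so that the blocked child session gives up instead of hanging forever):
begin transaction;
select * from demo_table for update;
-- this concurrent update now blocks on the row locks and times out:
\! psql -c "set lock_timeout='2s'; UPDATE demo_table SET value = 1"
rollback;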
Execution plan
The goal of this blog post is to raise awareness of what can happen, and be missed, when race conditions are not tested. The bug can be reported to the PostgreSQL mailing list or, in the case of Aurora, to the managed service support.
As mentioned, the best approach is to test all expected race conditions. Another way is to look at the execution plan, which exposes some of the implementation details and may give a clue about what can go wrong:
postgres=# explain (verbose, analyze, buffers)
select * from demo_view for update;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
LockRows (cost=0.00..2.14 rows=2 width=40) (actual time=0.011..0.018 rows=3 loops=1)
Output: "*SELECT* 1".id, "*SELECT* 1".value, (ROW("*SELECT* 1".id, "*SELECT* 1".value))
Buffers: shared hit=2
-> Append (cost=0.00..2.12 rows=2 width=40) (actual time=0.011..0.016 rows=3 loops=1)
Buffers: shared hit=2
-> Subquery Scan on "*SELECT* 1" (cost=0.00..1.05 rows=1 width=40) (actual time=0.011..0.012 rows=1 loops=1)
Output: "*SELECT* 1".id, "*SELECT* 1".value, ROW("*SELECT* 1".id, "*SELECT* 1".value)
Buffers: shared hit=1
-> Seq Scan on public.demo_table (cost=0.00..1.04 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=1)
Output: demo_table.id, demo_table.value
Filter: (mod(demo_table.id, 2) = 0)
Rows Removed by Filter: 2
Buffers: shared hit=1
-> Subquery Scan on "*SELECT* 2" (cost=0.00..1.05 rows=1 width=40) (actual time=0.002..0.003 rows=2 loops=1)
Output: "*SELECT* 2".id, "*SELECT* 2".value, ROW("*SELECT* 2".id, "*SELECT* 2".value)
Buffers: shared hit=1
-> Seq Scan on public.demo_table demo_table_1 (cost=0.00..1.04 rows=1 width=8) (actual time=0.001..0.002 rows=2 loops=1)
Output: demo_table_1.id, demo_table_1.value
Filter: (mod(demo_table_1.id, 2) = 1)
Rows Removed by Filter: 1
Buffers: shared hit=1
Query Identifier: -1810003173879754115
Planning Time: 0.083 ms
Execution Time: 0.050 ms
(24 rows)
I displayed BUFFERS, which shows that the LockRows operation is actually touching some data. I also displayed VERBOSE, which shows which columns are available to each operation: "*SELECT* 1".id, "*SELECT* 1".value, (ROW("*SELECT* 1".id, "*SELECT* 1".value)). This looks like a temporary result rather than table rows. How can LockRows find which rows to lock without a ctid?
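Indeed, the view exposes only its defined columns, and there is no ctid behind the UNION ALL result (a quick check):
select ctid from demo_view;
-- ERROR:  column "ctid" does not exist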
An execution plan when doing the same on the table rather than the view shows id, value, ctid.
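The statement is the same EXPLAIN, run on the table:
explain (verbose, analyze, buffers)
select * from demo_table for update;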
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
LockRows (cost=0.00..1.06 rows=3 width=14) (actual time=0.055..0.063 rows=3 loops=1)
Output: id, value, ctid
Buffers: shared hit=4
-> Seq Scan on public.demo_table (cost=0.00..1.03 rows=3 width=14) (actual time=0.007..0.008 rows=3 loops=1)
Output: id, value, ctid
Buffers: shared hit=1
Query Identifier: -5183708313868986861
Planning:
Buffers: shared hit=3
Planning Time: 0.071 ms
Execution Time: 0.083 ms
(11 rows)
I'm convinced that, in addition to tests covering the expected race conditions, the execution plan should be verified for non-trivial queries. You need to trust the query execution before releasing it to production, because this kind of anomaly is rare, the wrong result is not immediately visible, and the troubleshooting is very difficult. It is also good to think about how it could work: the FOR UPDATE applies to a table, not to the result, and a UNION ALL can come from many tables.
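If the application needs this guarantee today, a possible workaround (my sketch, not an official fix) is to lock the rows on the base table before reading through the view, within the same transaction:
begin transaction;
-- explicit row locks on the base table, so concurrent updates must wait:
select id from demo_table for update;
-- the read through the view is now protected until commit/rollback:
select * from demo_view;
commit;
Note that this protects only the existing rows: FOR UPDATE does not block concurrent inserts.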
I got the heads-up on this case from a colleague checking an optimization suggested by a user. He did the right thing: checking the lock behavior. As he is implementing the locking mechanisms in YugabyteDB, you can see the high quality of engineering there. The PostgreSQL compatibility of YugabyteDB, like Amazon Aurora's, inherits the same behavior, so be careful and avoid SELECT FOR UPDATE on UNION ALL views.