Data Residency Compliance: PostgreSQL RLS to access YugabyteDB local region

Franck Pachot for YugabyteDB Distributed PostgreSQL Database

YugabyteDB can provide geo-partitioning functionality to PostgreSQL applications. This means it allows you to partition your data based on specific geographic locations or tenant identifiers. The partitions are assigned to tablespaces and restricted to a single region through placement blocks. By utilizing proven PostgreSQL features such as partitions, tablespaces, and Row-Level Security, your data can be appropriately distributed across regions and replicated within their availability zones. This ensures compliance with data residency constraints and guarantees the physical placement of data using reliable and proven technology.

The application is generally responsible for setting the partitioning column value whenever new rows are inserted and including a condition for queries. However, suppose you would like to make this process more transparent to the application. In that case, a solution involves using a session variable to set the default value and establishing a Row-Level Security (RLS) policy. I have provided an example using pgbench in a previous post: hyper-scale multi-tenant for SaaS: an example with pgbench
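
As a quick picture of that session-variable approach, here is a minimal sketch (the app.tenant_id variable and the orders table are illustrative names, not part of this demo):

-- the application sets a session variable once after connecting:
set app.tenant_id = 'tenant_42';
-- the column default picks it up, so inserts don't need to specify it:
create table orders (
 tenant_id text default current_setting('app.tenant_id')
 , id bigserial
 , primary key (tenant_id, id)
);
-- the RLS policy filters reads and writes to the session's tenant:
create policy tenant_isolation on orders to public
 using ( tenant_id = current_setting('app.tenant_id') );
alter table orders enable row level security;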

Instead of manually selecting a partition to query based on data residency and latency concerns, consider two solutions using YugabyteDB functions that use the information available from your application's connection.

  • The YugabyteDB function yb_is_local_table filters the rows of a table or partition to read only those in the local region. Simply provide the OID of a table or partition as an argument, and it returns true only if the tablespace is placed within the same region as the user's connection.
  • The YugabyteDB function yb_server_region returns the name of the region, which you can map, in a SQL function, to the partition key values that belong to that region. Both functions are shown in the quick check below.
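
A quick check of both from a psql session (using a partition like the demo_w created below):

select yb_server_region();
select yb_is_local_table('demo_w'::regclass);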

Here is a straightforward example showing both.

I have deployed a YugabyteDB cluster across three different regions: North, South and West:

yugabyte=# select host, cloud, region, zone from yb_servers();

   host    | cloud | region | zone
-----------+-------+--------+-------
 10.0.0.40 | cloud | north  | zone1
 10.0.0.41 | cloud | south  | zone2
 10.0.0.39 | cloud | west   | zone3
(3 rows)

In real life, you will have multiple nodes in each region, spread over three availability zones, and can set a Replication Factor of 3 to be resilient to failures or maintenance. Here, I create 3 tablespaces, one in each region, with Replication Factor 1:

create tablespace "north" with ( replica_placement= $$
{
    "num_replicas": 1,
    "placement_blocks": [ { "cloud": "cloud", "region": "north", "zone": "zone1"   , "min_num_replicas": 1 } ]
} $$) ;
create tablespace "south" with ( replica_placement= $$
{
    "num_replicas": 1,
    "placement_blocks": [ { "cloud": "cloud", "region": "south", "zone": "zone2"   , "min_num_replicas": 1 } ]
} $$) ;
create tablespace "west" with ( replica_placement= $$
{
    "num_replicas": 1,
    "placement_blocks": [ { "cloud": "cloud", "region": "west", "zone": "zone3"   , "min_num_replicas": 1 } ]
} $$) ;
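
One way to double-check that the placement options were recorded is to query the pg_tablespace catalog:

select spcname, spcoptions from pg_tablespace
 where spcname in ('north', 'south', 'west');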

I am planning to create a table named demo that will be partitioned based on the tenant_id column. The data will be divided into three partitions, namely demo_n, demo_s and demo_w, assigned to the North, South, and West regions, respectively.

I'll add the possibility of automating the process of specifying the tenant_id or the partition name. For this, I will use the YugabyteDB function yb_is_local_table() to identify the partition that corresponds to the region I am connected to.

To assign the tenant_id by default, I will create a function that returns the tenant_id for the partition that belongs to my current region. The function takes the table name as a prefix, appends the suffixes from my naming convention, and identifies which partition is local to my region.

create or replace function my_tenant_id(tablename text) 
 returns char as $SQL$
select case
 when yb_is_local_table((tablename||'_n')::regclass) then 'n'
 when yb_is_local_table((tablename||'_s')::regclass) then 's'
 when yb_is_local_table((tablename||'_w')::regclass) then 'w'
end;
$SQL$ language sql immutable
;

Another possibility, which will be faster, is to hardcode the mapping between the region name and the tenant_id for all tables:

create or replace function my_tenant_id() returns char as $$
select case yb_server_region()
 when 'north' then 'n'
 when 'south' then 's'
 when 'west' then 'w'
end;
$$ language sql immutable
;
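
A quick sanity check from any session shows the mapping for the region you are connected to:

select yb_server_region(), my_tenant_id();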

I create the partitioned table that uses this function to set the tenant_id when not provided in the insert statement:

create table demo (
 primary key(tenant_id, id)
 , tenant_id char default my_tenant_id()
 , id bigserial
 , message text
) partition by list (tenant_id);
create table demo_n partition of demo
  for values in ('n') tablespace north;
create table demo_s partition of demo
  for values in ('s') tablespace south;
create table demo_w partition of demo
  for values in ('w') tablespace west
;
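
To verify that each partition landed in the expected tablespace, you can join pg_class with pg_tablespace:

select c.relname, t.spcname
 from pg_class c join pg_tablespace t on t.oid = c.reltablespace
 where c.relname in ('demo_n', 'demo_s', 'demo_w');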

To query only the local partition, I define a RLS policy that filters on yb_is_local_table():

create policy l on demo to public
 using ( yb_is_local_table(tableoid) );
alter table demo enable row level security;

Note that because of issue #20860, which runs the function for each row (because of tableoid), it might be preferable to use the other solution, with the my_tenant_id() function:

create policy l on demo to public
 using ( tenant_id=my_tenant_id() );
alter table demo enable row level security;

You can choose one or the other. I'll show the results with both solutions.
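
Either way, you can check which policy is currently in place from the pg_policies view:

select policyname, qual from pg_policies
 where tablename = 'demo';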

Because I enabled but didn't force the policy, the owner of the table can insert and query data for all regions:

yugabyte=# insert into demo select 's',generate_series(1,10),'...';
INSERT 0 10
yugabyte=# insert into demo select 'n',generate_series(1,10),'...';
INSERT 0 10
yugabyte=# insert into demo select 'w',generate_series(1,10),'...';
INSERT 0 10

yugabyte=# explain (costs off, analyze) select * from demo
;
                             QUERY PLAN
---------------------------------------------------------------------
 Append (actual time=1.171..2.235 rows=30 loops=1)
  ->  Seq Scan on demo_n (actual time=1.169..1.173 rows=10 loops=1)
  ->  Seq Scan on demo_s (actual time=0.668..0.671 rows=10 loops=1)
  ->  Seq Scan on demo_w (actual time=0.378..0.382 rows=10 loops=1)
 Planning Time: 0.141 ms
 Execution Time: 2.277 ms
 Peak Memory Usage: 51 kB
(7 rows)
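
As a side note, if the residency rules must also apply to the table owner, PostgreSQL can force the policy on it as well:

alter table demo force row level security;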

To test that what I've declared restricts access to the local partition only, I'll connect as a user to whom the RLS policy applies:

create user franck;
grant all on demo to franck;
\c - franck

This user is connected to the West region and sees only the local data:

yugabyte=# select * from demo;

 tenant_id | id | message
-----------+----+---------
 w         |  1 | ...
 w         |  2 | ...
 w         |  3 | ...
 w         |  4 | ...
 w         |  5 | ...
 w         |  6 | ...
 w         |  7 | ...
 w         |  8 | ...
 w         |  9 | ...
 w         | 10 | ...
(10 rows)

yugabyte=# explain (costs off, analyze)
select * from demo;
                             QUERY PLAN
---------------------------------------------------------------------
 Append (actual time=0.669..0.829 rows=10 loops=1)
   ->  Seq Scan on demo_w (actual time=0.668..0.826 rows=10 loops=1)
         Filter: yb_is_local_table(tableoid)
 Planning Time: 0.172 ms
 Execution Time: 0.862 ms
 Peak Memory Usage: 47 kB
(6 rows)

Without specifying anything, the PostgreSQL RLS policy has added the yb_is_local_table(tableoid) filter and YugabyteDB has restricted the list of partitions to the ones in the local region where I'm connected.

With the other solution, where the policy uses my_tenant_id(), based on yb_server_region() and declared IMMUTABLE to be resolved at planning time, the result is even faster:

yugabyte=> explain (costs off, analyze)
           select * from demo
;
                                       QUERY PLAN
-----------------------------------------------------------------------------------------
 Append (actual time=0.915..0.923 rows=10 loops=1)
   ->  Index Scan using demo_w_pkey on demo_w (actual time=0.913..0.919 rows=10 loops=1)
         Index Cond: (tenant_id = 'w'::bpchar)
 Planning Time: 0.322 ms
 Execution Time: 0.950 ms
 Peak Memory Usage: 25 kB
(6 rows)

I can try to insert data for all regions but, thanks to the RLS policy, inserting into the regions that are not mine is rejected:

yugabyte=# insert into demo values ('s',11,'south');
ERROR:  42501: new row violates row-level security policy for table "demo"

yugabyte=# insert into demo values ('n',11,'north');
ERROR:  42501: new row violates row-level security policy for table "demo"

yugabyte=# insert into demo values ('w',11,'west');
INSERT 0 1


Additionally, my function can set my local region as the default for tenant_id:

yugabyte=# insert into demo(id,message)
 values (42,'Hello World')
 returning *;

 tenant_id | id |   message
-----------+----+-------------
 w         | 42 | Hello World
(1 row)

INSERT 0 1

Finally, as some WHERE clauses are added by the RLS policy, I'm checking that Batched Nested Loop joins still work, with and without mentioning the tenant_id:

yugabyte=# set yb_bnl_batch_size=1024;
SET

yugabyte=# explain (costs off, analyze)
select * from demo a join demo b using(tenant_id,id);
                                                                  QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
 YB Batched Nested Loop Join (actual time=164.455..164.668 rows=12 loops=1)
   Join Filter: ((a.tenant_id = b.tenant_id) AND (a.id = b.id))
   ->  Append (actual time=163.172..163.370 rows=12 loops=1)
         ->  Seq Scan on demo_w a (actual time=163.170..163.365 rows=12 loops=1)
               Filter: yb_is_local_table(tableoid)
   ->  Append (actual time=1.011..1.210 rows=12 loops=1)
         ->  Index Scan using demo_w_pkey on demo_w b (actual time=0.840..1.036 rows=12 loops=1)
               Index Cond: (ROW(tenant_id, id) = ANY (ARRAY[ROW(a.tenant_id, a.id), ROW($1, $1025), ROW($2, $1026), ..., ROW($1023, $2047)]))
               Filter: yb_is_local_table(tableoid)
 Planning Time: 2.287 ms
 Execution Time: 165.525 ms
 Peak Memory Usage: 1861 kB
(12 rows)

yugabyte=# explain (costs off, analyze)
select * from demo a join demo b using(tenant_id,id)
where a.tenant_id='w'
;
                                               QUERY PLAN
--------------------------------------------------------------------------------------------------------
 YB Batched Nested Loop Join (actual time=170.645..170.856 rows=12 loops=1)
   Join Filter: (a.id = b.id)
   ->  Append (actual time=168.278..168.477 rows=12 loops=1)
         ->  Index Scan using demo_w_pkey on demo_w a (actual time=168.276..168.472 rows=12 loops=1)
               Index Cond: (tenant_id = 'w'::bpchar)
               Filter: yb_is_local_table(tableoid)
   ->  Append (actual time=2.111..2.312 rows=12 loops=1)
         ->  Index Scan using demo_w_pkey on demo_w b (actual time=2.075..2.273 rows=12 loops=1)
               Index Cond: ((id = ANY (ARRAY[a.id, $1, $2, ..., $1023])) AND (tenant_id = 'w'::bpchar))
               Filter: yb_is_local_table(tableoid)
 Planning Time: 0.942 ms
 Execution Time: 171.395 ms
 Peak Memory Usage: 517 kB
(13 rows)

yugabyte=# explain (costs off, analyze)
select * from demo a join demo b using(tenant_id,id)
where b.tenant_id='w'
;
                                               QUERY PLAN
--------------------------------------------------------------------------------------------------------
 YB Batched Nested Loop Join (actual time=167.274..167.482 rows=12 loops=1)
   Join Filter: (a.id = b.id)
   ->  Append (actual time=166.280..166.492 rows=12 loops=1)
         ->  Index Scan using demo_w_pkey on demo_w a (actual time=166.278..166.488 rows=12 loops=1)
               Index Cond: (tenant_id = 'w'::bpchar)
               Filter: yb_is_local_table(tableoid)
   ->  Append (actual time=0.727..0.924 rows=12 loops=1)
         ->  Index Scan using demo_w_pkey on demo_w b (actual time=0.707..0.901 rows=12 loops=1)
               Index Cond: ((id = ANY (ARRAY[a.id, $1, $2, ..., $1023])) AND (tenant_id = 'w'::bpchar))
               Filter: yb_is_local_table(tableoid)
 Planning Time: 0.946 ms
 Execution Time: 167.662 ms
 Peak Memory Usage: 517 kB
(13 rows)

With the policy using yb_server_region(), it is much faster:

yugabyte=> explain (costs off, analyze)
select * from demo a join demo b using(tenant_id,id);
                                               QUERY PLAN

--------------------------------------------------------------------------------------------------------
 YB Batched Nested Loop Join (actual time=1.147..1.166 rows=12 loops=1)
   Join Filter: (a.id = b.id)
   ->  Append (actual time=0.676..0.685 rows=12 loops=1)
         ->  Index Scan using demo_w_pkey on demo_w a (actual time=0.675..0.681 rows=12 loops=1)
               Index Cond: (tenant_id = 'w'::bpchar)
   ->  Append (actual time=0.414..0.423 rows=12 loops=1)
         ->  Index Scan using demo_w_pkey on demo_w b (actual time=0.397..0.403 rows=12 loops=1)
               Index Cond: ((id = ANY (ARRAY[a.id, $1, $2, ..., $1023])) AND (tenant_id = 'w'::bpchar))
 Planning Time: 0.975 ms
 Execution Time: 1.314 ms
 Peak Memory Usage: 502 kB
(11 rows)

So, until issue #20860 is fixed, it is better to use (tenant_id=my_tenant_id()) rather than (yb_is_local_table(tableoid)).
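
If you started with the yb_is_local_table() policy, switching is a simple drop and re-create:

drop policy l on demo;
create policy l on demo to public
 using ( tenant_id = my_tenant_id() );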

By using this solution, you can ensure that your data complies with data residency requirements, as it will be stored in the appropriate region. The application is only responsible for connecting to the correct region. The technology used to restrict the tables and indexes to the local region is well-established and reliable: PostgreSQL Row Level Security. The load balancer in YugabyteDB, which re-distributes tablets within the placement blocks, guarantees that data is stored in the correct region at the storage level. Additionally, it's compatible with high availability (when you have multiple nodes in each region) and elasticity (data is rebalanced to new nodes within the same region).
