Franck Pachot for YugabyteDB Distributed PostgreSQL Database

Data Rollup in YugabyteDB🚀 and PostgreSQL🐘

When you load time series from sensors, you store all the details for fine-grained analysis. But if you keep this level of detail, the database grows. With YugabyteDB, this is not a technical problem because you can scale out the cluster. But you probably don't need to keep this level of detail forever.

One solution in NoSQL databases is to enable TTL (Time To Live) to purge old data when expired. You can do that with YugabyteDB YCQL tables (the Cassandra-like API). But with YSQL you can do better: keep a summary of old data, aggregated to a higher level, for example per day instead of per millisecond.
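
For reference, here is a minimal sketch of what the TTL approach looks like in YCQL (the table name and the one-week retention below are assumptions for illustration): each table can set a default_time_to_live, in seconds, after which rows expire automatically.

-- YCQL sketch (hypothetical table): rows expire one week (604800 seconds) after insert
create table events_by_device (
  device_id text,
  ts        bigint,
  measure   float,
  primary key ((device_id), ts)
) with default_time_to_live = 604800;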

Here is an example where I keep the details for one week and roll up older events, summarizing them by day.

Demo table: events

I create a demo table events with a timestamp stored as a bigint epoch, a device_id, and a measure that can be averaged.


create extension if not exists pgcrypto;

create table events (
            primary key(id)
            ,id uuid not null default gen_random_uuid()
            ,timestamp bigint not null
            ,device_id text not null
            ,measure real
);

create index events_timestamp on events(timestamp desc, id);


For my sample data, I generate random values for 3 devices: measures between 0 and 1 and a random timestamp within the past month, for a total of 10000 events per device.

with
devices as (
 select
  to_char(1000*random(),'XFM099') as device_id
 from generate_series(1,3) n          -- 3 devices
),
timestamps as (
 select
  extract (epoch from now()-random()*interval '1 month') as timestamp  -- random point within the past month
 from generate_series(1,10000) n      -- 10000 events per device
)
insert into events(device_id, timestamp, measure)
 select device_id, timestamp, random()
 from timestamps, devices
;

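Before going further, a quick sanity check (not part of the original demo, just a convenience query) should show 3 devices with 10000 events each, spread over the past month:

-- expect 3 devices, 10000 events each
select device_id, count(*), min(to_timestamp(timestamp)), max(to_timestamp(timestamp))
from events
group by device_id
order by device_id;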

Of course, you can test with more rows (I checked the execution plan; this scales thanks to the index on the timestamp). You can store the timestamp as a timestamptz instead of a bigint, or define the primary key as (device_id hash, timestamp desc) rather than a uuid. This depends on your data, but the idea is the same.
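
As a minimal sketch of that variation (events_alt is a hypothetical name, not the table used in this demo; the hash and desc options in the primary key are YugabyteDB-specific and would be omitted in vanilla PostgreSQL):

-- hypothetical variation: timestamptz column and a range-sharded key instead of a uuid
create table events_alt (
            primary key(device_id hash, timestamp desc)
            ,device_id text not null
            ,timestamp timestamptz not null
            ,measure real
);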

Before rollup

To visualize those events, I've built a Grafana Dashboard Panel of type "Time series" on SELECT device_id, timestamp as time, measure FROM events order by time. The line interpolation is "Step before" because I'll roll up to the end timestamp of each group:

[screenshot: Grafana panel before the rollup]
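
For readability, here is that panel query as a code block; the commented $__unixEpochFilter() line is an optional Grafana macro (PostgreSQL data source) to push the dashboard time range down to the epoch column, not something the original panel requires:

-- Grafana "Time series" panel query (epoch seconds in the time column)
SELECT device_id, timestamp as time, measure
FROM events
-- WHERE $__unixEpochFilter(timestamp)   -- optional: restrict to the dashboard time range
ORDER BY time;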

I displayed the legend with the number of events: 9973 in this "Last 30 days" window.

There is too much detail in this visualization. The 3 devices have the same average values, but the blue one looks higher just because it is displayed on top. This level of detail is interesting only when zooming in. A view on a monthly time window should aggregate the events by day.

I could do it with a GROUP BY in the SQL query, but I'll do better: replace the details older than one week with their per-day average.
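
For comparison, that read-time aggregation (the approach I'm not taking here) would look roughly like this:

-- read-time alternative: aggregate per device and per day at query time,
-- leaving the stored detail untouched
select
  device_id
  ,date_trunc('day', to_timestamp(timestamp)) as time
  ,avg(measure) as measure
from events
group by device_id, date_trunc('day', to_timestamp(timestamp))
order by time;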

Rollup past data

I can do this with a single SQL query that:

  • finds the rows to delete: those with a timestamp older than one week
  • DELETEs those rows from the distributed storage and returns them to the SQL layer
  • aggregates them with a GROUP BY device and by day
  • INSERTs this summary with the highest timestamp of each window

And because SQL is ACID, no additional code is required to:

  • handle failure scenarios: this operation is atomic and is rolled back if it cannot complete for any reason
  • handle concurrency conflicts: even if a late sensor inserts a value in the time window that is being aggregated, everything stays consistent, with no event missed and no duplicates. Concurrent users reading the same time window will always see a non-overlapping set of aggregates or details
  • handle sync and async replication: with YugabyteDB the replication for high availability or local reads is automatic and consistent

Here is the SQL query that declares the rows to be deleted and their rollup as CTEs (Common Table Expressions) in a WITH clause, and then inserts from them:

with

delete as (
 delete from events
 where timestamp < extract(epoch from now() - interval '1 week')::bigint  -- keep one week of detail
 returning *
),

rollup as (
 select
  device_id
  ,max(timestamp) as timestamp
  ,avg(measure) as measure                     -- aggregation function
 from delete
 group by
  device_id
  ,date_trunc('day',to_timestamp(timestamp))   -- rollup granularity: one day
)

insert into events (device_id, timestamp, measure)
select * from rollup
;


Of course, all variations are possible if you have different columns, aggregation functions, or time windows, or if you want to store the aggregates in another table. You have all the power of PostgreSQL for this.
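
As one sketch of that last variation, the summary could go to a dedicated table (events_daily is a hypothetical name) while the events table keeps only the recent detail:

-- hypothetical variation: write the per-day summary to a separate table
create table if not exists events_daily (
             device_id text not null
            ,timestamp bigint not null
            ,measure real
);

with deleted as (
 delete from events
 where timestamp < extract(epoch from now() - interval '1 week')::bigint
 returning *
)
insert into events_daily (device_id, timestamp, measure)
select device_id, max(timestamp), avg(measure)
from deleted
group by device_id, date_trunc('day', to_timestamp(timestamp));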

Here is the same Grafana Dashboard after a simple refresh:
[screenshot: Grafana panel after the rollup]

There are 2295 points per device, instead of 9973 before the rollup.

On a large time window of 30 days, I can see the per-day average (around 0.5 for each device, as I built it with random()). For the past week, I still have all the details, too many for this visualization. This allows keeping a growing history without increasing the database size too much.

Here is a zoom into the day one week ago, where it is easy to see the change in granularity (per-day before that point, per-event after it):
[screenshot: zoom on the day one week ago]

All this works the same in YugabyteDB and in PostgreSQL. YugabyteDB adds scalability (great for time series with high-throughput data ingestion) and resilience (the database stays online even during upgrades or failures). You can simply schedule this query execution without any risk of data corruption.
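
For example, if the pg_cron extension is available in your cluster (an assumption; any external scheduler does the job as well), the rollup can be scheduled to run every night at 03:00:

-- assumes the pg_cron extension is installed and enabled
create extension if not exists pg_cron;

select cron.schedule('daily-rollup', '0 3 * * *', $$
 with delete as (
  delete from events
  where timestamp < extract(epoch from now() - interval '1 week')::bigint
  returning *
 ), rollup as (
  select device_id, max(timestamp) as timestamp, avg(measure) as measure
  from delete
  group by device_id, date_trunc('day', to_timestamp(timestamp))
 )
 insert into events (device_id, timestamp, measure)
 select * from rollup
$$);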
