davidkohn88 for Timescale

Originally published at blog.timescale.com

How PostgreSQL aggregation works and how it inspired our hyperfunctions’ design

Table of contents

  1. A primer on PostgreSQL aggregation (through pictures)
  2. Two-step aggregation in TimescaleDB hyperfunctions
  3. Why we use the two-step aggregate design pattern
  4. Two-step aggregation + continuous aggregates in TimescaleDB
  5. An example of how the two-step aggregate design impacts hyperfunctions’ code
  6. Summing it up

Get a primer on PostgreSQL aggregation, how PostgreSQL’s implementation inspired us as we built TimescaleDB hyperfunctions and its integrations with advanced TimescaleDB features – and what this means for developers.

At Timescale, our goal is to always focus on the developer experience, and we take great care to design our products and APIs to be developer-friendly. We believe that when our products are easy to use and accessible to a wide range of developers, we enable them to solve a breadth of different problems – and thus build solutions that solve big problems.

This focus on developer experience is why we made the decision early in the design of TimescaleDB to build on top of PostgreSQL. We believed then, as we do now, that building on the world’s fastest-growing database would have numerous benefits for our users.

Perhaps the biggest of these advantages is developer productivity: developers can use the tools and frameworks they know and love and bring all of their SQL skills and expertise.

Today, there are nearly three million active TimescaleDB databases running mission-critical time-series workloads across industries. Time-series data comes at you fast, sometimes generating millions of data points per second (read more about time-series data). Because of this volume and rate of information, time-series data is complex to query and analyze. We built TimescaleDB as a purpose-built relational database for time-series to reduce that complexity so that developers can focus on their applications.

So, we’re built with developer experience at our core, and we’ve continually released functionality to further this aim, including continuous aggregates, user-defined actions, informational views, and most recently, TimescaleDB hyperfunctions: a series of SQL functions within TimescaleDB that make it easier to manipulate and analyze time-series data in PostgreSQL with fewer lines of code.

To ensure we stay focused on developer experience as we plan new hyperfunctions features, we established a set of “design constraints” that guide our development decisions. Adhering to these guidelines ensures our APIs:

  • Work within the SQL language (no new syntax, just functions and aggregates)
  • Are intuitive for new and experienced SQL users alike
  • Are useful for just a few rows of data and high-performance over billions of rows
  • Play nicely with all TimescaleDB features and, ideally, make them more useful to users
  • Make fundamental things simple and more advanced analyses possible

What does this look like in practice? In this post, I explain how these constraints led us to adopt two-step aggregation throughout TimescaleDB hyperfunctions, how two-step aggregates interact with other TimescaleDB features, and how PostgreSQL's internal aggregation API influenced our implementation.

When we talk about two-step aggregation, we mean the following calling convention:
```SQL
SELECT average(time_weight('LOCF', value)) as time_weighted_average FROM foo;
-- or
SELECT approx_percentile(0.5, percentile_agg(value)) as median FROM bar;
```

Here, we have an inner aggregate call (`time_weight('LOCF', value)` and `percentile_agg(value)` in the examples above) and an outer accessor call (`average(time_weight('LOCF', value))` and `approx_percentile(0.5, percentile_agg(value))`).

We chose this design pattern over the more common - and seemingly simpler - one-step aggregation approach, in which a single function encapsulates the behavior of both the inner aggregate and outer accessor:
```SQL
-- NB: THIS IS AN EXAMPLE OF AN API WE DECIDED NOT TO USE, IT DOES NOT WORK
SELECT time_weighted_average('LOCF', value) as time_weighted_average FROM foo;
-- or
SELECT approx_percentile(0.5, value) as median FROM bar;
```

Read on for more on why the one-step aggregate approach quickly breaks down as you start doing more complex things (like composing functions into more advanced queries) and how, under the hood, almost all PostgreSQL aggregates do a version of two-step aggregation. You’ll learn how the PostgreSQL implementation inspired us as we built TimescaleDB hyperfunctions, continuous aggregates, and other advanced features – and what this means for developers.

If you’d like to get started with hyperfunctions right away, create your free trial account and start analyzing 🔥. (TimescaleDB hyperfunctions are pre-installed on every Timescale Forge instance, our hosted cloud-native relational time-series data platform).


A primer on PostgreSQL aggregation (through pictures)

When I first started learning about PostgreSQL 5 or 6 years ago (I was an electrochemist, and dealing with lots of battery data, as mentioned in my last post on time-weighted averages), I ran into some performance issues. I was trying to better understand what was going on inside the database in order to improve its performance – and that’s when I found Bruce Momjian’s talks on PostgreSQL Internals Through Pictures. Bruce is well known in the community for his insightful talks (and his penchant for bow ties), and his sessions were a revelation for me.

They’ve served as a foundation for my understanding of how PostgreSQL works ever since. He explained things so clearly, and I’ve always learned best when I can visualize what’s going on, so the “through pictures” part really helped - and stuck with - me.

So this next bit is my attempt to channel Bruce by explaining some PostgreSQL internals through pictures. Cinch up your bow ties and get ready for some learnin’.

A GIF of the author finishing tying a bow tie and fixing his shirt.
The author pays homage to Bruce Momjian (and looks rather pleased with himself because he’s managed to tie a bow tie on the first try).

PostgreSQL aggregates vs. functions

We have written about how we use custom functions and aggregates to extend SQL, but we haven’t exactly explained the difference between them.

The fundamental difference between an aggregate function and a “regular” function in SQL is that an aggregate produces a single result from a group of related rows, while a regular function produces a result for each row:

A side-by-side diagram depicting an “aggregate” side and a “function” side and how each product results. There are three individual rows on the aggregate side, with arrows that point to a single result; on the function side, there are three individual rows, with arrows that point to three different results (one per row).
In SQL, aggregates produce a result from multiple rows, while functions produce a result per row.


This is not to say that a function can’t have inputs from multiple columns; they just have to come from the same row.

Another way to think about it is that functions often act on rows, whereas aggregates act on columns. To illustrate this, let’s consider a theoretical table `foo` with two columns:

```SQL
CREATE TABLE foo(
    bar DOUBLE PRECISION,
    baz DOUBLE PRECISION);
```

And just a few values, so we can easily see what’s going on:

```SQL
INSERT INTO foo(bar, baz) VALUES (1.0, 2.0), (2.0, 4.0), (3.0, 6.0);
```

The function greatest() will produce the largest of the values in columns bar and baz for each row:

```SQL
SELECT greatest(bar, baz) FROM foo;

 greatest 
----------
        2
        4
        6
```

Whereas the aggregate max() will produce the largest value from each column:

```SQL
SELECT max(bar) as bar_max, max(baz) as baz_max FROM foo;

 bar_max | baz_max 
---------+---------
       3 |       6
```



Using the above data, here’s a picture of what happens when we aggregate something: 
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwomr9qpp8o3p461swg4.jpg" alt="A diagram showing how the statement: `SELECT max(bar) FROM foo;` works: multiple rows with values of “bar equal to” 1.0, 2.0, and 3.0, go through the `max(bar)` aggregate to ultimately produce a result of 3.0. " style="width:100%">
<figcaption align = "center">The `max()` aggregate gets the largest value from multiple rows.</figcaption>
</figure>
<p>
The aggregate takes inputs from multiple rows and produces a single result. That’s the main difference between it and a function, but how does it do that? Let’s look at what it’s doing under the hood.

### Aggregate internals: row-by-row
Under the hood, aggregates in PostgreSQL work row-by-row. But, then how does an aggregate know anything about the previous rows? 

Well, an aggregate stores some state about the rows it has previously seen, and as the database sees new rows, it updates that internal state. 

For the `max()` aggregate we’ve been discussing, the internal state is simply the largest value we’ve collected so far. 

Let’s take this step-by-step. 

When we start, our internal state is `NULL` because we haven’t seen any rows yet:

![Flowchart arrow diagram representing the max open parens bar close parens aggregate, with three rows below the arrow where bar is equal to 1.0, 2.0, and 3.0, respectively. There is a box in the arrow in which the state is equal to NULL. ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/edvpx1w1ga0qori6dwul.jpg)

Then, we get our first row in: 
![The same flowchart arrow diagram, except that row one, with bar equal to 1.0, has moved from below the arrow _into_ the arrow.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a7q3ejwkxieanvy4idem.jpg)

Since our state is `NULL`, we initialize it to the first value we see:
![The same flowchart diagram, except that row one has moved _out_ of the arrow, and the state has been updated from NULL to the 1.0, row one’s value.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6izuxd45sevfvwjvh8ct.jpg)

Now, we get our second row: 
![The same flowchart diagram, except that row two has moved into the arrow representing the max aggregate. ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/37jr6glndj13fdim3ehr.jpg)

And we see that the value of bar (2.0) is greater than our current state (1.0), so we update the state:
![The same diagram, except that row two has moved out of the max aggregate, and the state has been updated to the largest value (the value of row two, 2.0). ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvr3udolmos6v01wo59e.jpg)

Then, the next row comes into the aggregate:
![The same diagram, except that row three has moved into the arrow representing the max aggregate.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pjbniuy0xrdqsg5ufja.jpg)

We compare it to our current state, take the greatest value, and update our state: 
![The same diagram, except that row three has moved out of the max aggregate, and the state has been updated to the largest value, the value of the third row, 3.0.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kfkp9q9wf5gtey3k752b.jpg)

Finally, we don’t have any more rows to process, so we output our result:
![The same diagram, now noting that there are “no more rows” to process, and including a final result, 3.0, being output at the end of the arrow.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ye5u1cdnqk3tslrpd225.jpg)

So, to summarize, each row comes in, gets compared to our current state, and then the state gets updated to reflect the new greatest value. Then the next row comes in, and we repeat the process until we’ve processed all our rows and output the result.
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbqf9uvopm9xq9skfwwd.gif" alt="A GIF depicting the previous diagrams, one after the other, as the rows move through the aggregate." style="width:100%">
<figcaption align = "center">The max aggregate aggregation process, told in GIFs.</figcaption>
</figure>
<p>
There’s a name for the function that processes each row and updates the internal state: the **[state transition function](https://www.postgresql.org/docs/current/sql-createaggregate.html)** (or just “transition function” for short.) The transition function for an aggregate takes the current state and the value from the incoming row as arguments and produces a new state. 

It’s defined like this, where `current_value` represents values from the incoming row, `current_state` represents the current aggregate state built up over the previous rows (or NULL if we haven’t yet gotten any), and `next_state` represents the output after analyzing the incoming row:


```SQL
next_state = transition_func(current_state, current_value)
```
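
To make this concrete, here’s a minimal sketch of how you could declare a max-style aggregate yourself with PostgreSQL’s `CREATE AGGREGATE`, where `SFUNC` is the transition function and `STYPE` is the state type. The `my_max` name and its helper function are hypothetical, illustrative only:

```SQL
-- A minimal sketch: a max-style aggregate over DOUBLE PRECISION.
-- The state (STYPE) is just the largest value seen so far.
CREATE FUNCTION my_max_transition(current_state DOUBLE PRECISION, current_value DOUBLE PRECISION)
RETURNS DOUBLE PRECISION AS $$
    SELECT greatest(current_state, current_value);
$$ LANGUAGE SQL IMMUTABLE STRICT;

CREATE AGGREGATE my_max(DOUBLE PRECISION) (
    SFUNC = my_max_transition,  -- the state transition function, run row by row
    STYPE = DOUBLE PRECISION    -- the internal state type
    -- no INITCOND: with a STRICT transition function, PostgreSQL uses the
    -- first non-null value it sees as the initial state, just like max()
);

-- SELECT my_max(bar) FROM foo;  -- returns 3.0, like max(bar)
```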



### Aggregate internals: composite state
So, the `max()` aggregate has a straightforward state that contains just one value (the largest we’ve seen). But not all aggregates in PostgreSQL have such a simple state. 

Let’s consider the aggregate for average (`avg`):


```SQL
SELECT avg(bar) FROM foo;
```



To refresh, an average is defined as: 


avg(x) = \frac{sum(x)}{count(x)}

To calculate it, we store the sum and the count as our internal state and update our state as we process rows:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pjuxskwee16306gnl48o.gif" alt="A GIF of the aggregation process for the statement </span><span class="k">SELECT</span> <span class="k">avg</span><span class="p">(</span><span class="n">bar</span><span class="p">)</span> <span class="k">FROM</span> <span class="n">foo</span><span class="p">;</span><span class="nv">, with diagrams similar to the previous. Three rows with values of bar equal to 1.0, 2.0, and 3.0 go through the aggregate, and the transition function updates the state, which has two values, each starting NULL, the sum is updated at each step by adding the value of the incoming row, and the count is incremented.

" style="width:100%">
<figcaption align = "center">The </span><span class="k">avg</span><span class="p">()</span><span class="nv"> aggregation process, told in GIFs. For </span><span class="k">avg</span><span class="p">()</span><span class="nv">, the transition function must update a more complex state since the sum and count are stored separately at each aggregation step.</figcaption>
</figure>
<p>
But, when we’re ready to output our result for </span><span class="k">avg</span><span class="nv">, we need to divide </span><span class="k">sum</span><span class="nv"> by </span><span class="k">count</span><span class="nv">:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to9t3tjqp6bj8rpol45m.jpg" alt="An arrow flowchart diagram similar to those before, showing the end state of the avg aggregate. The rows have moved through the aggregate, and the state is 6.0 - the sum and three - the count. There are then some question marks and an end result of 2.0." style="width:100%">
<figcaption align = "center"> For some aggregates, we can output the state directly – but for others, we need to perform an operation on the state before calculating our final result. </figcaption>
</figure>
<p>
There’s another function inside the aggregate that performs this calculation: the final function. Once we’ve processed all the rows, the final function takes the state and does whatever it needs to produce the result.

It’s defined like this, where `final_state` represents the output of the transition function after it has processed all the rows:

```SQL
result = final_func(final_state)
```

And, through pictures:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8ecxml9yt9lwxa28dih.gif" alt="A GIF that starts the same as the previous GIF, the avg aggregate state is updated as rows pass through the aggregate. Once all the rows are processed, a final function step divides the final sum - 6.0 by the final count - 3 and outputs the result - 2.0. " style="width:100%">
<figcaption align = "center"> How the average aggregate works, told in GIFs. Here, we’re highlighting the role of the final function. </figcaption>
</figure>
<p>

To summarize: as an aggregate scans over rows, its transition function updates its internal state. Once the aggregate has scanned all of the rows, its final function produces a result, which is returned to the user.
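
Continuing the hypothetical sketch from above, an avg-style aggregate needs both pieces: a transition function that maintains a composite (sum, count) state and a final function that divides one by the other. Again, `my_avg` and its helpers are illustrative names, not the real implementation:

```SQL
-- A sketch of an avg-style aggregate with a composite state:
-- state[1] holds the running sum, state[2] holds the running count.
CREATE FUNCTION my_avg_transition(state DOUBLE PRECISION[], val DOUBLE PRECISION)
RETURNS DOUBLE PRECISION[] AS $$
    SELECT ARRAY[state[1] + val, state[2] + 1];
$$ LANGUAGE SQL IMMUTABLE STRICT;

CREATE FUNCTION my_avg_final(state DOUBLE PRECISION[])
RETURNS DOUBLE PRECISION AS $$
    -- the final function turns the (sum, count) state into the result
    SELECT CASE WHEN state[2] = 0 THEN NULL ELSE state[1] / state[2] END;
$$ LANGUAGE SQL IMMUTABLE;

CREATE AGGREGATE my_avg(DOUBLE PRECISION) (
    SFUNC     = my_avg_transition,
    STYPE     = DOUBLE PRECISION[],
    FINALFUNC = my_avg_final,
    INITCOND  = '{0,0}'   -- start with sum = 0, count = 0
);
```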

Improving the performance of aggregate functions

One interesting thing to note here: the transition function is called many, many more times than the final function: once for each row, whereas the final function is called once per group of rows.

Now, the transition function isn’t inherently more expensive than the final function on a per-call basis – but because there are usually orders of magnitude more rows going into the aggregate than coming out, the transition function step becomes the most expensive part very quickly. This is especially true when you have high volume time-series data being ingested at high rates; optimizing aggregate transition function calls is important for improving performance.

Luckily, PostgreSQL already has ways to optimize aggregates.

Parallelization and the combine function

Because the transition function is run on each row, some enterprising PostgreSQL developers asked: what if we parallelized the transition function calculation?

Let’s revisit our definitions for transition functions and final functions:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="n">next_state</span> <span class="o">=</span> <span class="n">transition_func</span><span class="p">(</span><span class="n">current_state</span><span class="p">,</span> <span class="n">current_value</span><span class="p">)</span>
<span class="k">result</span> <span class="o">=</span> <span class="n">final_func</span><span class="p">(</span><span class="n">final_state</span><span class="p">)</span>
<span class="nv">`</span><span class="se">

We can run this in parallel by instantiating multiple copies of the transition function and handing a subset of rows to each instance. Then, each parallel aggregate will run the transition function over the subset of rows it sees, producing multiple (partial) states, one for each parallel aggregate. But, since we need to aggregate over the entire data set, we can’t run the final function on each parallel aggregate separately because they only have some of the rows.

So, now we’ve ended up in a bit of a pickle: we have multiple partial aggregate states, and the final function is only meant to work on the single, final state - right before we output the result to the user.

To solve this problem, we need a new type of function that takes two partial states and combines them into one so that the final function can do its work. This is (aptly) called the combine function.

We can run the combine function iteratively over all of the partial states that are created when we parallelize the aggregate.

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="n">combined_state</span> <span class="o">=</span> <span class="n">combine_func</span><span class="p">(</span><span class="n">partial_state_1</span><span class="p">,</span> <span class="n">partial_state_2</span><span class="p">)</span>
<span class="nv">`</span><span class="se">

For instance, in </span><span class="k">avg</span><span class="nv">, the combine function will add up the counts and sums.
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ei54hrl4gy30hajmboys.gif" alt="A GIF that starts the same as the previous GIF, the avg aggregate state is updated as rows pass through the aggregate. Once all the rows are processed, a final function step divides the final sum - 6.0 by the final count - 3 and outputs the result - 2.0. " style="width:100%">
<figcaption align = "center"> How parallel aggregation works, told in GIFs. Here, we’re highlighting the combine function (We’ve added a couple more rows to illustrate parallel aggregation.)
</figcaption>
</figure>
<p>

Then, after we have the combined state from all of our parallel aggregates, we run the final function and get our result.
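
Here’s a sketch of how that looks in the `CREATE AGGREGATE` API, building on the hypothetical `my_avg` helpers above: the combine function merges two partial (sum, count) states, and marking the aggregate `PARALLEL = SAFE` lets the planner parallelize it.

```SQL
-- A sketch: add a combine function so the hypothetical avg-style aggregate
-- can be computed by parallel workers and merged afterwards.
CREATE FUNCTION my_avg_combine(state1 DOUBLE PRECISION[], state2 DOUBLE PRECISION[])
RETURNS DOUBLE PRECISION[] AS $$
    -- sum the sums and sum the counts from the two partial states
    SELECT ARRAY[state1[1] + state2[1], state1[2] + state2[2]];
$$ LANGUAGE SQL IMMUTABLE PARALLEL SAFE;

CREATE AGGREGATE my_parallel_avg(DOUBLE PRECISION) (
    SFUNC       = my_avg_transition,
    STYPE       = DOUBLE PRECISION[],
    FINALFUNC   = my_avg_final,
    COMBINEFUNC = my_avg_combine,  -- merges partial states from workers
    INITCOND    = '{0,0}',
    PARALLEL    = SAFE
);
```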

Deduplication <a name="deduplication"></a>

Parallelization and the combine function are one way to reduce the cost of calling an aggregate, but it’s not the only way.

One other built-in PostgreSQL optimization that reduces an aggregate’s cost occurs in a statement like this:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span> <span class="k">avg</span><span class="p">(</span><span class="n">bar</span><span class="p">),</span> <span class="k">avg</span><span class="p">(</span><span class="n">bar</span><span class="p">)</span> <span class="o">/</span> <span class="mi">2</span> <span class="k">AS</span> <span class="n">half_avg</span> <span class="k">FROM</span> <span class="n">foo</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

PostgreSQL will optimize this statement to evaluate the </span><span class="k">avg</span><span class="p">(</span><span class="n">bar</span><span class="p">)</span><span class="nv"> calculation only once and then use that result twice.

And what if we have different aggregates with the same transition function but different final functions? PostgreSQL further optimizes by running the transition function (the expensive part) over all the rows just once and then applying each final function to the shared state. Pretty neat!
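
For example (an illustration using PostgreSQL’s built-in aggregates and the float columns above), `avg()`, `stddev()`, and `variance()` on `DOUBLE PRECISION` share the same transition function and differ only in their final functions, so a query like this can reuse one accumulated state:

```SQL
-- The expensive per-row accumulation can be shared across these aggregates;
-- only the cheap final-function step differs between them.
SELECT avg(bar), stddev(bar), variance(bar) FROM foo;
```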

Now, that’s not all that PostgreSQL aggregates can do, but it’s a pretty good tour, and it’s enough to get us where we need to go today.


Two-step aggregation in TimescaleDB hyperfunctions<a name="two-step-in-tsdb"></a>

In TimescaleDB, we’ve implemented the two-step aggregation design pattern for our aggregate functions. This generalizes the PostgreSQL internal aggregation API and exposes it to the user via our aggregates, accessors, and rollup functions. (In other words, each of the internal PostgreSQL functions has an equivalent function in TimescaleDB hyperfunctions.)

As a refresher, when we talk about the two-step aggregation design pattern, we mean the calling convention shown at the top of this post, where we have an inner aggregate call (`time_weight('LOCF', value)` or `percentile_agg(value)`) and an outer accessor call (`average(time_weight('LOCF', value))` or `approx_percentile(0.5, percentile_agg(value))`).

The inner aggregate call returns the internal state, just like the transition function does in PostgreSQL aggregates.

The outer accessor call takes the internal state and returns a result to the user, just like the final function does in PostgreSQL.

We also have special `rollup` functions defined for each of our aggregates that work much like PostgreSQL combine functions.
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgz5f9gw8cbanb0an8wg.jpg" alt="A table with columns labeled: the PostgreSQL internal aggregation API, Two-step aggregate equivalent, and TimescaleDB hyperfunction example. In the first row, we have the transition function equivalent to the aggregate, and the examples are time_weight() and percentile_agg(). In the second row, we have the final function, equivalent to the accessor, and the examples are average() and approx_percentile(). In the third row, we have the combine function equivalent to rollup in two-step aggregates, and the example is rollup().">
<figcaption align = "center">PostgreSQL internal aggregation APIs and their TimescaleDB hyperfunctions’ equivalent</figcaption>
</figure>
<p>


Why we use the two-step aggregate design pattern<a name="3"></a>

There are four basic reasons we expose the two-step aggregate design pattern to users rather than leave it as an internal structure:

  1. Allow multi-parameter aggregates to re-use state, making them more efficient
  2. Cleanly distinguish between parameters that affect aggregates vs. accessors, making performance implications easier to understand and predict
  3. Enable easy to understand rollups, with logically consistent results, in continuous aggregates and window functions (one of our most common requests on continuous aggregates)
  4. Allow easier retrospective analysis of downsampled data in continuous aggregates as requirements change, even after the raw data has been dropped

That’s a little theoretical, so let’s dive in and explain each one.

Re-using state

PostgreSQL is very good at optimizing statements (as we saw earlier in this post, through pictures 🙌), but you have to give it things in a way it can understand.

For instance, when we talked about deduplication, we saw that PostgreSQL could “figure out” when a statement occurs more than once in a query (i.e., `avg(bar)`) and only run the statement a single time to avoid redundant work:

```SQL
SELECT avg(bar), avg(bar) / 2 AS half_avg FROM foo;
```

This works because the `avg(bar)` occurs multiple times without variation.

However, if I write the equation in a slightly different way and move the division inside the parentheses so that the expression `avg(bar)` doesn’t repeat so neatly, PostgreSQL can’t figure out how to optimize it:

```SQL
SELECT avg(bar), avg(bar / 2) AS half_avg FROM foo;
```

It doesn’t know that halving each value before averaging gives the same result as halving the average, so it can’t tell that the two queries are equivalent.

This is a complicated problem for database developers to solve, and thus, as a PostgreSQL user, you need to make sure to write your query in a way that the database can understand.

Performance problems caused by statements that are equivalent but that the database can’t recognize as equivalent (or that are equivalent only in your specific case, not in general) can be some of the trickiest SQL optimization issues to figure out as a user.

Therefore, when we design our APIs, we try to make it hard for users to unintentionally write low-performance code: in other words, the default option should be the high-performance option.

For the next bit, it’ll be useful to have a simple table defined as:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">CREATE</span> <span class="k">TABLE</span> <span class="n">foo</span><span class="p">(</span>
<span class="n">ts</span> <span class="n">timestamptz</span><span class="p">,</span>
<span class="n">val</span> <span class="nb">DOUBLE</span> <span class="nb">PRECISION</span><span class="p">);</span>
<span class="nv">`</span><span class="se">

Let’s look at an example of how we use two-step aggregation in the percentile approximation hyperfunction to allow PostgreSQL to optimize performance.

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">1</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">(</span><span class="n">val</span><span class="p">))</span> <span class="k">as</span> <span class="n">p10</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">(</span><span class="n">val</span><span class="p">))</span> <span class="k">as</span> <span class="n">p50</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">9</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">(</span><span class="n">val</span><span class="p">))</span> <span class="k">as</span> <span class="n">p90</span>
<span class="k">FROM</span> <span class="n">foo</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

...is treated as the same as:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">1</span><span class="p">,</span> <span class="n">pct_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">p10</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">pct_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">p50</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">9</span><span class="p">,</span> <span class="n">pct_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">p90</span>
<span class="k">FROM</span>
<span class="p">(</span><span class="k">SELECT</span> <span class="n">percentile_agg</span><span class="p">(</span><span class="n">val</span><span class="p">)</span> <span class="k">as</span> <span class="n">pct_agg</span> <span class="k">FROM</span> <span class="n">foo</span><span class="p">)</span> <span class="n">pct</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

This calling convention allows us to use identical aggregates so that, under the hood, PostgreSQL can deduplicate calls to the identical aggregates (and is faster as a result).

Now, let’s compare this to the one-step aggregate approach.

PostgreSQL can’t deduplicate aggregate calls here because the extra parameter in the `approx_percentile` aggregate changes with each call:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="c1">-- NB: THIS IS AN EXAMPLE OF AN API WE DECIDED NOT TO USE, IT DOES NOT WORK</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">1</span><span class="p">,</span> <span class="n">val</span><span class="p">)</span> <span class="k">as</span> <span class="n">p10</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">val</span><span class="p">)</span> <span class="k">as</span> <span class="n">p50</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">9</span><span class="p">,</span> <span class="n">val</span><span class="p">)</span> <span class="k">as</span> <span class="n">p90</span>
<span class="k">FROM</span> <span class="n">foo</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

So, even though all of those functions could use the same approximation built up over all the rows, PostgreSQL has no way of knowing that. The two-step aggregation approach enables us to structure our calls so that PostgreSQL can optimize our code, and it enables developers to understand when things will be more expensive and when they won't. Multiple different aggregates with different inputs will be expensive, whereas multiple accessors to the same aggregate will be much less expensive.

Cleanly distinguishing between aggregate/accessor parameters

We also chose the two-step aggregate approach because some of our aggregates can take multiple parameters or options themselves, and their accessors can also take options:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">uddsketch</span><span class="p">(</span><span class="mi">1000</span><span class="p">,</span> <span class="mi">0</span><span class="p">.</span><span class="mi">001</span><span class="p">,</span> <span class="n">val</span><span class="p">))</span> <span class="k">as</span> <span class="n">median</span><span class="p">,</span><span class="c1">--1000 buckets, 0.001 target err</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">9</span><span class="p">,</span> <span class="n">uddsketch</span><span class="p">(</span><span class="mi">1000</span><span class="p">,</span> <span class="mi">0</span><span class="p">.</span><span class="mi">001</span><span class="p">,</span> <span class="n">val</span><span class="p">))</span> <span class="k">as</span> <span class="n">p90</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">uddsketch</span><span class="p">(</span><span class="mi">100</span><span class="p">,</span> <span class="mi">0</span><span class="p">.</span><span class="mi">01</span><span class="p">,</span> <span class="n">val</span><span class="p">))</span> <span class="k">as</span> <span class="n">less_accurate_median</span> <span class="c1">-- modify the terms for the aggregate get a new approximation</span>
<span class="k">FROM</span> <span class="n">foo</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

That’s an example of `uddsketch`, an advanced aggregation method for percentile approximation that can take its own parameters.

Imagine if the parameters were jumbled together in one aggregate:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="c1">-- NB: THIS IS AN EXAMPLE OF AN API WE DECIDED NOT TO USE, IT DOES NOT WORK</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="mi">1000</span><span class="p">,</span> <span class="mi">0</span><span class="p">.</span><span class="mi">001</span><span class="p">,</span> <span class="n">val</span><span class="p">)</span> <span class="k">as</span> <span class="n">median</span>
<span class="k">FROM</span> <span class="n">foo</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

It’d be pretty difficult to understand which argument is related to which part of the functionality.

Conversely, the two-step approach separates the arguments to the aggregate and the accessor very cleanly: the aggregate call sits in parentheses, nested inside the accessor’s arguments:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">uddsketch</span><span class="p">(</span><span class="mi">1000</span><span class="p">,</span> <span class="mi">0</span><span class="p">.</span><span class="mi">001</span><span class="p">,</span> <span class="n">val</span><span class="p">))</span> <span class="k">as</span> <span class="n">median</span>
<span class="k">FROM</span> <span class="n">foo</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

By making it clear which is which, users know that changing the inputs to the aggregate will produce more (costly) aggregate nodes, while changing the inputs to the accessor is much cheaper.

So, those are the first two reasons we expose the API - and what it allows developers to do as a result. The last two reasons involve continuous aggregates and how they relate to hyperfunctions, so first, a quick refresher on what they are.


Two-step aggregation + continuous aggregates in TimescaleDB<a name="4"></a>

TimescaleDB includes a feature called continuous aggregates, which are designed to make queries on very large datasets run faster. TimescaleDB continuous aggregates continuously and incrementally store the results of an aggregation query in the background, so when you run the query, only the data that has changed needs to be computed, not the entire dataset.

In our discussion of the combine function above, we covered how you could take the expensive work of computing the transition function over every row and split the rows over multiple parallel aggregates to speed up the calculation.

TimescaleDB continuous aggregates do something similar, except they spread the computation work over time rather than between parallel processes running simultaneously. The continuous aggregate computes the transition function over a subset of rows inserted some time in the past, stores the result, and then, at query time, we only need to compute over the raw data for a small section of recent time that we haven’t yet calculated.

When we designed TimescaleDB hyperfunctions, we wanted them to work well within continuous aggregates and even open new possibilities for users.

Let’s say I create a continuous aggregate from the simple table above to compute the sum, average, and percentile (the latter using a hyperfunction) in 15 minute increments:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">CREATE</span> <span class="n">MATERIALIZED</span> <span class="k">VIEW</span> <span class="n">foo_15_min_agg</span>
<span class="k">WITH</span> <span class="p">(</span><span class="n">timescaledb</span><span class="p">.</span><span class="n">continuous</span><span class="p">)</span>
<span class="k">AS</span> <span class="k">SELECT</span> <span class="n">id</span><span class="p">,</span>
<span class="n">time_bucket</span><span class="p">(</span><span class="s1">'15 min'</span><span class="p">::</span><span class="n">interval</span><span class="p">,</span> <span class="n">ts</span><span class="p">)</span> <span class="k">as</span> <span class="n">bucket</span><span class="p">,</span>
<span class="k">sum</span><span class="p">(</span><span class="n">val</span><span class="p">),</span>
<span class="k">avg</span><span class="p">(</span><span class="n">val</span><span class="p">),</span>
<span class="n">percentile_agg</span><span class="p">(</span><span class="n">val</span><span class="p">)</span>
<span class="k">FROM</span> <span class="n">foo</span>
<span class="k">GROUP</span> <span class="k">BY</span> <span class="n">id</span><span class="p">,</span> <span class="n">time_bucket</span><span class="p">(</span><span class="s1">'15 min'</span><span class="p">::</span><span class="n">interval</span><span class="p">,</span> <span class="n">ts</span><span class="p">);</span>
<span class="nv">`</span><span class="se">

And then what if I come back and I want to re-aggregate it to hours or days, rather than 15-minute buckets – or need to aggregate my data across all ids? Which aggregates can I do that for, and which can’t I?

Logically consistent rollups

One of the problems we wanted to solve with two-step aggregation was how to convey to the user when it is “okay” to re-aggregate and when it’s not. (By “okay,” I mean you would get the same result from the re-aggregated data as you would running the aggregate on the raw data directly.)

For instance:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span> <span class="k">sum</span><span class="p">(</span><span class="n">val</span><span class="p">)</span> <span class="k">FROM</span> <span class="n">tab</span><span class="p">;</span>
<span class="c1">-- is equivalent to:</span>
<span class="k">SELECT</span> <span class="k">sum</span><span class="p">(</span><span class="k">sum</span><span class="p">)</span>
<span class="k">FROM</span>
<span class="p">(</span><span class="k">SELECT</span> <span class="n">id</span><span class="p">,</span> <span class="k">sum</span><span class="p">(</span><span class="n">val</span><span class="p">)</span>
<span class="k">FROM</span> <span class="n">tab</span>
<span class="k">GROUP</span> <span class="k">BY</span> <span class="n">id</span><span class="p">)</span> <span class="n">s</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

But:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span> <span class="k">avg</span><span class="p">(</span><span class="n">val</span><span class="p">)</span> <span class="k">FROM</span> <span class="n">tab</span><span class="p">;</span>
<span class="c1">-- is NOT equivalent to:</span>
<span class="k">SELECT</span> <span class="k">avg</span><span class="p">(</span><span class="k">avg</span><span class="p">)</span>
<span class="k">FROM</span>
<span class="p">(</span><span class="k">SELECT</span> <span class="n">id</span><span class="p">,</span> <span class="k">avg</span><span class="p">(</span><span class="n">val</span><span class="p">)</span>
<span class="k">FROM</span> <span class="n">tab</span>
<span class="k">GROUP</span> <span class="k">BY</span> <span class="n">id</span><span class="p">)</span> <span class="n">s</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

Why is re-aggregation okay for `sum` but not for `avg`?

Technically, it’s logically consistent to re-aggregate when:

  • The aggregate returns the internal aggregate state. The internal aggregate state for `sum` is `(sum)`, whereas for average, it is `(sum, count)`.
  • The aggregate’s combine and transition functions are equivalent. For `sum()`, the states and the operations are the same: the transition function adds the incoming value to the state, and the combine function adds two states together (a sum of sums). Conversely, for `count()`, the states are the same, but the operations differ: the transition function increments the state for each incoming value, while the combine function adds two states together (a sum of counts). (A concrete example of combining states by hand follows this list.)
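
To make that concrete, here’s how you’d re-aggregate an average correctly by hand: carry both pieces of `avg()`’s internal state (the sum and the count) at the inner level and combine them at the outer level.

```SQL
-- Re-aggregating avg correctly means carrying sum and count separately,
-- then dividing the combined sum by the combined count.
SELECT sum(sum) / sum(count) AS overall_avg
FROM
    (SELECT id, sum(val), count(val)
    FROM tab
    GROUP BY id) s;
```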

But, you have to have in-depth (and sometimes rather arcane) knowledge about each aggregate’s internals to know which ones meet the above criteria – and therefore, which ones you can re-aggregate.

With the two-step aggregate approach, we can convey when it is logically consistent to re-aggregate by exposing our equivalent of the combine function when the aggregate allows it.

We call that function `rollup()`. `rollup()` takes multiple inputs from the aggregate and combines them into a single value.

All of our aggregates that can be combined have `rollup` functions that will combine the output of the aggregate from two different groups of rows. (Technically, `rollup()` is an aggregate function because it acts on multiple rows. For clarity, I’ll call them rollup functions to distinguish them from the base aggregates.) Then you can call the accessor on the combined output!

So, using the continuous aggregate we created earlier, getting a 1-day re-aggregation of our `percentile_agg` becomes as simple as:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span> <span class="n">id</span><span class="p">,</span>
<span class="n">time_bucket</span><span class="p">(</span><span class="s1">'1 day'</span><span class="p">::</span><span class="n">interval</span><span class="p">,</span> <span class="n">bucket</span><span class="p">)</span> <span class="k">as</span> <span class="n">bucket</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="k">rollup</span><span class="p">(</span><span class="n">percentile_agg</span><span class="p">))</span> <span class="k">as</span> <span class="n">median</span>
<span class="k">FROM</span> <span class="n">foo_15_min_agg</span>
<span class="k">GROUP</span> <span class="k">BY</span> <span class="n">id</span><span class="p">,</span> <span class="n">time_bucket</span><span class="p">(</span><span class="s1">'1 day'</span><span class="p">::</span><span class="n">interval</span><span class="p">,</span> <span class="n">bucket</span><span class="p">);</span>
<span class="nv">`</span><span class="se">

(We actually suggest that you create your continuous aggregates without calling the accessor function for this very reason. Then, you can just create views over top or put the accessor call in your query).
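
Here’s a sketch of that pattern, assuming the `foo_15_min_agg` continuous aggregate defined earlier: keep the raw `percentile_agg` state in the continuous aggregate and expose the convenient values through a plain view.

```SQL
-- A sketch: a regular view over the continuous aggregate that calls the
-- accessor, so the stored state stays reusable for other accessors later.
CREATE VIEW foo_15_min_median AS
SELECT id,
    bucket,
    approx_percentile(0.5, percentile_agg) AS median
FROM foo_15_min_agg;
```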

This brings us to our final reason.

Retrospective analysis using continuous aggregates

When we create a continuous aggregate, we’re defining a view of our data that we then could be stuck with for a very long time.

For example, we might have a data retention policy that deletes the underlying data after X time period. If we want to go back and re-calculate anything, it can be challenging, if not impossible, since we’ve “dropped” the data.

But, we understand that in the real world, you don’t always know what you’re going to need to analyze ahead of time.

Thus, we designed hyperfunctions to use the two-step aggregate approach so they would better integrate with continuous aggregates. As a result, users can store the aggregate state in the continuous aggregate view and change the accessor functions they apply later, without having to recalculate old states that might be difficult (or impossible) to reconstruct (because the data has been archived, deleted, etc.).

The two-step aggregation design also allows for much greater flexibility with continuous aggregates. For instance, let’s take a continuous aggregate where we do the aggregate part of the two-step aggregation like this:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">CREATE</span> <span class="n">MATERIALIZED</span> <span class="k">VIEW</span> <span class="n">foo_15_min_agg</span>
<span class="k">WITH</span> <span class="p">(</span><span class="n">timescaledb</span><span class="p">.</span><span class="n">continuous</span><span class="p">)</span>
<span class="k">AS</span> <span class="k">SELECT</span> <span class="n">id</span><span class="p">,</span>
<span class="n">time_bucket</span><span class="p">(</span><span class="s1">'15 min'</span><span class="p">::</span><span class="n">interval</span><span class="p">,</span> <span class="n">ts</span><span class="p">)</span> <span class="k">as</span> <span class="n">bucket</span><span class="p">,</span>
<span class="n">percentile_agg</span><span class="p">(</span><span class="n">val</span><span class="p">)</span>
<span class="k">FROM</span> <span class="n">foo</span>
<span class="k">GROUP</span> <span class="k">BY</span> <span class="n">id</span><span class="p">,</span> <span class="n">time_bucket</span><span class="p">(</span><span class="s1">'15 min'</span><span class="p">::</span><span class="n">interval</span><span class="p">,</span> <span class="n">ts</span><span class="p">);</span>
<span class="nv">`</span><span class="se">

When we first create the aggregate, we might only want to get the median:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">median</span>
<span class="k">FROM</span> <span class="n">foo_15_min_agg</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

But then, later, we decide we want to know the 95th percentile as well.

Luckily, we don’t have to modify the continuous aggregate; we just modify the parameters to the accessor function in our original query to return the data we want from the aggregate state:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">median</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">95</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">p95</span>
<span class="k">FROM</span> <span class="n">foo_15_min_agg</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

And then, if a year later, we want the 99th percentile as well, we can do that too:

</span><span class="nv">`</span><span class="k">SQL</span>
<span class="k">SELECT</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">5</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">median</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">95</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">p95</span><span class="p">,</span>
<span class="n">approx_percentile</span><span class="p">(</span><span class="mi">0</span><span class="p">.</span><span class="mi">99</span><span class="p">,</span> <span class="n">percentile_agg</span><span class="p">)</span> <span class="k">as</span> <span class="n">p99</span>
<span class="k">FROM</span> <span class="n">foo_15_min_agg</span><span class="p">;</span>
<span class="nv">`</span><span class="se">

That’s just scratching the surface. Ultimately, our goal is to provide a high level of developer productivity that enhances other PostgreSQL and TimescaleDB features, like aggregate deduplication and continuous aggregates.


An example of how the two-step aggregate design impacts hyperfunctions’ code<a name="5"></a>

To illustrate how the two-step aggregate design pattern impacts how we think about and code hyperfunctions, let’s look at the time-weighted average family of functions. (Our what time-weighted averages are and why you should care post provides a lot of context for this next bit, so if you haven’t read it, we recommend doing so. You can also skip this next bit for now.)

The equation for the time-weighted average is as follows:

time\_weighted\_average = \frac{area\_under\_curve}{\Delta T}

As we noted in the table above:

  • `time_weight()` is TimescaleDB hyperfunctions’ aggregate and corresponds to the transition function in PostgreSQL’s internal API.
  • `average()` is the accessor, which corresponds to the PostgreSQL final function.
  • `rollup()` for re-aggregation corresponds to the PostgreSQL combine function.

The `time_weight()` function returns an aggregate type that has to be usable by the other functions in the family.

In this case, we decided on a `TimeWeightSummary` type that is defined like so (in pseudocode):

```SQL
TimeWeightSummary = (w_sum, first_pt, last_pt)
```

`w_sum` is the weighted sum (another name for the area under the curve), and `first_pt` and `last_pt` are the first and last (time, value) pairs in the rows that feed into the `time_weight()` aggregate.

Here’s a graphic depiction of those elements, which builds on our how to derive a time-weighted average theoretical description:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yt3d7d06r1fajo7iiued.jpg" alt="A graph showing value on the y-axis and time on the x-axis. There are four points: open parens t 1 comma v 1 close parens, labeled first point to open parens t 4 comma v 4 close parens, labeled last point. The points are spaced unevenly in time on the graph. The area under the graph is shaded, and labeled w underscore sum. The time axis has a brace describing the total distance between the first and last points labeled Delta T. ">
<figcaption align = "center">Depiction of the values we store in the </span><span class="n">TimeWeightSummary</span><span class="nv"> representation. </figcaption>
</figure>
<p>

So, the `time_weight()` aggregate does all of the calculations as it receives each of the points in our graph and builds a weighted sum for the time period (ΔT) between the first and last points it “sees.” It then outputs the `TimeWeightSummary`.
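
As a rough, purely illustrative sketch of what that transition step might look like for the 'LOCF' weighting (pseudocode, not the actual implementation):

```SQL
-- Illustrative pseudocode only: fold one incoming (time, value) point into
-- a TimeWeightSummary using last-observation-carried-forward weighting.
func time_weight_transition(TimeWeightSummary tws, pt) -> TimeWeightSummary {
    if tws is NULL {
        return TimeWeightSummary(w_sum = 0, first_pt = pt, last_pt = pt);
    }
    -- LOCF: the previous value is assumed to hold until the new point arrives,
    -- so the new slice of area is the last value * the elapsed time.
    tws.w_sum = tws.w_sum + tws.last_pt.value * (pt.time - tws.last_pt.time);
    tws.last_pt = pt;
    return tws;
}
```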

The </span><span class="n">average</span><span class="p">()</span><span class="nv"> accessor function performs simple calculations to return the time-weighted average from the </span><span class="n">TimeWeightSummary</span><span class="nv"> (in pseudocode where </span><span class="n">pt</span><span class="p">.</span><span class="nb">time</span><span class="p">()</span><span class="nv"> returns the time from the point):

```SQL
func average(TimeWeightSummary tws)
-> float {
    delta_t = tws.last_pt.time - tws.first_pt.time;
    time_weighted_average = tws.w_sum / delta_t;
    return time_weighted_average;
}
```
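
As a quick worked example (the numbers are made up): if `time_weight()` saw its first point at t = 0 seconds and its last point at t = 100 seconds, and accumulated a weighted sum of 300 along the way, `average()` would return 3.0:

```SQL
-- made-up numbers, just tracing through average():
delta_t               = 100 - 0    -- 100 seconds
time_weighted_average = 300 / 100  -- 3.0
```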

But, as we built the `time_weight` hyperfunction, ensuring the `rollup()` function worked as expected was a little more difficult – and introduced constraints that impacted the design of our `TimeWeightSummary` data type.

To understand the rollup function, let’s use our graphical example and imagine the `time_weight()` function returns two `TimeWeightSummaries` from different regions of time, like so:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wdesd8okhaobc9lk1imk.jpg" alt="A similar graph to the previous, except that now there are two sets of shaded regions. The first is similar to the previous and is labeled with first sub 1 open parens t 1 comma v 1 close parens, last 1 open parens t 4 comma v 4 close parens , and w underscore sum 1. The second is similar, with points first 2 open parens t 5 comma v 4 close parens and last 2 open parens t 8 comma v 8 close parens and the label w underscore sum 2 on the shaded portion.">
<figcaption align = "center">What happens when we have multiple <code>TimeWeightSummaries</code> representing different regions of the graph.</figcaption>
</figure>
The `rollup()` function needs to take in and return the same `TimeWeightSummary` data type so that our `average()` accessor can understand it. (This mirrors how PostgreSQL’s combine function takes in two states from the transition function and then returns a single state for the final function to process.)

We also want the `rollup()` output to be the same as if we had computed `time_weight()` over all the underlying data: the output should be a `TimeWeightSummary` representing the full region.

The `TimeWeightSummary` we output should also account for the area in the gap between these two weighted-sum states:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/en9qxli39gr6khvcopkl.jpg" alt="A similar picture to the previous, with the area between the points open parens t 4 comma v 4 close parens aka last 1 and open parens t 5 comma v 5 close parens aka first 2, down to the time axis highlighted. This is called w underscore sum gap. ">
<figcaption align = "center">Mind the gap! (between one <code>TimeWeightSummary</code> and the next)</figcaption>
</figure>
The gap area is easy to get because we have the last_1 and first_2 points, and it’s the same as the `w_sum` we’d get by running the `time_weight()` aggregate on just those two points.

Thus, the overall `rollup()` function needs to do something like this (where `.w_sum` extracts the weighted sum from a `TimeWeightSummary`):

```SQL
func rollup(TimeWeightSummary tws1, TimeWeightSummary tws2)
-> TimeWeightSummary {
    w_sum_gap = time_weight(tws1.last_pt, tws2.first_pt).w_sum;
    w_sum_total = w_sum_gap + tws1.w_sum + tws2.w_sum;
    return TimeWeightSummary(w_sum_total, tws1.first_pt, tws2.last_pt);
}
```

Graphically, that means we’d end up with a single `TimeWeightSummary` representing the whole area:
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/czw7y3lgsot8q3sppaa2.jpg" alt="Similar to the previous graphs, except that now there is only one region that has been shaded, the combined area of the w underscore sum 1, w underscore sum 2, and w underscore sum gap has become one area, w underscore sum. Only the overall first open parens t 1 comma v 1 close parens and last open parens t 8 comma v 8 close parens points are shown.">
<figcaption align = "center">The combined <code>TimeWeightSummary</code>.</figcaption>
</figure>
So that’s how the two-step aggregate design approach ends up affecting the real-world implementation of our time-weighted average hyperfunctions. The above explanations are a bit condensed, but they should give you a more concrete look at how the `time_weight()` aggregate, `average()` accessor, and `rollup()` functions work.
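
And if you want to see what the `rollup()` piece looks like from the SQL side, here’s a sketch (the table and column names are hypothetical, and the calls follow the calling convention used at the start of this post): we aggregate once per hour, then roll the hourly `TimeWeightSummaries` up into a single daily time-weighted average, which should match what we’d get by running `time_weight()` over the whole day directly.

```SQL
-- Hypothetical table conditions(ts, value); names are illustrative only.
-- Step 1: one TimeWeightSummary per hour-long bucket.
-- Step 2: rollup() combines the hourly summaries; average() extracts the result.
SELECT average(rollup(tw)) AS daily_time_weighted_average
FROM (
    SELECT time_bucket('1 hour', ts) AS bucket,
           time_weight('LOCF', value) AS tw
    FROM conditions
    WHERE ts >= '2021-01-01' AND ts < '2021-01-02'
    GROUP BY bucket
) hourly;
```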


Summing it up<a name="6"></a>

Now that you’ve gotten a tour of the PostgreSQL aggregate API, seen how it inspired the two-step aggregate API in TimescaleDB hyperfunctions, and walked through a few examples of how this works in practice, we hope you'll try it out yourself and tell us what you think :).

If you’d like to get started with hyperfunctions right away, spin up a fully managed TimescaleDB service and try it for free. Hyperfunctions are pre-loaded on each new database service on Timescale Forge, so after you create a new service, you’re all set to use them!

If you prefer to manage your own database instances, you can download and install the timescaledb_toolkit extension on GitHub, after which you’ll be able to use `time_weight` and all other hyperfunctions.

If you have questions or comments on this blog post, we’ve started a discussion on our GitHub page, and we’d love to hear from you. (And, if you like what you see, GitHub ⭐ are always welcome and appreciated too!)

We love building in public, and you can view our upcoming roadmap on GitHub for a list of proposed features, features we’re currently implementing, and features available to use today. For reference, the two-step aggregate approach isn’t just used in the stabilized hyperfunctions covered here; it’s also used in many of our experimental features.

These features will be stabilized soon, but we’d love your feedback while the APIs are still evolving. What would make them more intuitive? Easier to use? Open an issue or start a discussion!

