Kasey Speakman

Breaking Postgres with Too Many Tables

In a previous post, I shared a strategy for a multi-tenant event store: break up the data by partitioning the events by tenant. As always, the devil is in the details. One thing I later thought to consider was how many tenants this would support. Since each partition is a separate table, I needed to investigate how many tables can realistically be created.
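For context, here is a rough sketch of that layout using Postgres declarative partitioning. The table and column names are made up for illustration; whether you use declarative partitions or plain per-tenant tables, each tenant ends up as its own table.

-- Illustrative sketch only: one partition per tenant, so one table
-- (and at least one file on disk) per tenant.
CREATE TABLE events (
    tenant_id   uuid        NOT NULL,
    sequence_no bigint      NOT NULL,
    event_type  text        NOT NULL,
    data        jsonb       NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
) PARTITION BY LIST (tenant_id);

CREATE TABLE events_tenant_a PARTITION OF events
    FOR VALUES IN ('11111111-1111-1111-1111-111111111111');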

Table Count Limit

Technically, Postgres does not have a limit on the number of tables. However, each table is backed by one or more files on the OS filesystem. And the OS probably has some opinion on how many files is "too many". In particular, Linux has an open file limit. You can view it with the command below (on my distro).

cat /proc/sys/fs/file-max

When I checked a standard Amazon Linux EC2 t2.micro instance, this returned 96972. Some quick searching told me the number is calculated based on the amount of RAM. So then I thought to check an RDS Postgres instance. I could not get shell access, so the only way to test the limit was to create tables until something broke. When I tested on an AWS RDS Postgres 11 Preview instance (also a t2.micro), I was able to create 1.3 million tables before it crashed.
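The test itself was nothing fancy: a loop that creates empty tables until the instance falls over. Below is a sketch of that kind of loop, not the exact script I ran. It commits in batches (which requires Postgres 11 or later inside a DO block) so that locks do not pile up in one giant transaction. Needless to say, do not run this against an instance you care about.

-- Sketch only: create empty tables until something breaks.
DO $$
BEGIN
    FOR i IN 1..1500000 LOOP
        EXECUTE format('CREATE TABLE scratch_%s (id int)', i);
        IF i % 1000 = 0 THEN
            COMMIT;  -- keep each transaction's lock count manageable
        END IF;
    END LOOP;
END $$;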

As pointed out in the comments below (thanks Vincent Milum Jr!), the limit I probably ran into in this experiment was the inode limit. This limit is based on the size of the file system, rather than the amount of RAM.

See this article for more about inode limits.

Reaching the limit

Well before I reached the crashing limit -- in the tens of thousands of tables -- pgAdmin became quite slow to load and refresh. To support a database with this many tables, I would probably need to use command-line tools, since GUI tools tend to preload a lot of information.
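For example, simply counting the tables from psql stays cheap even when listing them is painful. This is just a catalog query, not something specific to this experiment.

-- Count ordinary tables (system catalogs included) without listing them.
SELECT count(*) FROM pg_class WHERE relkind = 'r';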

When I reached the table count limit, VERY BAD THINGS happened.

The final CREATE TABLE operation received this error (both on the client and in the logs).

53100: could not create file "<file path>": "No space left on device"

The RDS service also noticed the problem, and flagged the DB instance status as storage-full in the console. I don't know about you, but this error normally would have made me think the storage drive was full. But that wasn't the case. There was about 13GB of free space. I had used about 4GB just to create empty tables!
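If you want to watch that overhead grow, Postgres can report it directly. Most of that space presumably lives in the system catalogs (pg_class, pg_attribute, and friends), which the database size figure includes.

-- Total size of the current database, catalogs included,
-- even when every user table is empty.
SELECT pg_size_pretty(pg_database_size(current_database()));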

The really bad part was that the script I wrote to remove the tables could no longer connect to the DB instance. Apparently each connection creates temporary tables, and since no new tables could be created, no new connections could be made either. Some internal Postgres processes also use temporary tables. For example, I saw in the logs that statistics calculations were failing with the same error.

Fortunately, I still had a connection opened in pgAdmin. So I started issuing DROP TABLE statements there. I had to drop about 2000 tables before my cleanup script could connect again. If that pgAdmin connection had not already been open, the database would have been inoperable.
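The cleanup itself was nothing more than DROP TABLE statements. Something along these lines works once a connection is available; this is a sketch that assumes the throwaway tables share a name prefix, and for millions of tables you would want to commit in batches just like the creation loop.

-- Sketch only: drop every table matching the throwaway prefix.
DO $$
DECLARE
    t text;
BEGIN
    FOR t IN
        SELECT tablename FROM pg_tables
        WHERE schemaname = 'public' AND tablename LIKE 'scratch_%'
    LOOP
        EXECUTE format('DROP TABLE %I', t);
    END LOOP;
END $$;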

Moral of the story: Going over this limit breaks your DB instance.

Conclusion

In this post, I wanted to make special note of the table count limit. Many other limits you might approach degrade performance gradually (CPU, memory) or are easy to monitor and fix (storage space). But the table count limit seriously breaks your database. So keep this limit in mind as you think through your database organization strategy.

Top comments (5)

Vincent Milum Jr

It sounds like you may have hit the file system inode limit instead of storage limit. Both usually give the same error message. I've seen this error on badly configured PHP instances that store session data to disk, then never run the garbage collection script. TONS of nearly empty files, but hundreds of GB left on disk. Failed due to overwhelming the inode table in the file system.

Zohaib Syed

Thank you for this article. I was in the process of creating roughly 50,000 tables and wanted to double check if it would break anything. The 4GB of overhead per million tables and 100,000 table limit for EC2.micro gives me a much clearer idea about the viability of my setup when it comes time to deploy.

baldrich-randori

You should be able to get past the file limit by creating separate tablespaces: postgresql.org/docs/10/manage-ag-t.... This allows you to move things to separate folders or volumes to improve performance and get around pesky limits like this.

Vincent Milum Jr

Also, this article may be of interest to you: dzone.com/articles/40-million-tabl...

Kasey Speakman • Edited

Wow, I thought 4GB was a lot of overhead for 1 million tables. But that article says MySQL 8 has 4 TB of overhead for 40 million tables.

Thanks for the link.