Kaylan Stock for HarperDB

LMDB Deep Dive: Interview with a CTO on using the Open Source Key Value Store

Recently, the HarperDB team invited the folks behind AlaSQL, a popular client-side in-memory SQL database, to a virtual Q&A. It was interesting to learn more about AlaSQL and how HarperDB uses it on the backend. This got me thinking about one of the other tools we use within our tech foundation, LMDB. While we have not yet held a similar event with the creators of LMDB (hopefully in the future!), I was able to catch up with our CTO, Kyle Bernhardy, to learn more about how HarperDB incorporates LMDB and what it's like to work with the open source key value store. Kyle led the implementation of LMDB in HarperDB, so it was highly insightful to hear about his experience. You can listen to the full 30 minute interview here.

Kaylan Stock: Well, thank you, Kyle, for doing this interview with me today. I'm excited to learn more about how we implemented LMDB and all that good stuff.

Kyle Bernhardy: I am just here to share what I know.

Kaylan: Well, let's dive in. So my first question. Pretty basic one. What is LMDB? For people that might not know.

Kyle: So LMDB is a really fast, really lightweight key value store. And the one differentiator with LMDB is that it's an embedded datastore. That means it embeds in your code. It doesn't run as a separate server; it actually acts as a library, and you just call functions on that library to execute the operations that you need. So that keeps it really lightweight, because there's not some extra resource running on the side. It actually just runs in line with our code.
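
For readers who haven't worked with an embedded store before, here is a minimal sketch of what "calling functions on a library" looks like from Node. It assumes the node-lmdb package that comes up later in the conversation; the path, database name, and key/value below are placeholders.

```js
// Minimal sketch: LMDB runs in-process as a library, not as a separate server.
// Assumes the node-lmdb package; the path and names below are placeholders,
// and the data directory is expected to already exist.
const lmdb = require('node-lmdb');

const env = new lmdb.Env();
env.open({ path: './data', maxDbs: 10 }); // open the environment (the on-disk store)

const dbi = env.openDbi({ name: 'example', create: true }); // a named key value store

const txn = env.beginTxn();
txn.putString(dbi, 'hello', 'world');      // write a key
console.log(txn.getString(dbi, 'hello'));  // read it back
txn.commit();

dbi.close();
env.close();
```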

Kaylan: Awesome, and we love lightweight and compact here at HarperDB so it's a good fit.

Kyle: Simple as possible.

Kaylan: Yes. All of that. So how does HarperDB use LMDB on the backend?

Kyle: Sure. So LMDB is our new data storage mechanism. When we started the company over three years ago, our initial data storage mechanism was something that we had created a patent around, and it was based on the file system. So when you inserted, let's say, a record or an object, we would break it all apart by attributes and then store each element separately as a file.

And that had some really good benefits, but it also had some really big tradeoffs. One of the big tradeoffs was searches. There are also some issues on the file system with things called inodes (index nodes). And at scale, the old data mechanism sort of fell over on itself. So LMDB is our replacement data storage mechanism, and it allows us to still do very similar data modeling, without breaking records / objects apart, with auto indexing and all that.

Kaylan: So that's awesome. And doesn't LMDB help with performance too or is that more the AlaSQL side of it?

Kyle: Well, we did do some performance improvements on the SQL side. Sam Johnson, one of our engineers, had done a lot of work on our SQL engine to improve that. But the lower level below that is the data itself. And we got significant performance improvements in CPU utilization, memory utilization, disk utilization, really across the board. We got significant improvements just in how we use the hardware of your computer, server, or whatever it is that you're running.

Kaylan: Yeah. I think everyone here at HarperDB is a fan of LMDB for sure. It's a really cool [product]... it's open source, isn't it?

Kyle: It is. And we're using a Node library, because LMDB is written in C and HarperDB is written in Node. The really nice thing with Node is you can import C libraries as native Node modules. And another open source contributor had created a great Node library for LMDB. That was a big game changer, and the implementation of it was very simple on the Node library side. So we're essentially using two open source libraries; it's sort of like the bigger fish eating the smaller.


Kaylan: That's a good analogy. So obviously, from what we've already talked about, it's been easy. But can you tell me more about the implementation process, what that was like? Was there anything that you disliked about implementing LMDB?

Kyle: Yeah, if you don't mind, I think it also goes in line with why we selected LMDB, because it really is the implementation. These are all very tied together, so I'll probably jump to that. Like I said, this is something that I'd been thinking about for almost two years. And it really came down to a couple of POCs that we were working on, and the feedback that we got from the POCs was [they] really enjoyed the product, but the read performance was not what they needed.

Around the same time we were also getting some feedback from investors and other people in the tech community, saying databases like MongoDB use multiple data stores. MySQL has different ways of using data stores. It's pretty common that the underlying place that you store the data is ultimately decoupled from the database itself. And so it's kind of smart to swap things in and out and give options because depending on use cases, someone may want to use a different data storage mechanism.

I spent the end of 2019 evaluating different key value stores with all the things that we'd learned, and also making sure that whatever new technology we implemented didn't break our core mission: simplicity, a dynamic data model, a single data store with SQL/NoSQL… All the things that we always talked about were very important for making it very easy for developers to, you know, put data in, get data out, and do complex querying, simple querying, analytics. So I evaluated a lot of different products very quickly, and while a lot of them washed out for various reasons, LMDB held up through the evaluation process.

There was a great Node module built for it. I could build a dynamic schema around it. It was just very lightweight, and it didn't put a lot of constraints on us in order to implement. So once I did a quick "bake off", I started digging in through the month of December: could we mimic HarperDB's existing data model, so not implementing it into HarperDB, but creating a very similar data model as if it were HarperDB, and then running workloads through that sample?

Then I did a series of tests running these high scale workloads, doing inserts and different SQL queries and searches and all these things, comparing our file system data model to LMDB. And [for] some workloads LMDB was, I think, six hundred times faster; that was on bulk CSV loads. Even for queries it was, oh man, I think around 60 times faster or something like that. On all workloads it was better than what we currently had. So I did a write up and disseminated that to the team. Because it was also a big level of effort, and at the exact same time we determined that we also needed to roll out a cloud product.

And so Stephen and I sat down and discussed what resources were needed, and ultimately decided I would work on this solo so the rest of the team could focus on HarperDB Cloud. The approach I took to the implementation was a modular one. Two of our engineers, Sam Johnson and David Cockerill, when they were working on a failed co-development project earlier in 2019 with another company's key value store, had created a mechanism for us to decouple the data storage from our core logic. So there was already a pattern in place; they saved me probably months of work. So I started with this modular design, creating the core functions that just do the create, read, update and delete operations.

And also managing the tables, which LMDB calls environments. So when you create a table, we need to create this environment, and then when you create an attribute, we need to create a separate key value store inside there. How do we track all that? So there was a lot of wiring that was specific to us so that it would then bubble up and make sense for HarperDB. Creating these foundational modules and functions is where I started, and then building unit tests on top of that. So for everything I built, I made sure there was testing for all the edge cases, because then I would test something and realize, oh, that doesn't work.
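
As an illustration of the wiring Kyle describes, here is a rough sketch, again assuming node-lmdb: one environment per table, and one named key value store (a dbi) opened per attribute. The table path and attribute names here are invented for the example, not HarperDB's actual code.

```js
// Sketch of the pattern described above: one LMDB environment per table,
// one named key value store (dbi) per attribute. Names are hypothetical.
const lmdb = require('node-lmdb');

function openTable(path, attributes) {
  const env = new lmdb.Env();
  env.open({ path, maxDbs: attributes.length + 1 });

  // Open a dbi per attribute up front so they can be tracked alongside the table.
  const dbis = {};
  for (const attribute of attributes) {
    dbis[attribute] = env.openDbi({ name: attribute, create: true });
  }
  return { env, dbis };
}

const dogTable = openTable('./schema/dev/dog', ['id', 'name', 'breed']);
```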

The most complicated thing I had to spend time on was search, which is no surprise. But, you know, it's just adding layers on top of layers and doing this modular design. It took three months, and then the testing took about another month, and then it basically lined up. It was hard, and by that time the managed service was ready to deploy. So the timing all worked out really well, and Kaylan, you project managed me on that.

Kaylan: Of course! I'm just thinking back to that implementation process, when at one point you said you had a hundred errors when you ran a test, and then you fixed, like, one thing and it just fixed [everything]... that just was crazy to me.

Kyle: Yeah, I can't remember what I fixed there... it's all a blur now. As far as implementing LMDB itself, it was very easy. Overall it was more like I had all my requirements laid out and I was just cranking through them. I was like, I know what I need to accomplish. I know what I need to do. It's just the doing of it and getting through that process.

Kaylan: Yeah, definitely. I think it speaks a lot to how easy it was to implement and use that you did it alone. And while you were in it, it felt like a long time, but three months to me just doesn't seem like that long of a time to completely implement this new tool. So, yeah, I mean, it's awesome.

Kyle: And to give us that level of performance improvement. Yeah, it was huge. And you're right, it is a very easy product to work with, because, yeah, if it was complicated, I might still be working on it.


Kaylan: That just doesn't fit with our style here at HarperDB. And so I guess you may have already answered this in a sense. What's your favorite feature or aspect of LMDB?

Kyle: I have to pick just one?

Kaylan: OK. You could give me five. Yeah. Give me your top five or whatever.

Kyle: Yeah. I think overall, from an architectural perspective of the product, what I really love about what they did in the implementation is they have something called a memory map. And so what that means is when you go to access or insert data, it actually maps the byte address of an entry in that file into memory, and so it acts as if the data I'm trying to fetch is in memory. So the very first call to get that item of data is only as slow as pulling it off of your SSD, but on the second call, the byte addresses are already cached and mapped in memory. And so it's acting as if it's an in-memory database.

So it has the speed of that, but it has the persistence of an on disk database, and we're getting the benefits of both. I've not seen that implementation in any other key value store. It creates massive efficiencies. And it also aligns with how, when we started HarperDB, we wanted to leverage the file system. That's exactly what Howard Chu (the creator of LMDB) and [LMDB's] other engineers have done: they're leveraging how file systems work and how virtual memory works on operating systems, using existing technologies. It's a really clever way to solve really complex problems. I think that's overall my favorite thing. It's just super smart and efficient about how it accesses data.

I think the other thing, too, is they're using a B+ tree, which creates really efficient searches, rather than a log structured merge tree, or LSM tree, which is used in other implementations like LevelDB. Also, LMDB is natively ACID, meaning in the database world it's Atomic, Consistent, Isolated and Durable, and that's native to the datastore. So we're able to lock a transaction, and anything that we're doing during that transaction does not impede the readers. So there are no isolation concerns, and the data doesn't show up until we actually commit it. And if something fails inside that transaction, it just rolls back; the readers never see it. So there's this complete division between the readers and the writers. They don't impact each other.
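
To make that ACID point concrete, here is a small sketch, again assuming node-lmdb: the writes inside a transaction become visible together on commit, roll back on abort, and read-only transactions are never blocked behind the writer. The environment, store, and keys are placeholders.

```js
// Sketch of atomic writes vs. non-blocking reads. Assumes node-lmdb; names are placeholders.
const lmdb = require('node-lmdb');
const env = new lmdb.Env();
env.open({ path: './data', maxDbs: 10 });
const dbi = env.openDbi({ name: 'dog', create: true });

const txn = env.beginTxn();
try {
  txn.putString(dbi, 'dog:1', JSON.stringify({ name: 'Penny' }));
  txn.putString(dbi, 'dog:2', JSON.stringify({ name: 'Harper' }));
  txn.commit();   // both writes become visible together
} catch (err) {
  txn.abort();    // on failure, readers never see either write
  throw err;
}

// Readers use their own read-only transactions and are not blocked by writers.
const readTxn = env.beginTxn({ readOnly: true });
console.log(readTxn.getString(dbi, 'dog:1'));
readTxn.abort();  // read-only transactions are simply discarded when done
```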


Kaylan: Yes, definitely. I think you've kind of already touched on what your least favorite thing was about working with the tool. Was there anything you wanted to add?

Kyle: As far as specifics of LMDB itself, you know, nothing really comes to mind. I know there were things I struggled with, but off the top of my head right now I have no major complaints. A slight hurdle I had to figure out was handling various data types. I ended up leveraging the binary data type, as it allows for all kinds of data types.
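
For illustration, the workaround he mentions might look something like this using node-lmdb's binary put/get calls: serialize whatever value you have into a Buffer and store that. JSON is used here purely as an example encoding, not necessarily what HarperDB does internally.

```js
// Sketch of storing mixed value types as binary. Assumes node-lmdb's
// putBinary/getBinary; JSON is only an example encoding.
const lmdb = require('node-lmdb');
const env = new lmdb.Env();
env.open({ path: './data', maxDbs: 10 });
const dbi = env.openDbi({ name: 'dog', create: true });

function putValue(txn, db, key, value) {
  txn.putBinary(db, key, Buffer.from(JSON.stringify(value)));
}

function getValue(txn, db, key) {
  const buf = txn.getBinary(db, key);
  return buf === null ? undefined : JSON.parse(buf.toString());
}

const txn = env.beginTxn();
putValue(txn, dbi, 'dog:3', { id: 3, weight_lbs: 35.5, adopted: true });
console.log(getValue(txn, dbi, 'dog:3'));
txn.commit();
```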

Kaylan: Do you have any tips for people that are looking into using LMDB or incorporating it into their product or project?

Kyle: Yes. Yeah, I think, you know, just go through a good process of understanding what you're trying to achieve up front. I mean, that's more of a basic design principle, but specific to LMDB, you know, there are some quirks. Before you begin a transaction, make sure you initialize all of your key value stores, because if you try to do it inside the transaction, it'll blow up.

So there are some little tricks to how you need to initialize things. I'm just trying to think of some other gotchas. Yeah, opening and closing environments: if you do them in the wrong order, you'll end up in a weird state with your data, or your process will hang. I experienced that one time when I was initially vetting the product; I almost gave up because all of a sudden it just started hanging.
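
Here is a sketch of that ordering gotcha, under the same node-lmdb assumption: open the environment and all of its key value stores first, then begin transactions against them, and close things in the reverse order when you're done.

```js
// Sketch of the initialization ordering described above. Assumes node-lmdb.
const lmdb = require('node-lmdb');
const env = new lmdb.Env();
env.open({ path: './data', maxDbs: 10 });

// Initialize every named key value store before starting transactions...
const dbi = env.openDbi({ name: 'dog', create: true });

// ...then transact against the already-open stores.
const txn = env.beginTxn();
txn.putString(dbi, 'dog:1', JSON.stringify({ name: 'Penny' }));
txn.commit();

// Close in reverse order of opening to avoid hangs or a weird on-disk state.
dbi.close();
env.close();
```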

Actually, to go back, another [favorite] aspect of LMDB for me is that one of the big communities using LMDB is the data science community in Python. There's a big community that uses it, and there are a lot of Stack Overflow articles and posts that I could access to understand how to use LMDB. So while I'm using it in Node, it's still the same tool; only the language is different, but the way you interact with it is all the same. So I was able to get help online without having to reach out to the actual development team.

Kaylan: Yeah, that's nice. How long has LMDB been around?

Kyle: Its initial release was in 2011, so it's been around for nine years. That was the other reason for choosing it: it's a well established project. It's been around for a long time, and nine years is a long time to work out kinks and bugs and understand different architectures.

Kaylan: Yeah, definitely. And maybe you don't have anything you would change, because I know you've been speaking so highly of it. But if you could change anything about HarperDB's implementation of LMDB, what would it be?

Kyle: I mean, right now, probably nothing. If you talk to me in, like, six months... but for now, I feel like it's really solid. You know, users are hitting it through our managed service and by downloading the product directly, and we've not had any issues with data writes or data reads, even at a very low level. So the implementation now feels really solid. The thing is, I think any issues would not be with LMDB itself; they would just be with what we did on top of it. I don't see any right now. It's more about what I want to do with it.

Kaylan: Yeah. And on that note, that's like perfect leading into my final question. What do you want to do in the future with LMDB and HarperDB? Where do you see that going?

Kyle: Near term, we are leveraging LMDB to store transactions. This will allow us to show the history and audit trail of your data by time, user, and record id. We will also use this as a replacement for our existing clustering catchup data store, which is currently an append only file log. Longer term, I want to allow users to predefine data types for specific attributes; the key benefits will be enabling constraints on data, which enhances data integrity as well as improving search performance.

Kaylan: Very cool. It sounds like you have a long list of things you want to get done with LMDB, so that's awesome. And that actually was my last question. It's just cool to, you know... I've watched you implement it, but it's kind of cool to just talk about the process and, like, how you came to [choose] LMDB. It's very interesting.

Kyle: The more I researched, the more I read, and the more use cases I found, the more confident I felt about the choice we made for this to be our default underlying datastore. It's a great product. I hope someday we get to talk to Howard Chu, who created it, because he is super smart. I think [it would be] a cool conversation with him.

Kaylan: Yeah, I think I would love to send this to him. I'm sure he'd be stoked to hear all of your feedback. And it would definitely be cool to do a showcase similar to the one we did with the AlaSQL team. So yeah, that's definitely something we should look into and maybe get on the books, depending on if he's open to it.

Kyle: Yeah, yeah. I hope so too. I think he lives in Colorado.

Kaylan: Really? That's cool.

Kyle: I think he might be on the Western Slope, but I'm not totally sure.

Kaylan: That would be super cool if he was living right next door to us.

Kyle: He's just down the street.

Kaylan: All right, Kyle. Well, thank you so much for your time. I am excited to write up some info on this awesome interview, and I'll definitely share the recording out. And yeah, I appreciate it.

Kyle: Thank you. This is great. Thanks so much.

The full 30 minute interview is available to listen to here.

Top comments (1)

Daniel Ziltener

Funny that I see LMDB here today, because also just today I heard that someone ported Datalog over to be used with LMDB (it's called Datalevin).