Serverless Chats
Episode #71: Serverless Privacy & Compliance with Mark Nunnikhoven (PART 1)
About Mark Nunnikhoven
Mark Nunnikhoven explores the impact of technology on individuals, organizations, and communities through the lens of privacy and security. Asking the question, "How can we better protect our information?" Mark studies the world of cybercrime to better understand the risks and threats to our digital world. As the Vice President of Cloud Research at Trend Micro, a longtime Amazon Web Services Advanced Technology Partner and provider of security tools for the AWS Cloud, Mark uses that knowledge to help organizations around the world modernize their security practices by taking advantage of the power of the AWS Cloud. With a strong focus on automation, he helps bridge the gap between DevOps and traditional security through his writing, speaking, teaching, and by engaging with the AWS community.
Twitter: https://twitter.com/marknca
Personal website: https://markn.ca/
Trend Micro website: https://www.trendmicro.com/
Watch this episode on YouTube: https://youtu.be/aPg7WE3Q3SQ
Transcript:
Jeremy: Hi, everyone. I'm Jeremy Daly, and this is Serverless Chats. Today I am speaking with Mark Nunnikhoven. Hey, Mark. Thanks for joining me.
Mark: Thanks for having me, Jeremy.
Jeremy: So you are the vice president of cloud research at Trend Micro. So why don't you tell listeners a little bit about your background and what Trend Micro is all about?
Mark: Yeah, so Trend Micro is a global cybersecurity provider. We make products for consumers all the way through to massive enterprises. And I work in our research wing. We have a really large research component, about 1,400 researchers in the company, which is a lot of fun, because we get to dive into the minutiae of anything and everything related to cybersecurity, from the latest cybercrime scam to where I focus, which is the cloud. So a lot of what I'm looking at is how organizations are adapting to the new reality of things like the shared responsibility model, keeping pace with cloud service providers, adjusting to DevOps philosophies, that kind of thing, which is a lot of fun.
And for me, I come from a very traditional security background, if there is such a thing. I've been at Trend for a little over eight years. Before that, I was with the Canadian federal government for a decade, doing all sorts of different security work, a lot of nation-state attacks and defense, things like that. And my educational background is actually in forensic investigation. So that nerd in the lab on your favorite crime drama, when they come up with the burned-out hard drives and are like, "Fix this," and somehow they do? It's all BS, but that's technically what I do.
Jeremy: Very cool. All right. So I have wanted you on the show for a very long time, because I've been following the stuff that you've been doing. I love the videos that you do, the blogs that you write. You're just out there. And I know you're on the edge of the serverless space, I know you do a lot of stuff in the cloud as well, but you're obviously into serverless too. And just recently I came across this impact assessment video series that you're doing. I don't know if it's a regular series or whatever, but it was really good. And you were talking about Fortnite and Apple, and I want to get into that. But what really made me think about things a little more deeply, beyond the surface level of billionaires arguing with billionaires, is this idea of privacy and how important our online privacy is. And I thought it'd be really interesting to talk about where serverless and privacy align, since serverless is in the cloud and all this stuff is being shared. So let's start. First of all, why is privacy, or online privacy, so important?
Mark: Yeah. That's a really broad and great question. So yeah, this new video series I'm doing, Impact Assessment, is going to be regular. I was doing a livestream called "Mornings with Mark" for the last few years; I did, I think, like 200 episodes where it was mainly talking about the cybersecurity issues of the day, and a lot of those are privacy issues. And where I wanted to go with this new series was just a little broader audience, which is why Apple and Fortnite and the Twitter hack and stuff like that are coming up. Because I think privacy is a really important aspect, and it mirrors security. You can't have one without the other. And it's directly related to the audience, to people who are building in the serverless space or in any space.
But privacy, a traditional definition of privacy, is really your right as a person to not be observed, essentially to be left alone and to have control over your data and your well-being. And when you go into the digital world, it's infinitely more complicated than the physical world, right? You can lock yourself away in a room in the real world and be relatively confident that nobody is invading that space, that you have control over that space. So if you want to just sit there and veg out, if you want to read a book, that's an activity just amongst yourself, right? When you come to the digital world, everything we do leaves a trail somewhere. There are tons of potential exposures. You as a user don't really have a ton of control over your data.
And one of the things that I wanted to do with this video series, and with a bunch of my other work, was just to enlighten people, to help expose this so that they're aware. Because one of the challenges I run into on the security side of what I do, and it directly relates to the privacy side, is that people assume there are correct decisions. And really, the only incorrect decision is one that you are unaware that you're making. So you could make the argument that it's okay that you're tracked everywhere on the internet, and the trade-off you get for the free services may be correct, but if you're unaware that that is the trade-off, I think that's the problem. So that's the intention behind this video series: to look at privacy issues, to look at some security issues, to help people make a conscious decision instead of just being pulled along for the ride.
Jeremy: Right. Yeah, no, and I think that's probably something that a lot of people miss. People say, "Well, I'll sign up for Facebook, and I will share every photo, every place that I visit, all my friends, all my likes, all my dislikes." And then they say, "Oh, well, whatever. It's free." And they don't realize that they're the product, and most of that is because they are giving up so much of their privacy.
And it's actually funny. This just happened the other day to me, and I didn't even realize. I knew it was coming out, but Chrome just released a new update that blocks third-party cookies if they aren't set up correctly... I think you have to have the "Secure" attribute on and some of these other things. So no user is going to have any idea what that actually means. But what happened for something we were doing is, we were loading a third-party cookie behind the scenes, and all of a sudden that stopped working. And so the whole flow of this module, this modal pop-up thing, completely broke because of that extra bit of security. And I remember way back in the days of early web development dealing with IE5 and IE6 and the browser wars, like what works on this browser and what works on that browser. Now privacy seems to be the new browser war.
But anyway, so that's one thing. But let's go to this idea of the Fortnite and Apple thing, because I have two kids, two daughters. They've played Fortnite more this summer than I think... I don't know how anybody could play Fortnite more than that. But they love it. And then I told them the other day, because you and I were talking, I saw your assessment video about it not being available on iOS because of the whole App Store thing and all this kind of stuff. But why is it a good thing, I guess? And maybe we can talk more about Fortnite. I mean, I'm not really into it. I know you are, but I'm not really into it. But maybe we can talk more about why that review process, why that purchase process through Google Play or through the App Store, why is that important to your security and to your privacy?
Mark: Yeah, and I thought this was really interesting. So I got into Fortnite a couple of years ago when I did a piece on it for my regular radio column here in Canada. And I thought it was interesting because it's a microtransaction-model game, so it's always taking a lot of money from people, but not to win the game. It's purely cosmetic. And in general, especially as a parent myself, I thought that was a really positive thing, because it wasn't like a bunch of these games where you need to pay to actually have a realistic chance at winning. The only thing you're paying for in Fortnite is to make things look different. There are no performance differences, right? And then there was this great Saturday Night Live sketch a couple of years back on Fortnite, where the character Adam Driver was playing was solely there to learn the game to be a better stepfather, to show off to the kids. And I always think, "That's me," just trying to be cool to the kids. But I do play regularly.
I thought it was interesting, you know, Fortnite being pulled up in this drama, because most of the drama between Epic and Apple, and Google somewhat, right now is related to the business side. The Apple policy... and Google's is basically the same, but we'll just use Apple because it's more prominent right now... the policy basically says, as a condition of being in the App Store, you need to follow a whole bunch of rules. And the rule that Epic is calling out is the one around transactions, and it says, basically, if you're taking money through the App Store, so directly through the app, Apple gets a 30% cut. That's their fee as a middleman for bringing you customers. And as a part of that, Apple will facilitate the transaction. So Apple users are well familiar with this, and for Android users it's similar. But that's why you can use Face ID to authorize a transaction through Apple Pay, and you don't actually have to enter a new password. You don't have to give them your credit card information. All of that stuff is handled by Apple as a proxy for those businesses.
And so Epic, they make north of $300 million a month from Fortnite. And they said, "You know what? The 30% cut of the chunk we make from mobile, which is north of $100 million, is too much." So they're contesting that, and they actually have plans, and in their legal filings they're saying, "We're not going for the money. We want the right to be our own app store." So there's a really interesting business case there, and there are some really petty low blows being traded, which is fascinating and fun to watch from the outside.
But I did a video in Impact Assessment around, what do we actually get from a security and privacy perspective? Because everybody is saying, "Oh, 30% is a huge amount," even though it's not uncommon in the retail space or in other business transactions. But there's a lot of stuff that goes on behind the scenes that's really beneficial to us. So when you submit as a developer, Apple makes sure that there's no obvious malware... though this week there was a case where they actually approved malware, which is one miss in about eight years of the App Store, which is not bad. They look for malware. They look for the use of undocumented APIs, which could create vulnerabilities. And they look at your use of personal data, which is what I really dug into: they have restrictions around what developers can do with your data, how they can track you, what they have to ask permission for.
And that actually goes for your transactions as well, because a lot of stuff happens behind the scenes that we don't even think about. When you go to a retail store, if you still can these days, and use your credit card, most of the larger retailers actually track that credit card usage within their physical store. They will take a hash of your number instead of storing your actual number, and they will look for that hash being reused to create a profile for you, even if you're not actually signed up for the loyalty rewards thing. The same thing happens online. So not only is the money important, but having someone between you and your customer means you can't track them as much. So from a business perspective, Epic is saying, "I want the data to be able to track Jeremy and Mark more accurately." But as users, we want Apple or Google in between us... Apple definitely more so than Google, given the business models... because they're that blocker. They're preventing us from having our privacy unknowingly breached by people tracking our transactions online. And that's a big part of what we get through the app store.
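To make the tracking technique Mark describes concrete, here is a minimal Python sketch of how a retailer might fingerprint a card without storing the number itself. The key, function name, and card number are purely illustrative, not any real system's scheme.

```python
import hashlib
import hmac

# Illustrative secret; a real retailer would keep this in a vault and rotate it.
SECRET_KEY = b"example-retailer-secret"

def card_fingerprint(card_number: str) -> str:
    """Return a stable, non-reversible identifier for a card number."""
    # A keyed HMAC rather than a bare hash, so the fingerprint can't be
    # brute-forced from the relatively small space of valid card numbers.
    return hmac.new(SECRET_KEY, card_number.encode(), hashlib.sha256).hexdigest()

# The same card always yields the same fingerprint, so separate purchases
# can be linked into a profile without the raw number ever being stored.
assert card_fingerprint("4111111111111111") == card_fingerprint("4111111111111111")
```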
Jeremy: Yeah, and I think that having that broker in between is another major thing that dramatically helps with privacy, just from a... Not only privacy, but I guess security as well. And I never use anything but PayPal on most sites that are not amazon.com, because I don't trust some little site.
I mean, actually the funny thing is, I just bought something that I was almost... It was from one of those... What's the store there? My mind is drawing a blank here, but the... Shopify, right? It was a Shopify store. And essentially Shopify says, "Yeah, anybody can build a store." I don't even think they check, and I may be wrong on that, so I apologize if that's wrong, but it seems like it, because there's a lot of stories of Shopify scams. And there was this thing listed, and it was actually a pool for... It was one of those Intex pools, those temporary pool things. We just needed something. You couldn't buy them anywhere unless it was like thousands of dollars, which was crazy. So I saw this deal, and I'm like, "I'm going to buy it, but I know it's a scam. I'm almost 100% sure it's a scam." But I used PayPal, and I knew that the worst case scenario was I'd have to send a few emails back and forth and I'd get my money back. It turned out to be a scam.
But if I hadn't, if I had given that person my credit card number, who even knows if that credit card number would have gone to a valid processor, or if it would have been run through some third-party thing, or if it would have had thousands of dollars of transactions across the dark web or whatever. So I do think that there is a tremendous amount of added benefit to having that middleman protect your privacy.
Mark: Yeah, and that's an interesting example. And I'm sorry that you got scammed. And I understand, especially in these times, trying to get those items. Because at the start of the pandemic, it was like basketball nets, trampolines, bikes. You couldn't get this stuff, right? And the nice thing is, PayPal as a middleman works. There's some downside when you're the one collecting from PayPal, for sure. But Visa and Mastercard have the same protections in place. It's very rare that you're going to be financially on the hook. But the difference is, it's a pain in the butt to go back and review where you have your normal subscriptions charging to your credit card and things like that, and to redo all of that. So even though you're not out money necessarily, you're still out time and frustration.
And that happened to me pre-pandemic when I was traveling, literally one time when I crossed through US customs coming back here to Canada. We clear customs in the airport itself, and I found out when I tried to buy some food that, oh no, my credit card had been blocked, and so I had to get a new one shipped and all that kind of stuff. So I wasn't out any money. I was just out time and frustration.
But there are important aspects, both advantages and disadvantages, to the middleman. But specifically when it comes to online, that's a great example of knowing that there's a good potential for a scam and understanding the risk: okay, a couple of emails? It's not that big of an impact to you to try. And the upside is, if they did actually ship you the inflatable pool, you're the hero to the kids, happy and cool. So it's finding that balance. And again, like we said in the intro, for me there's no bad decision. It's just making it explicitly. You just gave a fantastic example of explicitly understanding that you might get scammed here, that there's a high chance of it. But then you used a way of protecting yourself. You had four options to pay, and you picked the one that was going to provide you the most protection, because you were aware of the situation. And I think that's commendable. I think the flip side is, most people are unaware, on that scale, of what we're doing in the online world, of the types of ramifications of those decisions.
Jeremy: Right. And so speaking about unaware, I mean, one thing that I think people might not understand when they make financial transactions or they share data, they're often giving it to a machine, right? And we think it's super secure if we just slide our credit card in with a little chip on it, or if I enter my information on a website somewhere, or I save my password or something like that and I know it's only saved locally. The problem with people is people, right? And I love people. Don't get me wrong. But once you introduce the human factor into any of these security or privacy issues, or potential privacy issues, it gets exacerbated because people are fallible and people make mistakes.
I think the most important one that happened recently is this Twitter hack. And people are like, "Oh, Twitter got hacked." Well, it depends on what you mean by hacked, because nobody brute-forced their way into the system or figured out somebody else's password. They literally scammed people who had access to this stuff. It was a social engineering attack. So how do you prevent against that?
Mark: Yeah, and this is the challenge. For those people who look into sort of the history of my work, I always feel like I'm an outlier, because a popular feeling in the security community is what you just said, taken to the extreme: that users are the problem. Everything would be great if we didn't have users. Well, we wouldn't have jobs if we didn't have users, so put that aside. But the reality is, people very rarely are trying to do something in their daily work to cause harm. Criminals, obviously, that's their daily work. They are trying to cause harm. So the case with Twitter was that the people who were doing the support work were just trying to support users and get their job done, right? Now, it turns out that Twitter was a little lax, and they had about 1,500 people with access to the support tools.
But if you step back for a second... Okay, ignore the hack. It totally makes sense that if you're running a service that supports 330 million people, you as a business are going to need some tools to be able to reset passwords, to adjust email addresses, to give people access back to their accounts. Because someone's going to forget their password and not have access to the email that they legitimately signed up with. They're going to change phone numbers, so they don't have the SMS backup. Stuff happens, especially when you have 300 million-plus users. So building a tool to help you deliver better customer service 100% makes sense. The problem in this case, as you pointed out, is that it was also a vulnerability, because the controls around it, the process around it, were a little too lax, and these cybercriminals didn't do any crazy hack. And I think if there's one fallacy on the security side of things, it's that, and it's partially because of all the TV and movies, which it makes for great TV and movies, but very rarely do big-name hacks actually use anything remotely resembling state-of-the-art hacking. Nine times out of 10 it's a phishing email. Actually, 92% of all malware infections start with a phishing email, because they work. They're super easy to send, and they confuse people. I always remember a talk from Adrienne Porter Felt, who's at Google. She was on the Chrome team at the time. They'd done a massive, million-plus-person study, and basically the key result was, nobody reads security prompts. So it doesn't matter what you prompt the user with, they're just going to click OK. Which is frustrating, because you're trying to educate them and move forward.
So with the Twitter thing, it was just a social engineering attack. They got some extra access by basically just tricking a support employee, which then got them access to the Slack. In the Slack channel, to make the support team's lives easier, they had some credentials posted that said, like, "Hey, to get into the big super tool here, here's the login. Here's the URL." Which, I mean, you totally understand working with a team, you drop stuff in Slack like that all the time, because the assumption is you're in a private room, right? And in this case, it wasn't. And thankfully it was a very visible hack, so it got shut down very, very quickly.
But it's these kinds of things that I think are interesting, because my point in that particular video was, most people who use an account assume it's theirs, when actually you're just using it, you're renting it, kind of thing. And they aren't aware that there's a support infrastructure behind it that gives people access legitimately, because if it was you who lost your password, you'd want access back to your account. You've worked hard to grow your social media following. So it's, again, being aware of those trade-offs.
Jeremy: Yeah. And again, there are so many examples of things where people are sharing a lot of information that's getting recorded, and they probably aren't even aware that it's being recorded. I mean, every time you talk to Alexa... "Alexa, cancel," because she's just going to pipe up on me. And then every time you talk to Siri, every time you type on your computer if you have Grammarly installed, all of that information is being sent up somewhere. And so even if you have the best security protocols in the world, and you're in the AWS cloud or in Google Cloud and you're all locked down, you still have that potential that somebody could simply accidentally share their password to some super tool, like you said, and your information gets shared. I mean, think about S3 buckets, right? Apparently S3 buckets have been one of the biggest... Or I guess the Capital One breach, right? It's this idea that you make your things public, or you make it easy for them to be copied, or whatever it is. You don't do it on purpose, but those are human mistakes that are causing those issues.
Mark: Yeah, and there's a lot of trust there. So there are a couple of examples that I think are really interesting that you gave there. The voice assistants are popping up more and more in court cases, where law enforcement is actually requesting access, through legal process, to the records of what they have heard. Alexa is a good example, and sorry if I triggered yours or any of the audience's. I have a good voice for that, apparently. But if you go into the app, you'll actually see a history of all your commands, everything you've asked for and whether or not it worked, because you can provide feedback like, "Yes, it gave me what I wanted. No, it didn't."
You mentioned keyboards on phones and stuff. So Grammarly is a good example. When iOS started allowing third-party keyboards, I thought it was really interesting, because one of those prompts that people don't read pops up and says, "You are providing this keyboard with full access to everything you type." So everything you type, literally, even if you delete it, is being sent to the cloud and back. Is that a bad thing? Not necessarily, but if you don't know that that's happening, you can't make that choice. And that's really the thrust of a lot of what I'm doing: understanding how that works. Because at the end of the day, one of the things I hear often on the privacy side is, "Well, I have nothing to hide. I don't care." And a lot of the time that may be true, but you still need to be aware of those data flows going out from you into the world. And that's where things get more and more complicated the more technology we add.
Jeremy: Right. Yeah, I totally agree. All right, so let's take this into the serverless realm here, because this is a serverless podcast, but I think this is super exciting, because I'd be interested to get your perspective on where serverless and privacy meet. And I think if we take a step back and we look at security first, I think we know, I think this has been demonstrated, that the security of a serverless application is strong just based on the shared responsibility model, on how little you need to do from a server-maintenance standpoint. There's no direct TCP/IP access into a Lambda function, for example, right? That all has to be routed through a control plane. So you just have all these levels of security. So the majority of the security concerns from a serverless perspective are going to come down to application-level security. And we have talked about that at length.
And again, people make application security mistakes all the time, right? And the social engineering aspect of it is something like giving someone your password to an admin tool that you build for your customers... But I want to take it a little bit further and go beyond just this idea of, maybe we make an application mistake, maybe something gets compromised, maybe someone shares a password here. So from a serverless perspective, if I'm building a serverless application, how do I start building bulkheads around my application to protect some of this private user data?
Mark: Yeah, and that's a really good setup. It's a good explanation. I 100% agree that, by default, serverless gives you a better chance at security, because you're pushing almost all the work to the service provider, right? That's a huge advantage, which is why I'm a massive advocate of serverless designs.
So maybe it's easier just to clarify for the listener as well, because we've been bouncing back and forth, focusing on privacy, talking a bit about security. I said you can't have one without the other. And really, security is a set of controls that allows you as the builder, or even you as the user, to dictate who and what has access to that data. And then privacy is just the flip side of that: me going, "This data is about me, and I want to know who I'm entrusting it to." And if I entrust you with my personal information, security is then the controls you're putting on top of that information to enable privacy, right? So they're intertwined. They are linked concepts.
So if you as a builder are creating an application that is handling personal data, or handling any type of data, you're fighting an inherent sort of conflict, in that we've been taught as developers for the last few years that the more data we have, the better, right? The more data we're tracking, the more awareness. We can get better fine-tuning on our application. We can increase the performance. We can increase the reliability. We get a better operational view the more data we have. From a privacy and a security point of view, the more data you have, the bigger the liability you also have.
So you need to first go through and make sure you understand what type of data you have. Cold start time on a Lambda, total round-trip time for a request, those kinds of things aren't sensitive personal data. They're somewhat sensitive for your application, but in general, that's not something you need to lock in the vault that's encased in concrete and thrown into the ocean so that nobody can ever get to it. If I'm dealing with your Social Security number, that's a far more private piece of information that I need to take further steps to protect. If I'm dealing with your health record, same kind of thing. So the first step for anybody building any application is just listing the types of data you're actually hosting and processing, and then mapping out where in the application they're required.
So for permissions, we have on the security side the principle of least privilege, which is essentially, "I am only going to give you the bare minimum permissions you need to access something," which is the S3 problem at its core. When you create an S3 bucket, only the user or entity that created it has access rights by default, and then everything else has to be granted. And all of these breaches, billions and billions of records over the last few years, have been because somebody made a mistake in granting too many permissions.
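As a minimal sketch of closing off that S3 failure mode (the bucket name here is hypothetical), AWS exposes a "block public access" setting that boto3 can enforce, so a later mis-granted ACL or policy can't quietly make the data public:

```python
import boto3

s3 = boto3.client("s3")

# Belt-and-suspenders on top of least privilege: even if someone later
# grants an overly broad ACL or bucket policy, public access stays blocked.
s3.put_public_access_block(
    Bucket="example-sensitive-data-bucket",  # hypothetical bucket
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject bucket policies allowing public access
        "RestrictPublicBuckets": True,  # restrict access under any public policy
    },
)
```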
So understanding what the data is and where it actually needs to flow, and saying, "You know what? This health information isn't going to flow to the standard logs. We're going to keep it in a DynamoDB database, and that table is going to be encrypted with this KMS key. And it's actually going to break our single-table design, because this information is sensitive enough to merit its own table, because I don't want to take the risk of encrypting column by column, because I think I might mess that up. So I'm going to separate it completely, to make it a logical separation and make it easier." So really, step one is mapping that out and then restricting the breadth of that data, where that data touches, and that does a huge amount of the work to maintain privacy right there.
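A sketch of that "separate, KMS-encrypted table" idea in boto3; the table name, key alias, and attribute names are made up for illustration:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="health-records",  # deliberately its own table, outside the single-table design
    AttributeDefinitions=[{"AttributeName": "patientId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "patientId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        # A customer-managed key means access to this data can be revoked
        # and audited independently of the table's IAM permissions.
        "KMSMasterKeyId": "alias/health-records-key",
    },
)
```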
Jeremy: Right, yeah. And so if you're taking that data, though... And again, I think this makes complete sense. You're saying, "Look at what it is you're saving. If I'm saving somebody's preference, like whether they like a particular brand or something, is that really personally identifiable information? Is that something that I have to lock away and encrypt? Or can I be more lax with that? What about usernames and passwords and things like that?" And I think that all makes sense. Think about it that way. But where I'm curious this goes is, you only have so much control over that data if you are saving it in DynamoDB, right? If you are capturing it through CloudWatch logs-
Capturing it through CloudWatch logs because it's coming in, and maybe it's coming in and it's not encrypted. I mean, even though you are using SSL or TLS, so the information is encrypted from the user's computer or browser into the inner workings of AWS, for example, once it gets into that Lambda function, that's all decoded, that's all unencrypted, right? That's all ready for you to do whatever you need to do. So then you need to take that and put it into a database, or send it off somewhere, or call an API, or any of these other things. When you do that and you save that data into, let's just start with DynamoDB, there are backups, those automatic backups, right? And there are, again, the CloudWatch logs. So this data is going all different places, and it seems like a lot of effort to make sure that a credit card number, or a Social Security number, or anything that you want to be very careful about, is encrypted everywhere. You have to take a lot of extra steps.
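One way to blunt that exposure, sketched below with made-up field names, is to scrub sensitive values before anything is printed, so the plaintext never reaches CloudWatch Logs or the backups downstream of it:

```python
import json
import re

# Illustrative only: which fields count as sensitive depends on your data map.
SENSITIVE_FIELDS = {"cardNumber", "ssn", "password"}
CARD_PATTERN = re.compile(r"\b\d{13,16}\b")  # crude catch-all for card-like numbers

def redact(payload: dict) -> dict:
    """Return a copy of a (flat) payload that is safe to log."""
    clean = {}
    for key, value in payload.items():
        if key in SENSITIVE_FIELDS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str):
            clean[key] = CARD_PATTERN.sub("[REDACTED]", value)
        else:
            clean[key] = value
    return clean

def handler(event, context):
    # Log the sanitized request, never the raw one.
    print(json.dumps(redact(event)))
```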
Mark: Yeah, and I think this is a spot-on example. And I think this is the number one failing of the security community over the last 20 years or so, and there are a lot of logical reasons for it: right now, the vast majority of security work, the security work to ensure the privacy of data, is done after the fact, right? So if you think of your DevOps wheel, and you've got the development side and the ops side, security exists almost entirely on the ops side. Which means we're taking whatever's already been built and then doing the best we can. So we end up with this very traditional castle-wall sort of scenario: I've made a safe area, drop things into it, and they will be safe from anything outside of that wall. But unsafe from anything inside that wall.
And that's had mixed results, I think, is a generous way of saying it. Realistically, if you think about it, security is really a software quality issue, and we know you're not going to do testing only in production. You're going to do testing in the early stages. You're going to have unit tests, you're going to have integration tests, you're going to have deployment and functionality tests, you're going to do blue-green deployments to make sure that things are running before they hit prod. There's a whole bunch of testing we do as builders before we actually get to interacting with users. We need the same thing from security, because what you just mentioned is a lot if you're thinking about it once you've already designed the solution. But if you're designing the solution with those questions in mind as you're going forward, it's actually not a lot of additional effort to map out these security things as you're sitting there.
If we're starting up a new application, me and you, we're doing Jer and Mark's super cool app, right? And we go, okay, we're going to start logging in users. Well, we look at that and go, well, at the bare minimum, we have a username and a password. So we're going to have to do something with that; we need to know what that flow is. So maybe we're going to loop in something like Cognito. Maybe we go, you know what, Cognito is not quite where we need it to be, so we're going to go to a third party like Auth0. So now, if we're building in AWS, we're outside of our cloud, into a third party with a whole different set of permissions. But if we're designing that from day one, we can map that out and go, okay, we know we get TLS from the browser to Auth0.
We know that TLS doesn't actually guarantee who you're talking to at Auth0; it just guarantees that a communication is secure in transit from A to B. It doesn't tell you who A or B are, which is a mistake a lot of people make. But then we go, okay, we're going to Auth0. Fine, we've got a secure connection for the user there, we verify who that user is through Auth0, and our app will verify Auth0. This is the following method. And then we're going to take that data, and we're going to make sure that we don't actually store it, that we don't actually log it, because what we've done is we've never taken the password out of Auth0. We've just gotten a token, and now we map to that.
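A minimal sketch of that token check in Python, using the PyJWT library; the Auth0 domain and audience below are placeholders, not a real tenant:

```python
import jwt  # PyJWT
from jwt import PyJWKClient

JWKS_URL = "https://example-tenant.auth0.com/.well-known/jwks.json"  # placeholder
AUDIENCE = "https://api.example.com"                                 # placeholder

def verify_token(token: str) -> dict:
    """Verify the identity provider's signature and return the claims."""
    # Fetch the provider's public signing key; the app never sees a password,
    # only this signed token.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # decode() checks the signature, expiry, and audience, raising on failure.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
    )
```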
So I think if you go after the fact to try to do this, it's really difficult. Even if we just simplify the example down to encryption... you know the shirt you always see Werner in: "Dance like nobody's watching, encrypt like everybody is." Love that shirt; it's so nerdy, it's amazing. But if you take an existing application and say, okay, we're going to go encrypt everything in transit and at rest, that's an annoying, massive project that has no direct, visible benefit to the customer. It's really hard to get those things past the product manager, because you're like, hey, I want to take four sprints to do this work that will potentially save us if something bad happens, like if a cybercriminal attacks us, we will be protected. But our customer's not going to see anything for four sprints, because we're not doing any feature work. That's a hard sell.
Whereas when you're designing that out of the gate, and you say, I'm just going to add a parameter and it's going to encrypt everything in transit, and I'm going to add a KMS parameter to the Lambda and everything's going to be encrypted at rest, that took five minutes and we're done. Nobody's going to bat an eye, and you get the same end result. So it's really about planning ahead, I think.
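For a sense of what that five-minute change can look like, here is a sketch using boto3; the function name and key ARN are placeholders. Setting KMSKeyArn tells AWS to encrypt the function's environment variables at rest under a customer-managed key:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="checkout-processor",  # placeholder function name
    # Environment variables are now encrypted at rest under this key
    # instead of the default AWS-managed one.
    KMSKeyArn="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```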
Jeremy: Yeah. Well, I think security first. I mean, I think that's the first thing, just with the cloud and so many of these problems that happen from breaches that are, again, not necessarily a vulnerability of the cloud; it's more of these social engineering things. So thinking about security right off the bat is a huge thing. And I guess here's another thing. I know that with DynamoDB, for example, you can do encryption at rest, right? And things like SQS and SNS, I think those have encryption as well, right? So there's a lot of that security built in. But again, all of those really great tools that the cloud provides, the encryption and whatever else, that goes away the second you build an admin utility that someone can log into and just query that data. Right?
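For example, turning on at-rest encryption for an SQS queue is a single attribute at creation time; a sketch with a placeholder queue name:

```python
import boto3

sqs = boto3.client("sqs")

sqs.create_queue(
    QueueName="example-orders-queue",  # placeholder
    Attributes={
        # "alias/aws/sqs" is the AWS-managed key; point at your own
        # customer-managed key for independent audit and revocation.
        "KmsMasterKeyId": "alias/aws/sqs",
    },
)
```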
So what do you need to do around that? What should you be thinking in terms of... I mean, are there multiple layers? Should we be thinking... You always hear things like tier one, tier two support, things like that. Are those levels of access that they have to your private data? How would you approach that when you're building a new application?
Mark: Yeah. And as frustrating as the tiering system is for a lot of users, a lot of it does have that purpose. To use the AWS term, it's about reducing the blast radius. You don't want everyone in support to be able to blow up everything. And the Twitter hack was actually an interesting example: somebody raised the question and said, "Why didn't the president's account get hacked? Why wasn't it used as part of this?" It's because it has additional protections around it, because it's ostensibly the leader of the free world, so you want to make sure it's not the average temporary employee on a support contract who's able to adjust that. So tiering actually is a strong play, but so is understanding defense in depth, which is something we talk about a lot in security. It gets kind of a bad rap, but essentially it means don't put all your eggs in one basket.
So don't use just one control to stop one thing. You want to do separation of duties. You want to have multiple controls to make sure that not everybody can do certain things, but you also want to still maintain that good customer service. And I think that's where, again, it comes down to a very pragmatic business decision. If you have two sprints to get something out the door, and the choice is, well, I'm going to build a proper admin tool, or I'm just going to write a simple command that my team can run that will give them the access, you're just going to write the command that does the job. And you know what, in your head, you always say the same thing.
You put it in your ticket notes, you put it in your Jira, and you say, we'll come back to this and fix it later. Later never happens, so most admin tools are this hacked-together collection of stuff just to get the job done. And I totally get it from a business perspective. It makes sense. You need to go that route. But from a security and privacy perspective, you need to really think holistically. And this is a question I get asked often. Actually, somebody just asked me this on my YouTube channel the other day. They said, "I'm looking for a cybersecurity degree, and I can't find one. All I can find is information security. What's the deal?" And I said, well, actually, what you're looking for is information security. In the industry, and especially in the vendor space, we say cybersecurity because that's typically system security.
So locking down your laptop, locking down your tablet, locking down your Lambda function, that's cybersecurity, because we're taking some sort of cyber thing and applying security controls to it. Information security, as a field of study, looks at the flow of information as it transits through systems. Well, part of those systems are people, the wetware, right? Or the fact that people print things out. This was a big challenge with working from home. You said, well, your home environment isn't necessarily secure. Well, yeah, it has different risk models. But the fact that I can connect into my corporate system, download a bunch of stuff, and then print it, that's information that still needs to be protected.
So I think if you think information security, you tend to start to include these people and go, wait a minute, Joe from support, we're paying him 15 bucks an hour, but he's got a mountain of student debt he's never going to get out of. That's a vulnerability that we need to address, not by locking it down, but by helping that person out and making them feel included, making them feel part of the team, so that they're not a risk when a cybercriminal rolls up with some cash and says, hey, give me access to the support tools.
Jeremy: Right. Yeah. And the other thing, too, when you're talking about people having access to things: one is having access to data, right? So if you have an admin account that can create other accounts, and someone gets into that admin account, or the admin account does everything, for example, that's really hard to defend against if you let that go, right? But there's another vulnerability with admin accounts, especially when it comes to the cloud, and that's any time somebody has access to a production environment. Right?
So with AWS, if people are familiar, and I'm sure this is true with all the other cloud providers, you have multiple identities, called roles in AWS, and you can grant access to certain things. And the easiest thing to do, when someone's like, hey, I can't mess with that VPC, or I'm trying to change something here, I'm trying to do that, is to say, okay, fine, I'll just give you admin access. Right? So admin access gives you everything except, for some reason, billing access, but it gives you everything else in the AWS cloud. And I'm not saying nobody should have it; somebody needs to have admin access at some point. But when you're writing code that could potentially expose data, by maybe having a vulnerability in an admin tool, or just giving too much control in an admin tool, there needs to be a process that separates out the development environments, the staging environments, and then that production environment where all the actual production user data is going to go.
So I always look at this as... and maybe people don't think about it this way, but to me, CI/CD with really good controls, whether it's Gitflow or something like that, is the place where there are very, very, very few people who have keys to that main production account. Everything else is handled through some sort of workflow, with approval processes and things like that. And to me, that is the staple: if you want a secure environment, you have to set up CI/CD.
Mark: Yes, 100%. So my general rule of thumb is nobody should ever touch production; systems should touch production. And the pushback I get on that a lot, especially from people that are still on virtual machines or instances, is, well, no, I need to get data off of there. You should have a system that pulls data out and logs it centrally so you can analyze it, and if you need to make a change, you push it through the pipeline, because not only is that better for security, that's better for development as a practice in general. For those of you who are watching this episode, you can see how much gray and white is in my beard. For those of you just listening, think Santa levels of white. I've been doing this a long time. And inevitably... I used to be a keyboard jockey doing the Saturday night maintenance windows for nationwide networks.
And you're typing the same thing into system after system after system. You had your checklist. You did everything you possibly could to prevent yourself from making a mistake. You still ended up making at least two mistakes per change window, per Saturday night, because it's late at night, you already worked all week, you're only human. Mistakes happen, and enforcing a consistent process through a CI/CD pipeline not only gives you the security benefits, it gives you the reliability that if a mistake did happen, it's the same mistake consistently across everything, which means you can fix it a lot easier. It's not that there was a different typo in every system; there's the same thing on every system, so you can roll forward. And that's an absolutely critical thing to do, because a lot of the time people see security as this extra step you need to take, as this conflicting thing that's going to slow you down. At the end of the day, security is trying to achieve the same thing you are as a builder.
We want stable, reliable systems that only do what you want them to, that only act as intended, as opposed to some vulnerability or mistake being made that people could leverage into making that system do something unintended. And that CI/CD pipeline is absolutely critical. You mentioned roles; there are equivalents in GCP and Azure as well. My big thing is, accounts should have no permissions at all other than the ability to assume a role. If you can assume a role as an account or as an entity, then for specific tasks, you have a role for every task.
So if I need to roll a new build into the CI/CD pipeline, don't give me permanent rights to kick off builds. Let me assume a role to kick off a build, to kick off the pipeline, because then I don't make a mistake, but also we get an explicit log saying, at this time, I assumed this role. It's cryptographically signed, it shows that chain: my system made that request in the backend, and then after assuming that role, I kicked off this build. You just get this nice fidelity, this nice tracking for observability on the backend. We're so obsessed with observability and traceability in production; you need the same thing for who's feeding what into the system. And that way I don't make a mistake, and we get clarity. So roles are a massive win if you use them right.
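A sketch of that pattern with boto3 and STS; the role ARN, session name, and pipeline name are placeholders:

```python
import boto3

sts = boto3.client("sts")

# The account itself has no standing permissions; it assumes a
# task-scoped role, and the assumption is recorded in CloudTrail.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/start-pipeline-build",  # placeholder
    RoleSessionName="mark-kicking-off-build",  # shows up in the audit log
    DurationSeconds=900,                       # credentials expire in 15 minutes
)["Credentials"]

# Use the temporary credentials only for the one task this role allows.
codepipeline = boto3.client(
    "codepipeline",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
codepipeline.start_pipeline_execution(name="example-pipeline")  # placeholder
```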
Jeremy: Yeah. And I think things like CloudTrail and some of the other tools that are built into AWS... I'm sure a lot of people aren't looking at them, but they should be. But the other thing... it's funny, you mentioned this idea of doing late-night support. I think we've all been there. I mean, if you're as old as us... I have as much gray as you do. I try to hide it a little bit, but I remember doing that as well. And I still have some EC2 instances that I have to deal with every now and then. And one of the most frustrating things about trying to do anything, and I think this is why people try to find workarounds for security, is that security creates friction. Right? So the more friction you have... you can't access a MySQL database in a VPC from outside unless you set up a VPN or some other tunnel that you can get into. Right?
I think about every time I log into my EC2 instances, the first thing I do is sudo su, right? Because I just know I don't want to go to the logs directory and not be able to get to it because security is preventing me from doing that. And so, again, being able to have ways in which... It's almost like people have to build those additional tools, right? You mentioned only machines should be touching these things, or systems should be interacting with them. But those are systems that somebody has to set up, systems that somebody has to understand. Right? So again, I totally agree with you. I'm 100% with you. It's just one of those things where these tools are not quite as prominent, or they don't seem quite as prominent, as some of these other workflow tools. Again, even with CI/CD, you could build a whole bunch of security measures into CI/CD, but I think people just don't.
Mark: Yeah, I agree. And so I'll give you a good example, and I can't remember who told me, but after a talk at an AWS Summit two years ago, somebody gave me a brilliant example of a setup they had that I thought was a really good demonstration of how security should be. Now, it almost never is, but it should be. And it was exactly that problem: they still had cases where people had to log into EC2 instances. And they were trying to figure it out; they knew they couldn't just say no. So what this team had set up was a very simple little automation loop: as soon as somebody logged in over SSH to an EC2 instance, CloudWatch Logs would pick it up, it would fire off a Lambda, and it would send a Slack message to that user.
And it would provide a button. And it would say, Jeremy, was this you logging into EC2 instance ID blah, blah, blah? Yes or no. And if you hit yes, it would then provide a second little message, like, hey, we're trying to cut down on this. Can you let us know what your use case was? What were you missing? Why did you have to dive in? Why did you have to log in? But if you said no, it would kick off the incident response process, because somebody logged in as you, and it wasn't you. And I thought that was a really good example of saying, look, we know where we want to be. We can't get there yet. So we're going to take a nice, friction-free approach instead of sending out the standard survey to everybody, like, how many EC2 instances do you log into? Nobody cares about those.
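A minimal sketch of how the Lambda half of that loop might look; the Slack webhook URL is a placeholder, and the "was this you?" buttons are left as a comment since they would need Slack's interactive-message setup:

```python
import base64
import gzip
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def handler(event, context):
    # CloudWatch Logs subscriptions deliver the payload base64-encoded and gzipped.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    for log_event in payload["logEvents"]:
        message = {
            # A real version would use Slack interactive buttons here:
            # "yes" asks for the use case, "no" kicks off incident response.
            "text": f"Was this you? SSH login detected: {log_event['message']}"
        }
        req = urllib.request.Request(
            SLACK_WEBHOOK,
            data=json.dumps(message).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
```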
But catch them in the moment. And further to the bigger question: yes, somebody has to build those tools. Somebody has to develop those. And again, if you try to get that past a product manager, it's not going to happen, because there's no direct customer benefit; it's against a theoretical issue down the road. The challenge, or the failure, on the security side is that for the longest time, security teams have been firefighting nonstop and have developed this, rightfully so, reputation of being grumpy, of saying no, of putting roadblocks in place that prevent people from achieving their goals. So people just ignore them or work around them.
That's not what we as a security community need to do. We need to work directly with teams. We need to hear a thing like you just said and say, okay, no problem, we're going to build that for you. We're going to make sure we build some sort of flow that gives you the information you need in a way that we're comfortable with from a security side, so that there's no friction in place. And that is a huge challenge, because it's cultural, and the security teams continue to firefight and can't get their heads above water long enough to go, oh, we could do this in a way better way, instead of just continually frustrating ourselves and everyone we work with.
Jeremy: Right. Yeah. And I mean, the idea of being proactive versus reactive would be very nice. I know every time you get that thing where you're like, okay, something is not right, you can hear everybody in IT, all the developers, just sigh at once, because you're like, ah, this is going to be a long night. We're going to have to figure out what exactly is happening here.
All right, so let's go back to privacy for a second, or maybe for more than a second. Another piece of this that is important is that we are saving data into somebody else's systems. Right? We mentioned DynamoDB, we mentioned SQS, and some of these things are encrypted, and that's great. But you've got GCP, you've got Tencent, you've got Alibaba, you've got Microsoft Azure, you've got Auth0. Right? So you're saving data, personal data, into other people's systems. So I guess where my question is going, from a privacy standpoint: I can put all these controls in place, and I can say, oh yeah, I've done all my tiering, I have all the security workflows, I've got CI/CD set up, my admins are locked down, and so on. But where does your responsibility as a developer start and end when it comes to privacy, when data is being saved on somebody else's system?
Mark: Yeah. And that's a very good question, because there are legal responsibilities, and then there's "the right thing," quote, unquote, for various definitions of the right thing. I think most users' expectation is that they have a modicum of control over their data. Now, the interesting thing here is we start to get into international differences. People who have been listening to the episode so far have probably figured out from my accent that I'm Canadian, and Canada has a very strong privacy regulation. We're not quite as strong as it used...
Jeremy: I did not say to tell us about yourself, though.
Mark: Which is fair. So from the Canadian perspective, we have a legal framework and a certain expectation. The European expectation is completely different. The outlier when it comes to privacy is actually the United States. Now, the interesting thing is the United States is also the generator and creator of the vast majority of the technology that the rest of us use. So when we look at the legal requirements, there are different things. When we look at what you should be doing and what the expectation is, it really comes down to culture. What a European citizen would expect to happen with their data is very different from somebody in the United States, because there is a cultural and a legal expectation in the EU for their data to be treated very, very differently. So the generic answer is, when you're building out serverless applications specifically, you need to make sure that whatever level of data you're dealing with, the service that you're leveraging can support the controls you want around that data.
So if we look at PCI, the Payment Card Industry framework, there is a legal requirement, if you're taking credit cards, to have certain security controls in place. You need to be PCI certified, which is why a lot of smaller businesses just go to a provider; for bigger businesses, it's worth your time to set yourself up like this. There are legal requirements for the controls around it, which means if you're building in a serverless design, regardless of the cloud you're using, the aspects of your design that are processing payment cards, so processing Mastercard, Visa, Amex, need to be on services that are also PCI certified. You can't achieve the certification if the service you're building on isn't also certified. So there's that aspect of it in general, that you need to just go with that compliance. But it's really tricky, because it comes down to what you want to do versus what you need to do.
And that's a difficult thing to respond to, because sometimes there are very real legal penalties for not complying. But the good news from the serverless aspect is that the shared responsibility model says you're always responsible for two things: configuring the services you use, and your data. All the providers give you little knobs and dials that you can change, so you can encrypt or not encrypt, you can optimize for this or that. You need to understand and configure each service. But you are always responsible for your data, always. At no point do you cede responsibility for your data.
If you leverage a third party... So if I'm the business and you're my user, and you give me personal information, or information you want kept private, I am on the hook for it. Regardless of who I use behind me, it's me. So I need to make sure that the services I'm leveraging in my serverless design have the controls I'm comfortable with to make and follow through on that promise to you as a user. And that changes, but it's always you, and you need to verify as a builder that you're leveraging services that meet your needs.
Jeremy: Yeah. So you mentioned two separate things: you mentioned compliance, and you mentioned the legal aspect of things. So let's start with compliance for a second. You mentioned PCI, but there are other compliance standards: there's SOC 2, and ISO 9001, and ISO 27001, and things like that. All things that I only know briefly, but they're not really legal standards, right? They're more this idea of certifications. And some of them aren't even really certifications; they're more just saying, here, we're saying we follow all these rules. So there's a whole bunch of them. And I think ISO 27018 is about personal data protection and some of those things, the rules that they follow. So I think these are really good standards to have in place. But what do we get... Because you said you have to make sure that your underlying infrastructure has the compliance that's required. So what types of compliance are we getting with the services from AWS and Google and Azure and that sort of stuff?