
Serverless Chats

Episode #88: Azure Functions with Jeff Hollan

About Jeff Hollan

Jeff Hollan is the Principal PM Manager for Serverless Azure Functions. He started his career at Microsoft in IT and spent a few years managing and building enterprise applications. He is always developing and shipping solutions on the latest tech and is an active member of the serverless tech community.


Watch this video on YouTube: https://youtu.be/ZDVB0AsYDcs

This episode is sponsored by New Relic. Sign up for free at newrelic.com.


Transcript

Jeremy: Hi everyone. I'm Jeremy Daly, and this is Serverless Chats. Today, I'm joined by Jeff Hollan. Hey, Jeff, thanks for joining me.

Jeff: I'm thrilled to be here. Thanks for the invite.

Jeremy: So you are a Principal Product Manager at Microsoft Azure. And I'd love it if you could tell the listeners a little bit about yourself and what you do as a principal product manager at Azure.

Jeff: Sure. So I've been at Microsoft now for a little over seven years. About five years ago, I switched to focusing on serverless. So I was one of the original members when Azure was like, "Hey, we want to try to go bigger in serverless." So I spent some time in a different product called Logic Apps, which has serverless workflows. And then for the last three or four years, I've been running the Azure Functions team. And so my day-to-day entails understanding a little bit about how the product's being used, talking to customers, and then helping formulate the backlog with our engineering team and deliver features to hopefully make people's lives easier with serverless.

Jeremy: Awesome. Well, so I'm super excited to have you here because I think I talked to you a year ago at ServerlessDays Nashville.

Jeff: Yes.

Jeremy: I was talking about having you on the show because Azure Functions and what Microsoft is doing with serverless is absolutely fascinating. If there's anybody else who's in the space race against AWS when it comes to the advancements in serverless, I would think that would be Microsoft Azure. And it's pretty exciting because I feel like you are doing things differently. And I've had conversations with people from IBM Cloud and Google, and of course, AWS, and everybody is doing things slightly differently. So I'd love it if you could just maybe give a quick overview of what Azure Functions is and the general serverless offering that Microsoft has right now.

Jeff: Sure. Yeah, so I guess the best place to start is Azure Functions. And you can in many ways think of it like AWS Lambda. To your point, there are some differences here and there and I'm sure we might even highlight them as we go.

Jeremy: Sure.

Jeff: But at its core, hopefully it is the same. I want to write some event-driven compute. Here's my language of choice. Go ahead and publish it and have it do its serverless scale thing. I think some of the things that folks notice from the get-go is that there are a few application concepts that are a little bit different. We enable you to develop and write in what's called a Function App. And so you can actually create four or five different functions that are one deployment thing. And then those four or five functions can scale with each other.

But the other one that I always tend to talk about a lot is just the other supporting products that are around. So you've likely heard, and people who've listened to this have likely heard by now, serverless is more than just FaaS. But when you think about the supporting pieces of technology, whether that's serverless workflows with Logic Apps, whether that's stateful functions with Durable Functions, going into, I guess, the NoSQL database Cosmos DB, which has a serverless SKU. So that's oftentimes where we end up talking a lot more, saying, "Hey, FaaS and functions are going to play a critical role, but it's all these other supporting pieces too," and that's where you'll start to see those differences as well.

Jeremy: Right. Yeah, and I think that, again, serverless, at least the evolution of it and what I always think about, is that it's event-driven, like you said. And so you're getting these events. And in Microsoft Azure, or Azure Functions, they're called Triggers. And again, I'm hoping that people listening to this podcast know what serverless is. They know event-driven compute. At least they get the idea of that. But basically, something gets triggered: a message is written to a queue and that triggers the Azure Function, or a database record is written and that triggers it, or somebody uploads something to Blob Storage. So those are your triggers. But something that's really interesting, and I'd love to know more about, is this idea of bindings. So what's the difference, because I understand triggers, but what's the deal with bindings?

Jeff: Yeah, bindings are ... There's two different types of bindings. So there's input bindings, where it passes data into your function, and output bindings, where it's going to write some data. So in the same way that you have this big list of triggers, like I want to trigger on a queue, I want to trigger on a storage account, you can have bindings that talk to these different services too. And in a similar experience to triggers, you don't write that code. So the best example is, let's say I want an HTTP trigger, so I want my function to trigger on an HTTP request.

Jeremy: Yeah.

Jeff: But maybe that HTTP request has something in the path, like a customer ID. So when they call it, the path's going to have a customer ID. And that customer ID has a customer record in my database. And rather than having the first few lines of my function be like, okay, parse out the customer ID, connect to the database, pull in those customer details, you can define what's called an input binding, where you're like, "Okay, my trigger is HTTP. I want to pull in data from my database." And the data that you should pull in maps to the path of the HTTP trigger. So you can do this metadata mapping. You say, talk to Cosmos DB, the NoSQL serverless database in Azure. And what will happen is your function triggers, and it's going to automatically go grab that data from the database, pull it in, and stick it into your function for you. So it just injects it in, for reference data or whatever else. So that's an input binding.
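
As a rough sketch of what Jeff describes here — the database, collection, and connection-setting names are illustrative, not from the episode — the binding metadata lives in a function.json file next to your code:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "customers/{customerId}"
    },
    {
      "type": "cosmosDB",
      "direction": "in",
      "name": "customer",
      "databaseName": "mydb",
      "collectionName": "customers",
      "id": "{customerId}",
      "partitionKey": "{customerId}",
      "connectionStringSetting": "CosmosConnection"
    },
    { "type": "http", "direction": "out", "name": "$return" }
  ]
}
```

The function body then just receives the looked-up document; in Python, for example:

```python
import json
import azure.functions as func

def main(req: func.HttpRequest, customer: func.DocumentList) -> func.HttpResponse:
    # "customer" was fetched by the Cosmos DB input binding before this
    # body ran; no connection or query code is needed here.
    if not customer:
        return func.HttpResponse("Customer not found", status_code=404)
    return func.HttpResponse(json.dumps(dict(customer[0])),
                             mimetype="application/json")
```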

More commonly, we see people using output bindings, which would be, I guess, the opposite of that. You can almost kind of connect it. It's like, hey, when this HTTP request is done, I want to write a record to an event stream like Kinesis, or Event Hubs is the Azure flavor, or a database. Same idea: you set the value in some variable. And then through metadata, through this JSON file, you're like, "Hey, when my function is done, whatever the value is of this variable, I want you to go write that to a queue message or to an event message or something else." So, they're totally optional. You don't have to use input bindings or output bindings, but in the spirit of serverless, people are like, "Oh, less code. That's great. If I'm pulling data in from a database or sending something out, maybe I could use these bindings instead."
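
An output binding is the same idea in the other direction: one more entry in the bindings array, and the function just sets a variable. A minimal sketch, again with illustrative names:

```json
{
  "type": "queue",
  "direction": "out",
  "name": "msg",
  "queueName": "orders-to-process",
  "connection": "AzureWebJobsStorage"
}
```

```python
import azure.functions as func

def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    # Whatever gets set here is written to the queue by the runtime
    # after the function returns.
    msg.set('{"orderId": "12345"}')
    return func.HttpResponse("Queued", status_code=202)
```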

Jeremy: Yeah, and I love this because I have been asking for this type of functionality from another cloud provider for quite some time. But I love that idea because functions as a service generally are supposed to be at least stateless. So they're not supposed to ... You're supposed to be able to spin up thousands of these things. And every time you spin up a new one, there's nothing in there. It's just your code. So you have to go and retrieve data somehow. So if you do pass in an ID for a customer, usually your first bit of code is that boilerplate that has to go and look up that customer record, download that data into the function, then do what you need to do.

And then oftentimes, you want to send that event off. You want to queue that for some additional processing. And then maybe you also want to return something back to the HTTP request so that the customer gets something. That's a lot of extra code that you have to write. So that's really cool that you can do that just with configuration, basically. And I guess one of the questions I have is, I get being able to write to maybe your own services, like write to a queue, or write to an event bus, but what about to third-party SaaS services?

Jeff: Yeah, we have a few of those, not as many, but there are a few output bindings for services like Twilio, which is one that I use for a few of my projects. Same idea, but instead of saying, "Hey, write whatever I set to this variable to a database," we have a Twilio binding or a SendGrid binding that's like, "Hey, this is the variable that will give you the details of a text message that I want sent to a mobile device." And it will integrate with that as well. So you can pull in these different extensions, as they're called, that give you trigger and binding functionality. And so there's some dozen or so extensions today, including things to Microsoft services and elsewhere, like Twilio, that you can use that can just reduce that code.
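
For a third-party service, only the binding's properties change. A hedged sketch of a Twilio SMS output binding entry — the setting names point at your own app settings and are placeholders here:

```json
{
  "type": "twilioSms",
  "direction": "out",
  "name": "message",
  "accountSidSetting": "TwilioAccountSid",
  "authTokenSetting": "TwilioAuthToken",
  "from": "+15555550100"
}
```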

Jeremy: Oh, that's amazing. So another thing I noticed about serverless in general, especially with FaaS — and you said FaaS is much ... or serverless is much more than FaaS — but I often see when people are new to it that they say, "I'm going to take my application. I'm going to stick it into one function. I'm just going to let it all run in that one function." And they don't really take advantage of some of the other trickery that is available to them. So I'm curious ... and I want to talk to you about composition in a minute, but I'm curious which bindings are people using? This idea of input and output bindings is super powerful, reduces code dramatically, but are people using that? What are the common ones that people use, or is there a lot of falling back to the whole monolithic function as a service?

Jeff: In general, I am always surprised with how many people use bindings and tell me, "We love bindings."

Jeremy: Okay.

Jeff: A part of it is that, as convenient as it sounds to have a variable that you set and it goes somewhere else to storage, where folks will sometimes hit the boundaries of bindings is, what if you want a little bit more control over the thing you're doing? So let's imagine ... we don't have an output binding today for SQL, but let's say you were talking to a SQL relational database. Yeah, it'd be cool to set a variable and it goes to the SQL relational database, but what if you wanted to execute something like a stored procedure? Or what if you wanted to have a little bit more control, or stream the data into a blob instead of just sticking it in a variable? You can't do that with output bindings. And so we usually just tell people, "That's okay. Just use the SDK." But people are like, "Oh, we love output bindings."

So, the most popular ones are probably queues. Queues are just such an important part of serverless when you're distributing things using that message broker. So I think queues take the cake for us. Storage is probably just another useful one, storing some whatever here and there. And then the final one would be database. But I would guess — and I haven't looked at the binding data specifically; I can think of our trigger data a little bit more clearly — but I would guess that queues are twice as popular as the next thing down when it comes to what people are integrating with from their functions.

Jeremy: Yeah, well queues — there are so many ways you can use those, in multiple directions, especially if you're just trying to minimize downstream pressure and things like that. There's all kinds of reasons why you would use that. But actually, speaking of something like downstream pressure, what happens if there's an error? Because obviously you do a lot of error handling in your code. That's something that a lot of people do. Now, the talk that I gave at ServerlessDays Nashville was about not putting error handling in your code and instead using the features of the cloud to handle some of those errors for you. So, what are the error-handling capabilities in these input and output bindings?

Jeff: Yeah, this is another one where there's this give and take, because of the way that it ends up working behind the scenes. You're almost forced to go into this world where you can't do the error handling, because the way the platform is, it's like: your execution is done. I see you have this value in this local variable. I'm going to go take it from here. But if there was some issue that happened, there would be log messages and metrics that got emitted, but your execution finished. You can't really go in and try and catch it and redo it. So you end up doing these types of compensation-type things, where you are getting that alert, or you're getting that metric and that event that says, hey, the output binding failed.

But that's also a reason where we see, again, bindings are super convenient, but I don't want folks who are using Azure Functions to think, "Oh, well, if I'm talking to a queue, I have to use bindings," because some people are like, "I actually want to have a try-catch and maybe retry it a few times in code." We have some retry policies that will let you say, retry something three or four times. But for the most part, you end up being like, "I've got to make sure I'm keeping an eye on those logs when I'm using something like an output binding." So definitely a consideration, where it's like, "Okay, maybe the convenience might not work for this scenario if I need a little bit more fine-grained control."
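
The retry policies Jeff mentions are also declared in metadata. A minimal sketch, assuming a queue-triggered function (values are illustrative; the retry block re-runs the whole execution on failure):

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "item",
      "queueName": "incoming",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "retry": {
    "strategy": "fixedDelay",
    "maxRetryCount": 4,
    "delayInterval": "00:00:10"
  }
}
```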

Jeremy: Right. Now, with the retry capabilities, is that something, though — if I try to write to a queue with a binding, is that going to try multiple times, or if it fails, am I going to get a warning?

Jeff: We'll try multiple times and then when it fails, you'll get that warning.

Jeremy: Got you. Okay. Interesting. All right. So are there any best practices, though? You mentioned this idea of, if you need more fine-grained control, then just write the code yourself. I would be super happy if it was like, we'll add more fine-grained control for you, so you don't have to write the code yourself. But what are the best practices? When would you say use a binding versus writing your own code? What's that line, I guess, for that fine-grained control?

Jeff: Yeah, I think people usually bump into it pretty early on when they're trying to ... like bindings ... All of the binding info is metadata: what it writes to, how it writes, the name of the stuff that it's creating. The content is pulled from your own variable, but all of the details — for a storage blob, what's the name of the file that I'm creating in your storage account? — that's usually defined through metadata. And I mentioned at the beginning, you can pass some of that through. You can be like, "Oh, well, the thing in the path parameter, make that the file name," but you're still limited in the things you can control. So usually once you start bumping up against that — and there are even patterns where people do these gymnastics, is what I would almost call it, to get everything to work — but once you start bumping up against those types of limits, if you find yourself being like, "Oh, I really want to do this thing with a binding, but it's not super convenient," you're almost going to be better off at that point to just use the SDK.

Now, to your point, I wish ... there actually should be a little bit of a cleaner ramp to saying, "Okay, well, can you at least get me started with using that SDK?" Because there are some best practices that, I think, are shared with Lambda. Connection reuse is the one I see most often biting people in the butt, where they're connecting to a database, and they're creating a new connection for every single execution. And what you want to do is move that connection out. Yeah, don't do that, no matter your provider; we want to reuse things. So that's the big one.
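
A minimal sketch of that connection-reuse pattern in Python — the Cosmos DB client and names here are illustrative; the same idea applies to any SDK client:

```python
import json
import os

import azure.functions as func
from azure.cosmos import CosmosClient

# Created once at module load, so warm invocations reuse the connection
# instead of paying the connect cost on every single execution.
client = CosmosClient.from_connection_string(os.environ["CosmosConnection"])
container = client.get_database_client("mydb").get_container_client("customers")

def main(req: func.HttpRequest) -> func.HttpResponse:
    customer_id = req.route_params.get("customerId")
    item = container.read_item(item=customer_id, partition_key=customer_id)
    return func.HttpResponse(json.dumps(item), mimetype="application/json")
```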

But again, to your point, there's a little bit of a cognitive leap there for folks, where they're doing all this through metadata. They're not really thinking about the database, they're just setting a variable. Now they've hit a bump, whether it's around error handling or around configuration, where you've got to make sure you use those SDKs the right way. And candidly, it's like, hopefully you've read the docs, or else you might end up moving from bindings to shooting yourself in the foot.

Jeremy: Right. Yeah, well, reading the docs is always good advice. And ...

Jeff: And everyone does it, right? Yeah.

Jeremy: Right. Exactly. Just like you read your iTunes terms of service.

Jeff: Exactly, right.

Jeremy: Yeah, so another thing about serverless that I think gets a lot of criticism is the idea of cold starts. And certainly functions as a service, it's on demand. That's the greatest thing about it, is that it'll just scale and scale and scale. But there is that penalty in the beginning when a new function ... a trigger comes in, it needs to warm up a container or whatever it is that's running in the background. So how does Azure handle, or how does Azure Functions handle, cold starts, and how much of an impact do you see that having on your customers' use cases, I guess?

Jeff: Yeah, cold start is the final boss of serverless, it almost feels like. And I really want to see better progress on cold start across the board. And this is an area of incredible innovation, too. If you look at some of the numbers from AWS Lambda, Google Cloud, and Azure Functions, it's pretty impressive what they can do, but it's still a challenge. And honestly, we're constantly working to get our numbers down.

So I guess there's two answers. One is, how do we help get the numbers down? And the other one is, what happens if it's just too much to handle and you have no tolerance for it? So we employ a few things. I think a lot of these are fairly similar, though. I don't actually know how Lambda or Google Cloud Functions run behind the scenes, or what other providers might be doing. But a few things we do: we employ this concept called Placeholders, where rather than doing a "File > New VM" or "File > New container" whenever there's a function, we actually have this pool of containers that are already running the language, they're already running all of our bits. And then we just hurry and pop your code in, mount it as a zip, and try to start it up as fast as possible.

In the last six months, we actually have been rolling out some machine learning, too. So we've got some folks in Microsoft Research who spent time looking at a bunch of historical data for functions. It's actually all open source. It's anonymized. But if you go to GitHub, you can actually see a bunch of anonymized Azure Functions data.

Jeremy: Awesome.

Jeff: And they trained a bunch of models. So that hopefully, Jeremy, if you were using Azure Functions and it's Monday at 8:00 AM, our model, hopefully, would get smart enough over time to say, "Oh, there's a 70% chance that on Monday at 8:00 AM, Jeremy's about to hit this thing. We're actually just going to warm it up before he even executes it." So that's something that we've been rolling with for a while. But even then ... And then just trying to make progress on the underlying technology, the underlying platform. There's a lot of components to building a multi-tenant secured service that all add a little bit of additional latency.

So, something we're aware of. And then I guess the second part of that question is, we do have some options to fully mitigate it or partially mitigate it. The one is the faithful pinger. We have folks — I mentioned you can create this Function App concept, and you can have multiple functions in there. One thing that even I have done, and I would say don't quote me on this, but I'm on a podcast now, my name's right there: you can create another function in that same app that triggers on a timer. So a timer is a first-class concept in Functions. Just have that thing trigger once every 10 minutes, and your whole app is going to get poked every 10 minutes by us. You don't even have to poke it. We'll poke it ourselves on that interval and keep it warm.
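
A minimal sketch of that keep-warm pinger — the schedule is an NCRONTAB expression meaning "every 10 minutes":

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "direction": "in",
      "name": "mytimer",
      "schedule": "0 */10 * * * *"
    }
  ]
}
```

```python
import logging
import azure.functions as func

def main(mytimer: func.TimerRequest) -> None:
    # The execution itself is the point: it keeps the whole function app warm.
    logging.info("Keep-warm ping")
```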

Jeff: And then the final option — and I save this for last for a reason, because there are cost implications too — is you can deploy your function in the Premium SKU, which lets you preallocate and prewarm, where we will keep it warm, not just by poking it. We'll actually just keep the process running 24/7. But you're paying now for that consistent compute across that time.

Jeremy: Right. Well, I'll tell you one thing. If anything came out of this conversation, it's that I am now going to code-name cold starts "Bowser." That's what I'm going to ... I'm going to use that now. And if anybody asks me, I'm just going to say Bowser. Awesome. All right. So what are some of the other serverless services that are in Azure? Because a big part of it goes way beyond FaaS: this idea of managed services, whether it's databases or Blob Storage or things like that. There's such a blurry line now in some cases. Some things are sort of serverless. People put serverless on it so that it sounds good, I guess. But what are some of the other major ones that are available in Azure that are, again, first-class citizens, I guess, with Functions?

Jeff: Yeah, and you alluded to it at the beginning too. There's almost — and I imagine your listeners would fall into this camp as well — the purist view of serverless, and we'll start with those services. And then, if you went to the azure.com/serverless marketing page, you'd see a lot of services that we could have a very good conversation on, like, "Well, how serverless are they really?" But in terms of the traditional definition of serverless — the only-pay-when-you-use-it, those things — the service that I see paired with Functions the most, and for good reason, is something called Azure Logic Apps, which you can think of in some ways like AWS Step Functions, if you're familiar with that. Same underlying concept, where there's a declarative workflow definition being created, and it's going to help go and orchestrate something for you. I think the first thing to check out, if you haven't before, with Logic Apps is that it's got a visual designer that you can use in the portal or in Visual Studio Code. So that instead of crafting that JSON, which everyone loves writing JSON,

Jeremy: Yeah, we like it.

Jeff: Almost as much as they love writing YAML — instead of writing that JSON, you're just saying, "Do this, add a parallel step here, do that, and the other." But the other thing that a lot of folks find use from is, it's got all of these connectors. So there's like 300-plus — I think we might've just crossed 400 connectors — where maybe I'm calling function, function, function, and then I want to drop some data in Salesforce, or I want to update a Google Sheet. There are connectors for all of these different services that you pop into that workflow too. So Logic Apps is the one that there's a tight pairing with, especially when you want to integrate with other things or orchestrate stuff; that works out really nice.

Jeremy: And what about Cosmos DB and things like that? And then you have Azure ... I think it's just called Azure Blob Storage. Good naming there, because that's what it is.

Jeff: Yeah, when I was learning the cloud, though, I was so confused when I'd be like, "Get a blob." And I was like, "What is a blob?"

Jeremy: Right.

Jeff: It's just a file. It's just some random bit of binary data.

Jeremy: Right.

Jeff: Oh, yeah. Blob, I think stands for something I don't know.

Jeremy: Yeah, it probably does. Yeah.

Jeff: I really do think it's an acronym, which I should know. Yeah, so Cosmos DB, you can think of it ... Again, I know a lot of folks listening are familiar with AWS, so this is similar to DynamoDB. Obviously, there's going to be differences here and there, but there is a serverless SKU in that one, so that you pay for your reads and writes on demand, and you get some free tier. Azure Storage itself doesn't have a free tier; I would imagine similar to S3 is how you can think of that. Though it's just so inexpensive that you're paying some fraction of a penny here and there. Another one too, if I'm not forgetting ... So API Management, similar to API Gateway — that's got a consumption serverless SKU too. And then Azure Event Grid, which is similar to ... is it EventBridge?

Jeremy: EventBridge. Yes. Yeah.

Jeff: Is that AWS EventBridge?

Jeremy: Yeah. Not to be confused with Alibaba's EventBridge ...

Jeff: Oh, wow.

Jeremy: ... who has something that integrates with AWS EventBridge, which is very confusing. Yes. So Event Grid is the Azure one.

Jeff: Yes, that's right. Event Grid lets you do some pub/sub stuff, and it also pulls in events from other providers as well to trigger your serverless stuff. So those are the core traditional ones. I guess SQL also has a serverless SKU as well. So if you just want a SQL database, there's a flavor that will auto-scale to zero, and you only pay per transaction.

Jeremy: Awesome. And they all have ... They're all integrated with Azure Functions? There are bindings and triggers for all of them, and things like that?

Jeff: Yeah. Yep. Exactly. So hopefully, it's not too hard to use them together in building a solution. Between all of those building blocks, you can put together some pretty impressive things that if they're not being called don't charge any money.

Jeremy: Awesome. Cool. All right. So I want to move on to composition. So, function composition. This is something that ... This is maybe whatever the level is before the final boss. Because there are a lot of attempted solutions to this. And by "function composition," I mean this idea of breaking individual functions into very discrete pieces of logic. So maybe I have one piece of logic that just calls an SDK or an API somewhere and downloads that data. Maybe I have one that just does some encryption algorithm for me. Maybe I have one that pulls data from another data store, or writes data to a queue or something like that, if I wasn't using bindings.

So I might have all these different functions that do very specific pieces of business logic for me. And I want to compose them together because I want to reuse them. And Step Functions, which you mentioned — that's the AWS concept for this — they just recently released Synchronous Express Workflows, so you can actually glue them all together in a synchronous pattern and have it return data immediately, which is cool. Of course, there's a cold start issue and some of those other things that come into play there. But what are some of the options in Azure? Because you mentioned Logic Apps, which is a really cool feature, and it's almost like a no-code or very, very low-code solution to that. But what else do you have? Because you've experimented and have some other products that do this composition.

Jeff: Yeah, and if there's one really interesting piece of tech that's being cooked up in Azure land that I think all folks should just look at and pay attention to — because I do think these types of tools are going to spread beyond — it's Durable Functions, or stateful functions. And there's even some open-source tech too that plays very similar roles. In fact, some of the people who built the tech for Durable Functions are building things like Temporal Workflow or Cadence Workflow. But what this lets you do — it's almost bizarre how it does it. And so this might be something that you want to go look up.

You can write ... So to your point, Jeremy, let's say I've got my six different functions that are all doing their own thing. If I'm doing an order-processing pipeline, one of them is get product details. The other one is create a shipping label. The other one is charge the customer, blah, blah, blah — all these individual units, and I want to compose them together. I can create this special type of function called a Durable Function, where I write, using code, the process that I want.

So I could be writing in JavaScript code, in Python code, in .NET code and say, "Okay, first thing, call the function that gets the details. Once that's done, call this function." Just the same way I would almost call this API, then call that one. So I'm writing in code to call these different pieces. I can write loops. I can tell it in code to do the loop in parallel. I can tell it to do the loop sequentially. And so I can orchestrate these processes that can run for weeks at a time, for months at a time — maybe they only take 15 seconds — and it will go ahead and compose it for you.

And behind the scenes, what it's doing is more or less the same thing that you'd be doing by hand if you weren't using a Durable Function. It's storing things in storage. It's storing things in queues. It's queuing these things up. So it's not ... We're not letting you create a function that can actually run for 45 minutes, or that actually waits there and double-charges you for those calls. It's just this special function that's doing all of this state management for you behind the scenes to let you actually compose this in code. So you end up having something similar to a Logic App or a Step Function, but in this case, it's written entirely in code. The same qualities, though, of a function: it only charges you when a step's actually executing.
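
A sketch of what such an orchestrator can look like in Python, using Jeff's order-processing example (activity names are illustrative):

```python
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    order = context.get_input()

    # Each yield checkpoints the orchestration to storage; nothing billable
    # runs in the orchestrator while an activity executes.
    details = yield context.call_activity("GetProductDetails", order)
    label = yield context.call_activity("CreateShippingLabel", details)
    receipt = yield context.call_activity("ChargeCustomer", details)

    # A parallel fan-out would use: yield context.task_all([task1, task2, ...])
    return {"label": label, "receipt": receipt}

main = df.Orchestrator.create(orchestrator_function)
```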

So for example, one of the ways I use Durable Functions is to manage my resources. I spin up new stuff all the time. I'm like, "Oh, a new product got announced. I'm going to go give this a ride." I'm not as good at deleting them after the fact, though.

Jeremy: Right. Yeah. Deleting them.

Jeff: Sometimes I get a nasty scare where I'm like, "Oh yeah, I was trying out this new database thing, and now I have this bill." So I have spun up a durable function in my subscription where whenever I create a resource, it triggers this durable function, and it sets a timer on itself. And it's like, "Wait for a day. And after a day, send me a text message and see if I want to extend it longer. But if not, just delete the resource for me automatically."

Jeremy: Right.

Jeff: And so in theory, this is a function that is running for a day or longer, but I only pay for the few seconds at the beginning where it triggers and sets the timer. And then I pay for a few seconds a day later when it wakes itself back up and sends me that text message. So you can do these really interesting patterns where you're composing, or managing state, or doing things more long-running with this Durable Functions product. Also, you can see all the code on GitHub too. So again, it's very interesting — a little bit complicated when you try to figure out what's actually happening there, but worth keeping an eye on.
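
A hedged sketch of that cleanup pattern — the activity names and timings are illustrative:

```python
from datetime import timedelta
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    resource_id = context.get_input()

    # Durable timer: the orchestration is dehydrated for a day and
    # costs nothing while it waits.
    yield context.create_timer(context.current_utc_datetime + timedelta(days=1))

    # Hypothetical activity that sends the "keep this resource?" text.
    yield context.call_activity("SendExtensionText", resource_id)

    # Race a reply against a timeout; delete the resource if nobody answers.
    reply = context.wait_for_external_event("KeepResource")
    timeout = context.create_timer(context.current_utc_datetime + timedelta(hours=4))
    winner = yield context.task_any([reply, timeout])
    if winner == timeout:
        yield context.call_activity("DeleteResource", resource_id)

main = df.Orchestrator.create(orchestrator_function)
```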

Jeremy: Awesome. And so can you do synchronous and asynchronous with those?

Jeff: Yes. So same idea — you alluded to it with Step Functions; I imagine, I don't know the ins and outs. You can say, "Hey, start this orchestration," and then synchronously return back to me a response 5, 10, 20 seconds later. Or the default behavior is async, where you kick this thing off and it's going to immediately return back to you, like, "Hey, we started your thing. Check this endpoint for the status." And then you would just poll that endpoint. So you can control: do you want it to hang out and wait, and then send you back the eventual response, or just go run its thing off in the background?

Jeremy: Now, when you're doing the Durable Functions, are you calling other functions that you've already written? And so then do you have control over resource management of those individual functions? Or how does that work?

Jeff: Yeah, so you can call any function anywhere in your subscription as long as it's got an HTTP endpoint. So if it's an HTTP trigger, it's just: call this HTTP function. Or you can write a special type of function — we have a trigger called an Activity Trigger — which are functions that are intended to only be called from a durable function. And so you might have functions that are like, "This is always going to be called from a durable function." There's no HTTP endpoint. It's not listening to its own queue. It's just called via an activity trigger, and you trigger them off too. And it's almost the same way as ... you don't have any more control over it necessarily than if you were just composing things yourself. We'll just, hopefully, make it easier to reuse them across your account.

Jeremy: Right. So is this something that's a replacement for Logic Apps, or is there ... How would you choose between the two?

Jeff: The guidance that I give the most, and that I see most people falling ... not for — I'm not trying to trick them, I really want people to be successful — but falling into, is personal preference. It really becomes a ... There's almost ... We joked about this when we were talking before this podcast. It's like when you think about, in AWS, writing CloudFormation templates or writing CDK stuff.

Jeremy: Right.

Jeff: And there are strong opinions on both sides, like, "Hey, the eventual YAML is the best thing," and, "No, it's so much more convenient to express things in code." It's a similar type of ... It's like, do you want to describe your composability through code, through JavaScript code? Or do you really want to do it in some declarative, workflow-y state machine language thing? Whichever one of those you want, you can do very similar things across both. There are differences. Logic Apps has all those connectors, so if you want to use one of those connectors, that might tip the scales to Logic Apps. But in general, it's personal preference, is what I end up telling folks.

Jeremy: Right. And are there limitations to this? If I use Logic Apps, would I run into some limitations, maybe for latency, or maybe what I can do — or the same thing with Durable Functions? Or is it something also where I'd be better off, maybe if I only had two or three steps, just composing those all into a single function rather than adding that extra latency from a Durable Function or Logic App?

Jeff: Yeah, there's a few here that pop to the top of mind. Logic Apps ... The pricing is a little bit different. Dollar for dollar, I would think Logic Apps would be a little bit more expensive than Durable Functions. So if you're optimizing for price, the Durable Functions one will be the one you want to go with. In terms of scale, they're pretty related. The bottleneck that folks end up running into with Durable Functions — and this is probably deeper down the line; I imagine most folks who are listening to this are just getting introduced to it — but I mentioned behind the scenes, it's storing a bunch of state and pulling state for you.

Jeremy: Yeah.

Jeff: You give it details like, this is my storage account, and this is my queue that I want you to use, and we'll go use that to store the state. If you end up doing really high volumes, like thousands and thousands of these durable orchestrations a second, the underlying storage account will actually start to be like, you're reading and writing a whole lot of stuff — you only have certain limits there. So you end up having to either move to a Premium SKU or having to rearchitect in a way where you can shard a little bit better.

And Logic Apps has some stuff because it's this managed workflow service. I think you would actually get higher, in theory, scale numbers in a Logic App than a Durable Function. But again, the real bottleneck's going to be that underlying state and that underlying storage account. So a few things to consider there as well. Latency, I think, is pretty similar between the two, in that both of them are writing data between steps. So there's a few milliseconds between each of those functions as it coordinates itself and stores state. I think they're going to be pretty comparable, though.

Jeremy: Right. And I was actually going to ask you about the billing for that, because I know some of the other services will charge you — some of the other clouds will charge you for every step you take, and then you get charged for the execution of the function or whatever the service is that it's executing. So how does the billing compare? You said that Logic Apps were maybe a little bit more expensive. That might be because they bill per step ...

Jeff: Exactly. Yeah.

Jeremy: ... whereas Durable Functions just bill execution time?

Jeff: Yeah, exactly. Logic Apps is a per-action charge. So every step is a charge. Durable Functions is charging you for the gigabyte-seconds that you use when a process is actually happening. So the thing to note here is that the Durable Function is just in charge of coordinating what step to do next. So during that period of time where it's deciding what's the next step I do, you're paying for the gigabyte-seconds. When another step is actually running, or if you've added something like a delay, there are no gigabyte-seconds running at all. So it's a little bit closer to serverless pricing, or I guess, Azure Functions' pricing, or AWS Lambda pricing. And then the only other thing is, that underlying storage account — that queue, or that blob store that you've connected to it that it's using to store and retrieve state — there's going to be some cost associated with that as well.

Jeremy: Right. And the billing increment. Is it still 100 milliseconds?

Jeff: So, yeah, it's a minimum of 100. And then above that, it's per millisecond.

Jeremy: Oh, right.

Jeff: So the lowest that you'll pay for an execution is 100 milliseconds. If it only lasted 50 milliseconds, you'd be charged for 100. But everything on top of 100 is by the millisecond. So you might be charged for 104 milliseconds, but you would never be charged for 79 milliseconds.
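
In other words, billed duration is just the measured duration clamped to a 100-millisecond floor:

```python
def billed_ms(duration_ms: int) -> int:
    # Minimum charge is 100 ms; above that, billing is per millisecond.
    return max(duration_ms, 100)

assert billed_ms(50) == 100   # a 50 ms execution bills as 100 ms
assert billed_ms(104) == 104  # a 104 ms execution bills as exactly 104 ms
```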

Jeremy: Awesome. Cool. All right. So let's move on to the operational aspect of this, because that's another thing I think that hangs people up just when they're getting started with serverless. They're like, "Wait a minute. Where do I FTP my code to? Or where do I put my container?" It's a different mindset, I think, when it comes to writing and packaging serverless applications. So you have a bunch of really cool tools. I think the biggest thing you've got going for you is, you created VS Code, right? So it's one of the most ...

Jeff: Not me personally, but yes.

Jeremy: Not you personally. Yes, I know, but ...

Jeff: Folks I worked closely with.

Jeremy: Right. So, you own a majority of the ecosystem in a sense, in terms of the IDEs that are out there. And I love VS Code. And I honestly fought it for a couple of years because I was just on Sublime. I used to use Sublime, and it was like, I don't know. And then I switched from Sublime to Atom. And then I was using Atom, and I was like, "Atom is great." And then the next thing, people are like, "Oh, VS Code, VS Code." I finally used it, and I'm like, "Why did I ever use anything else?" So tell your colleagues there, great job on that, because I do enjoy VS Code. But you've built a bunch of things into VS Code, and of course, there are extensions, and other people have done this too. AWS has some extensions as well.

Jeff: Yep.

Jeremy: But just how easy is it to write a serverless application in Azure now?

Jeff: Yeah, this is one area where I feel ... similar to Durable Functions, this is actually a pretty slick story, comparatively speaking. Yeah, we care a lot about things like Visual Studio Code, and Visual Studio for our friends in .NET land. Obviously, we want to make sure we support you whether you're using PyCharm or Sublime or whatever else, but that's where we're focusing a lot of our optimal experiences. So to your point, there are extensions for pretty much every cloud provider in VS Code now, which is one of the reasons it's been a runaway success. Whether you're developing on Lambda or you're developing on Azure Functions, VS Code can still provide value. That said, though, for the getting-started experience ...

So yeah, you pop in the Azure Functions extension, and it's going to help you create that first project with a set of templates. But the thing here that folks might notice when they're coming from other providers is that once I get that first thing set up and I have my project, you push F5, and you'll see a bunch of crazy stuff happening in the console, where the Functions runtime will spin up on your machine and give you this really light debugging experience. And I don't mean light in that it's a subset of the features. I mean light in that it's just a little CLI tool that powers this debug experience. You're not dealing with Docker containers. There's also no emulation involved here. So one of the things with Azure Functions is our runtime — the thing that actually triggers your code, that runs all those triggers and bindings — it's open source on GitHub. It's cross-platform, it runs on every OS, it can run in a container.

And so that local debug experience is actually powered by the same runtime that we're using in the service to trigger your code. And so it's whatever, a couple of megabytes and installs through the VS Code extension. And now you're just debugging. You go drop a message in that storage account, and you'll see your trigger there on your VS Code box execute, and it will pass in the data and it will hit your breakpoint. And so that really tight development loop that feels a little bit closer to developing an Express app or a Console app or whatever else you might be doing, is something that a lot of folks enjoy about that functions development experience.

Jeremy: Right. And then right from there, you can just publish it, right, into production?

Jeff: Yeah, publish it or stick it in a container and publish it, pop it up there and then we'll just run the same code in the cloud for you and do the serverless stuff.

Jeremy: So what about the CI/CD process for that? Because one of the things — I think you and I have talked about this before — is infrastructure as code for Azure. There is a solution for that, but when you're building a serverless function, would you use a different way to do that?

Jeff: So yeah, you could. The infrastructure as code — the final step, the last thing — is Azure Resource Manager, or ARM, which you can think of like CloudFormation. It's pretty — and I don't know how unique this is to Azure or not — it's pretty not pleasant to look at.

Jeremy: Well, Azure is not alone, because other infrastructure as code management systems are just the same, right?

Jeff: So most of the tooling — in fact, all the tooling — will generate those things for you. And then you can use them to help deploy your systems for CI/CD. Still something good to know about and realize that it's there. Though often folks are either using our own tooling in VS Code or things like the Serverless Framework, so that you can do this with the Serverless Framework and have a different abstraction.

In terms of CI/CD, though, one of the ways that I've started to do this recently — and I know it's not fully deploying the function and everything too, but at least in the CI/CD process — there's this new experience we just released a few weeks ago where, let's say I go through that local experience, and instead of publishing to Azure, I check it into Git, or GitHub specifically. So I just check my code in to GitHub, either a private or a public repo. I can then just pop over into the Azure portal and say, "Hey, go to this GitHub repo. This is actually the source that I want for my function. And go connect it to a function that I actually want running in dev or prod or whatever else."

And it will go and create GitHub Actions for you — set up a CI/CD pipeline — so that now, rather than manually doing the publish, the build, the test steps, I just check code in, open a pull request, and merge it into the main branch. And it's going to actually go and build and deploy and publish my function app using GitHub Actions. But you didn't have to go figure out, how do I create a CI/CD pipeline in GitHub Actions? We'll just go wire one up for you that's got all those best practices baked in. So: ARM templates, good to be aware of; third-party services like the Serverless Framework or Terraform that can make your life easier; and then finally, things like GitHub Actions, which we see a lot of people moving towards now as well.
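
The generated pipeline ends up looking roughly like this hedged sketch — the app name, Python version, and secret name are placeholders, and the workflow Azure wires up for you may differ:

```yaml
name: Deploy function app

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - uses: actions/setup-python@v2
        with:
          python-version: "3.8"

      - name: Resolve dependencies
        run: pip install -r requirements.txt --target=".python_packages/lib/site-packages"

      - name: Deploy to Azure Functions
        uses: Azure/functions-action@v1
        with:
          app-name: my-function-app   # placeholder
          package: .
          publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
```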

Jeremy: Right. Yeah, and I know the Serverless Framework worked very closely with a team from Azure to build in a lot of that functionality. So that's definitely a cool way to do it. And also another smart strategic move from Microsoft probably was buying GitHub.

Jeff: Sure.

Jeremy: There's a lot of tooling that can be done with GitHub Actions, which are really cool. And actually I've been meaning to dive into those more, but that's awesome. So, all right. So then in terms of the hosting options, you had mentioned earlier, I guess the professional level or the professional tier there. So what are the hosting options? Because I know you have the on-demand, but then you mentioned the professional tier. What does that mean? And what are some of the other options?

Jeff: And I'm realizing that we should have called it the professional tier. We should have Azure Functions Home and Azure Functions Professional.

Jeremy: Oh, is it premium.

Jeff: It's premium. Yes.

Jeremy: Oh, it's premium. Sorry.

Jeff: Professional sounds so much better. No, I was like, "Oh yeah, Microsoft, we love having professional SKUs." We really should have done that.

Jeremy: Right. Right.

Jeff: Yeah, you're right. So the SKU that the majority of our users are on — and for good reason; it's the one most folks who go build functions should be on — is what's called Consumption, or Consumption/Serverless.

Jeremy: Yeah.

Jeff: And then, yes, the next click up is this Premium tier, which, as the name would entail, comes with a few premium features, including no cold start and functions that can execute indefinitely — there's no hard cap on how long they can execute — and you get some beefier hardware. But it also costs more because it's a premium offering. So that's why a lot of folks choose that Consumption one.

There's a few other nuanced hosting options, which I'll just mention and am happy to go into. One is that there's another service in Azure called App Service Web Apps, or Web Apps, which has been around for a long time. It's for hosting websites, similar to Elastic Beanstalk in AWS or App Engine in Google Cloud. We actually have a hosting option where — we've got a lot of folks who are using functions alongside their websites to do background processing — you can actually choose to deploy your function into one of those plans for your website. And it will run in the spare compute of your web server. So in essence, it's free; your functions are free. You're still paying for your website, though. So you're paying for your website thing, and then we just run your functions in the cracks of that. So that's called hosting it in an App Service plan, if you want to cut down costs even more, or just keep it warm alongside your web code.

And then finally — I mentioned the Azure Functions runtime is open source; you could stick it in a container — we've got a few folks who, and it's an interesting reason why, and this is an evolving story, but a few folks who grab that function, stick it in a container, and actually go and deploy it into a Kubernetes cluster. So we've got a few folks, not very many. I think in terms of distribution, it's a ton of people in Consumption, a few people in Premium, a little bit of people running in an App Service plan, and then a small fraction of folks who are like, "I'm actually going to go run this in Kubernetes." I think that captures the big spectrum.

Jeremy: Yeah, and actually that App Service plan you mentioned is fascinating, because one of the things that ... I had a long conversation with Paul Johnston actually about the greenness of serverless — running fewer servers because you don't have to power as many machines. And that idea of running things in the cracks of other people's, or of your own, servers, I guess, is really interesting. And it reminds me of spot pricing, or spot instances that you can do on AWS. But I'm going to give you a free idea here. You should let people who are running their own web apps rent out their free compute to other people who want ...

Jeff: Oh, sure.

Jeremy: ... right. So then you could potentially lower your costs for hosting your own app.

Jeff: Wow, that's cool. Yeah, it's almost like what my internet provider tries to do, without me getting money from it, where they'll let other people ...

Jeremy: Right, exactly.

Jeff: Yeah, no. There's another element to that we've talked about as a team a few times, where it's like, I would love a world where, whatever compute I'm using, whatever I'm paying for, it would just be like: go ... If I have a function that needs to execute, go see if there's a crack in my database compute, in my web server compute. Maybe even at one point there was some experimentation of, what if I actually had machines in an office, and I could just say, "Go run it on some compute somewhere." But you're right. That's an angle I hadn't thought about, which is super interesting, which is, maybe I'm running a web app and I'm like, "Look, I'm at 80% CPU all the time. I'm fine to rent out 10% of my cores for someone." And then they run their function, they pay for that thing, and I get a little chunk of that back to bring down the cost of even the 80% that I am using. That's a pretty slick idea.

Jeremy: I don't know, it just came to me. So #AzureWishlist, I guess. All right. So are there any limitations though that people need to think about when you're building these apps and deploying them? Again, you mentioned different ways to package those, and things like that, but are you limited in any sense in terms of what you can do? Can you deploy other resources as part of these apps? Or where do you bump up against the rough edges there?

Jeff: A few ... There are always limitations. I'm just thinking of the ones that I think are the big gotchas for folks. So, one to be aware of is — I mentioned at the beginning, we have this Function App concept where you can create multiple functions as part of the same deployment package. So think about it: you have a customers API; you might have "Get Customers," "Add Customer," and "Delete Customer" that you all deploy in the same app, and it's like, "Oh yeah, that makes sense."

However, where people get into some trouble is, they'll be like, "Oh, I'm going to have 50 functions in the same app. And I'm going to have my customer API. And I'm also going to have my sales API. And I'm also going to have my, whatever orders API, all in the same app." The challenge with that is, behind the scenes, your function app is the unit of deployment. So if you version one thing in your app, you version the whole app. But it's also the unit of scale, which means if something needs to scale or something needs more resources, we do it as one big chunk for your whole app.

So I always tell folks, if you're worried about it, just do one function per app. Then you're writing in a very similar way to AWS Lambda, where every Lambda is its own thing. You can do that, that's fine. You're managing more resources, you're managing more deployments. But you know this, Jeremy — you can survive in that world. It's not the end of the world. If you do want to add a few more things, keep it to three to six. Don't start going above 10. It just becomes a little bit unwieldy, and you'll start to see things scale a little bit more slowly, I guess in general, because they're scaling in these big blocks — bigger deployment package, all those things.

The run duration is another one, especially if you're in that Consumption tier: 10 minutes is the max for a function unless you're in that Premium tier. So be aware of that limitation. Yeah, just other ... If you're interested in scenarios, I think a lot of these span serverless providers, but things like machine learning, where you're trying to do really heavy computation in especially one execution — you want to do multithreading, a lot of stuff — you're not going to have a whole lot of success. We don't give you heavy compute power per execution. Really, the thing with serverless is we want to scale you out to as many instances as we can.

And so unless you can break it into thousands of executions, don't try to do heavy compute in one execution, even if you're on the Premium tier. I just don't think it ... Not that that's a bad pattern in general — I don't know enough about machine learning to tell you if that's an anti-pattern or not — but you might want to go look at some other options to find a product that's going to give you a better experience if those are the types of workloads you're trying to run. Those are a few that come up. I'm sure there's probably more.

Jeremy: Yeah, no, and I like the idea too of fine-grained control over each component of my application. So, I want to be able to scale this more than I want to scale something else. I want to be able to control my memory, and some of those other things. So yes, that makes a lot of sense, breaking them up. Also this idea, I guess, from a deployment standpoint, of things like canary deployments, or just moving things through stages and stuff like that — I know there's something called Deployment Slots, I think they're called ... What are those all about?

Jeff: Yeah, so every function app — you can create at least one slot for it if you want. It's most valuable for HTTP traffic. And the way that it would work is, imagine that your function is scaled out across whatever, hundreds of cores; it's got an HTTP endpoint directly. You don't have to use API Management or API Gateway with Functions — we also just expose an HTTP endpoint as part of the function.

So let's say you're getting all these HTTP requests that are hitting this thing, and it's scaled out, and you want to version it, as you mentioned. You could deploy to the slot. And so this thing is still processing like crazy — your production function's processing like crazy. You deploy to the slot, you can test it out there. It's got its own endpoint, at like slash-slot or something; make sure things are working. And then the real value comes when you're ready to swap it into production. Because without a slot, what ends up happening is you redeploy on top of the production bits, and then it has to restart, and you'll see a little bit of jitter on your execution count as it updates the code. With a slot, we just gradually start to route all of your incoming traffic to the slot.

And so there's this graceful handoff, where first it's 10% of your traffic, then it's 20%. We'll just automatically do that balancing for you until eventually your slot becomes the thing in production, and the thing that was in production is now in your slot. And if something goes terribly wrong and you're like, "Oh shoot, I didn't actually test this," you just swap them back. And you can swap them back and forth, and just keep deploying your bits into the staging slot and swap them when you're ready to go.
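
With the Azure CLI, the slot workflow Jeff describes looks roughly like this (resource names are placeholders):

```bash
# Create a staging slot, deploy into it, then swap it with production.
az functionapp deployment slot create \
    --name my-function-app --resource-group my-rg --slot staging

az functionapp deployment source config-zip \
    --name my-function-app --resource-group my-rg --slot staging --src build.zip

az functionapp deployment slot swap \
    --name my-function-app --resource-group my-rg --slot staging
```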

Jeremy: Oh, that's awesome. Actually, another thing I noticed too: when you go to the dashboard and you're watching your functions, you're watching traffic coming to your functions, it actually tells you what server it's running on. And you can actually see that. It's just a level of insight you probably don't need, but I just appreciated it when I saw it.

Jeff: There is ... Yes, you are right. In the monitoring story, by default, we pair with this offering, this monitoring solution, called App Insights. You can think of it — again, if you're in AWS land — like X-Ray and those types of experiences; App Insights does that. As part of App Insights, there's this feature called Live View, and you can see what instances you're running on, how many real-time executions are coming in. It just gives this real-time glimpse at your app.

The one reason, though, where in Azure land that actually comes in with a bit of value is — and I'm a little bit late to call this one out; I failed to mention this at the beginning — one of the differences in just how Azure Functions works is that we don't, by default, do a single concurrency per instance. So in AWS Lambda, you have one execution happening on one instance at a time, and that execution has full access to the memory and cores that are available. Functions will deploy you on a server behind the scenes; we'll route multiple triggers to the same instance as long as we can see that it's healthy, and then you can actually grow into memory. And we will only charge you for the memory that you actually consume. So if you only consume 128 megs, we'll charge you for 128 megs. If you have a few more executions, or your function app gets more complicated and you start consuming 512, we'll charge you for 512.

But that does mean, there are pros and cons to that approach around concurrency and pooling or whatever else, that sometimes when you're debugging, that Live View comes in handy, because then you can actually see like, "Oh yeah, behind the scenes in the Azure data center, I'm on 100 servers, and this is how many requests are being routed to each one of those servers. And maybe the reason this instance had a hard time is that I've got too many functions in that app, and there's just too much stuff running there." So for concepts like that, I want to make sure we're improving the system so people don't have to think about it, but it doesn't hurt to be aware of it.
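One practical consequence of multiple invocations sharing an instance: module-level state persists across them, which is handy for caching. A minimal sketch, assuming the Python programming model where the trigger is declared in the function's configuration; the cached values here are hypothetical stand-ins.

```python
import azure.functions as func

# Module-level state is shared by every invocation that lands on this
# instance, so an expensive lookup can be cached once per instance
# rather than once per invocation.
_cache = {}

def main(req: func.HttpRequest) -> func.HttpResponse:
    key = req.params.get("key", "default")
    if key not in _cache:
        # Stand-in for real work (a DB call, loading a model, etc.).
        _cache[key] = f"computed-value-for-{key}"
    return func.HttpResponse(_cache[key])
```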

Jeremy: Yeah. It's super interesting, because I've watched a video of one of your demos where you show it scaling. You're just running Artillery against it or whatever, and you see it scaling and all these instances being added. And actually, maybe that's a good question, because I know there are limitations in other clouds. How fast does it scale? Once you're provisioned on a single server, does it scale rapidly, or is there a ramp-up period that it has to abide by?

Jeff: There's some ramp-up period, but hopefully it's aggressive enough that it works for folks. In general, we've got metrics from, especially, our larger customers who want to do 300,000 requests per second because they're doing some big promotion, and we're like, "Okay, we've got to make sure we can do this." Keep in mind that an instance in Azure Functions is different than a single instance in AWS Lambda. But we will add, at most, one instance every second. So every second we will add a new, full-powered server behind the scenes. Now, each one of those instances might handle 100 requests. But it does mean that when you run a load test, you often see a bit of a spike for those first few seconds where latency's a little higher.

If I go throw 10,000 requests at it all at once, for the first 15, 20 seconds you're going to get a little higher latency while we hurry and add and add and add those servers. Then usually after that, you'll see it level off. So adding a server at most every second is the number to be aware of, but in general, because concurrency can be different, there's not a whole lot you can design around it. It is something to take note of, though: if you're worried about load, run a load test, and you'll see that hill and valley I talked about.
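A rough way to see that hill and valley for yourself: throw a burst of concurrent requests at a test endpoint and look at the latency distribution. A minimal sketch with a hypothetical URL; point it at a test app, not production.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://myapp.azurewebsites.net/api/hello"  # hypothetical test endpoint

def timed_call(_):
    # Time one request end to end.
    start = time.monotonic()
    requests.get(URL, timeout=30)
    return time.monotonic() - start

# Fire 2,000 requests with 200 concurrent workers to force a scale-out.
with ThreadPoolExecutor(max_workers=200) as pool:
    latencies = list(pool.map(timed_call, range(2000)))

# Early requests should show the "hill" while instances are being added.
print(f"p50: {statistics.median(latencies):.3f}s")
print(f"p95: {statistics.quantiles(latencies, n=100)[94]:.3f}s")
```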

Jeremy: Right. And then you also have the problem of downstream resources that maybe aren't as serverless as Azure Functions, and having to deal with that as well, which is where queues and some of those other things certainly come into play.

Jeff: Yes. That tends to be the problem more often than not. I've been on more than a few engagements with customers where they're like, "We really want functions to scale like this." We validate, we make sure they can, and then they come to us a week later and they're like, "Okay, you were right, but it turns out our database behind the scenes did not handle this as well. So can I actually cap the scaling? I need to slow it down a little bit." And we're like, "Okay. Yeah, we'll work with you here too."

Jeremy: Can you do that though? Can you cap?

Jeff: We can.

Jeremy: You can do function concurrency type of thing?

Jeff: Yes. So you can set a maximum. You can't tell us to scale slower, but you can tell us where to stop, and we'll stop at that point for you.
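For reference, that cap is configuration rather than code. A sketch of the idea, shown as a Python dict purely for illustration; the setting name here is the app setting commonly associated with capping scale-out on the Consumption plan, but treat it as an assumption and check the current docs.

```python
# Hypothetical illustration: app settings are key/value configuration on
# the function app, not code you deploy. This one caps scale-out.
app_settings = {
    # Stop adding instances once the app reaches 10 of them.
    "WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT": "10",
}
```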

Jeremy: Perfect. All right. So the other thing you mentioned too is, you said there's an action versus a regular function or an ...

Jeff: Activity function?

Jeremy: Yeah, an activity function. I'm sorry. And so do all functions get an HTTP address? And then what's the ... what did you just call it? API Management was the name of the service there? So how does that all interrelate?

Jeff: Yeah, so not every function gets an address. Only functions where you say, "I want this to be triggered through HTTP, or through a webhook," will get an address.

Jeremy: Okay.

Jeff: And the other ones won't. So an activity trigger doesn't, and an Event Hub trigger, which is like Kinesis, doesn't. But if you say this is intended to be HTTP triggered, it will get an endpoint. And as I mentioned, you could just use that endpoint: it's HTTPS, it's got a certificate, it's free. However, a lot of folks will actually go and add a layer on top of it, like you mentioned, API Management. They'll do that for a number of reasons. One is that you might have 20 different functions, and having a single API Management layer where you can do authentication, monitoring, all those things in a single spot is great.

The other pattern this is helpful with, though, is that if you use something like API Management, you can swap the implementation details of these APIs out really easily. So maybe you start with, "Hey, we have a big Python Flask app that's hosting our APIs." Maybe I start by hosting that in the web server offering I was telling you about. You front it with API Management and expose all those APIs. And then, little by little, you can actually break those APIs off and turn them into functions.

Jeremy: Yeah, you'll start with that one. Yeah.

Jeff: And your users still call the same API. They're still using the same auth. It just so happens that, behind the scenes, you're becoming a little more efficient and you're moving to serverless, but you have this layer of separation between your implementation and the actual surface of your API.
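For a concrete picture of the endpoint side of this, here's a minimal HTTP-triggered function sketch in the Python model, where the HTTP trigger itself is declared in the function's configuration. The route and response are hypothetical.

```python
import azure.functions as func

# Body of an HTTP-triggered function. Because it's declared with an HTTP
# trigger, it gets its own HTTPS endpoint; activity or Event Hub triggered
# functions would not.
def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```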

Jeremy: Yeah, no, that strangler pattern, it's like I tell everybody: don't try to shift everything over to serverless at once, because it'll take you too long and you'll never do it. So just start breaking off the pieces that you can, and having something like, again, API Gateway, or API Management in Azure, with the ability to just pick off those routes and send them to different places, I think is super important. But that actually brings up, I think, another topic that I'm curious to get your input on, which is this idea of hybrid applications in general.

In a perfect world, maybe my perfect world, maybe not everybody's, things would be running almost entirely serverless. Meaning you weren't babysitting application servers, you weren't worrying about databases, you weren't trying to provision more things so that it all scales. But there are a lot of enterprises and businesses, and people who maybe don't even want to move fully to serverless, that are going to continue to run these hybrid apps. Whether they're running a Kubernetes cluster for some container stuff, they've got a bunch of legacy servers running, or they've got SQL Server 2000 still running somewhere in their data center. What's the approach to hybrid at Azure?

Jeff: Yeah, this is ... I would be interested to get your thoughts on this one too. I'll start by saying that for the majority of folks and the majority of organizations at this current time, I don't know if you have a good enough excuse to do anything hybrid, and I'll finish the answer with the one rub, which is where things might be headed. If you're working at a small or mid-sized company, not that you don't have great excuses, and please tweet them at me, I'm fine to read them, you'll probably find more benefits by moving fully to the cloud.

There is still a subset of folks, though. And historically, this is an area Microsoft has really tried to invest heavily in with full on-prem stacks. I tend to sympathize with these folks a bit. We'll be meeting with a big financial institution and they're like, "We have legal requirements that all of the compute processing has to happen within this state or this country boundary, and you don't have a data center in this state or this country. So does that mean I can't use Azure Functions?" And we don't want to tell them no.

So there's a world where hybrid does still have merit, and there are folks who have huge footprints in their own data centers, where it's almost the strangler pattern the other way: they can't just shut those off all at once. That is where we're investing in tools to help make it easier. There are a few different things, like a service called Azure Arc, that let you manage resources that are on-prem through the cloud. Azure Functions plays a role in that too. I mentioned you can take functions and run them anywhere, on-prem or in the cloud.

So there are worlds where it happens. And if you're in a world where you're like, "I really want to use functions. All these things sound great. I like the programming model. I like how things become very purpose-built, but I might not be running in the fully managed service. Is there still value for me in functions if I'm not getting function pricing?" I think the answer is yes. And I think we have users who are in that mode today who are telling us, "No, there's still value in the serverless scale and the serverless resource utilization, even if it's using my own resources."

The only other aspect of this, where I'm interested to see where it goes, is IoT. Satya, our CEO, says, "Oftentimes with IoT, you want the compute to happen as close to the data as possible." I was in an engagement once with a sports stadium. They wanted to build a really smart stadium with thousands and thousands of sensors, and they wanted to be able to process millions of events every second in the stadium. And they're like, "Look, does it make sense for us to send all of these millions of events to the data center, pay for ingress and egress, have the processing happen there, and then send the data back, adding potential latency and cost? Or does it make sense for us to actually have those functions running in the cracks of all the compute we already have in the stadium?"

And that is where IoT might start to shift this as well, where maybe in some of these worlds it will make sense to have that function running closer to the actual data it's processing. It's early days in that. You can go look at a tutorial on how to run an Azure Function on an IoT device right now, but it's still super early, and I don't know where that's going to go.

Jeremy: Yeah. I think the programming model for serverless, or at least for functions as a service, is a useful model regardless of whether you're running in the cloud or not. So I totally agree with you. Even if you are running on-prem, your developers have that level of, I guess, abstraction, where they don't have to think about, I have to deploy something to this server, or I have to package a container and then put it into a pod, and do all that stuff. If it's just, I'm writing a function that needs to react to some piece of logic, I think that makes a lot of sense.

And I think your stadium example is another great one, which is also probably why compute at the edge makes a lot of sense. Maybe you don't need to send all that data back to Northern Virginia or Oregon or something like that; you can instead just send it to the local cell phone tower that can do some processing, and then make a decision in terms of what data has to be synced back to some home region or other data center, something like that. So yeah, I agree with you there. I think that is interesting, but I certainly would be against, maybe just from a conservation standpoint, setting up servers in your own data center just to run functions. I love that idea of running them in the cracks. That just seems to be a smarter move, I think.

Jeff: Yeah. Yeah. And I hope folks don't hear those kinds of scenarios and use them as an excuse. Be honest with yourself and look at those things. I'm not going to tell you everything belongs in the cloud, but by and large, to your point around efficiency, cost, even environmentalism, there are benefits to these economies of scale as well.

Jeremy: Yeah. And again, unless you are a massive sports stadium or a huge international bank, you probably can put your stuff in the cloud, and it's going to save you a lot of money and a lot of headache. I managed a data center for a while, many, many years ago, and it was, "No, thank you. I would never want to do that again." So, all right. Let me ask you this, because we're running out of time here, but I'd love to just get your thoughts. You've been on the serverless train from very, very early. I'm just curious, where are we going with this? I ask a lot of my guests this question: what's next for serverless, or what's the future of serverless? Where is serverless in five years? You can answer any one of those questions. But I'd love to get your thoughts on where this train is going.

Jeff: Branching off what we were just talking about, the first thing I feel more confident about moving forward, which was an open question I've had for a few years, is the value of functions as an application development pattern, regardless of cloud provider. Just the value of that concept of event-driven compute, highly abstracted, highly productive, I think it's going to become more mainstream than it is. I still cringe a little whenever serverless trends in some Hacker News post and I open the comments, knowing there are going to be a lot of people who are skeptical, like, "Oh no, this is the biggest vendor lock-in scam you've ever seen."

Jeremy: Haters gonna hate.

Jeff: Haters gonna hate. And I think the value of that model will just grow, even just the whole notion of, this is a useful pattern, in the same way you think about things like microservices or service-oriented architecture: "Oh yeah, this pattern has a lot of value." I think functions being an essential part of the application, not the whole application, but an essential part, is a big thing.

Jeff: I expect that over the next few years we're going to continue to see a lot of innovation around stateful functions. Traditionally, whenever you're doing a functions overview, it's like: functions are stateless, they're short-lived, and all those things. I mentioned how we're doing some stuff here with Durable Functions. Cloudflare just recently entered this space with this thing called Durable Objects, I think is what they call it. I don't expect that to be the last of them. There are some startups, like Temporal, that are making a lot of noise. And I would expect, I don't know, but I would expect that Amazon and Google will continue to innovate, either through workflows or other things, to make managing state a little easier with functions. The other one, and I don't actually know where this will go, but I'm trying to keep an eye on it, is this notion of, and I hesitate to use the word "containers," because containers has been inflated to mean too much.
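Before the conversation turns to containers, a quick sketch to ground the Durable Functions mention: an orchestrator coordinates activity functions and its state survives across replays. This assumes the Python durable library; the activity names and payload are hypothetical.

```python
import azure.durable_functions as df

# An orchestrator coordinates activity functions; the framework replays it
# to rebuild state, so the sequence below survives restarts. The activity
# names ("ChargeCard", "SendReceipt") are hypothetical.
def orchestrator_function(context: df.DurableOrchestrationContext):
    order = context.get_input()
    charge = yield context.call_activity("ChargeCard", order)
    receipt = yield context.call_activity("SendReceipt", charge)
    return receipt

main = df.Orchestrator.create(orchestrator_function)
```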

Jeremy: Right. Yeah. Very overloaded term.

Jeff: So that there's a notion of could I take an existing application and make no changes to have it conform to the runtime APIs of Azure Functions or at AWS Lambda? Can I take an application that certain however I want and have it run in a serverless way? So some something similar to what Google Cloud run offers. And again, I don't think it necessarily has to be married to containers. I don't think containers is the only way to make that happen. But just making things a little bit more flexible. Making it so that ... And I don't know, we talked too about, there's value in you breaking things into functional pieces. So it's this balance of, hosting a monolith as a function is not where I want things to go, but somewhere in that, I just think we're going to continue to evolve of, I don't know, maybe there's, I don't know, I don't know. I'm thinking out loud with you here, but I think there's stuff in that space of, flexibility for your deployment, but while still getting many of the benefits of serverless.

Jeremy: Yeah, I fully agree with that. I think the biggest change with serverless is the paradigm shift in how you build apps. There's so much momentum behind building with containers, and again, a super overloaded term, but just using Kubernetes or any sort of Docker container type thing and building apps that way, that seems to have a lot of gravity right now. So getting people to shift to the single-purpose function is probably not an easy thing to do. But yeah, that's an interesting thought. I've thought very much in that direction as well: how do you say to people, let me take your existing app and I'll make it serverless for you, without having to change the programming model you're used to?

Jeff: Yeah, and even when we're talking about composability, I still see a lot of architecture diagrams, which are well architected, that have so many different function dots all around the architecture. And one of the pain points I always hear back is, "Can you make it easier for me to deal with these tiny little pieces that are running everywhere? Yeah, I get the benefits, and I'm agile, but ..." I don't know if it's going to be the serverless application model, whether that's SAM or the Serverless Framework or whatever else, but there's still room we've got to go in this space, in serverless, to make that a little bit easier.

I think that's part of the appeal people have with Kubernetes sometimes: they're like, "I have a cluster, and I think about a cluster." With serverless, it's like, well, hey, good news, you don't have to think about a cluster. The bad news: now you're dealing with 50 different pieces, and you've got to version them all and deploy them all and monitor them all as a single app. So yeah, I'm curious to see what we end up doing in this space, and what different providers, whether they're cloud providers or startups or whatever else, do to try to make this a bit easier.

Jeremy: Yeah. Well, I appreciate what you're doing over there at Azure to try to solve those problems. And maybe one day we'll defeat Bowser and save the princess. Is that what it is? I'm sorry ...

Jeff: Look, I do think there will be a world, as the numbers go down and down, where technology gets to a spot where cold start doesn't come up anymore when you talk about serverless. I don't know when that's going to happen, but it will happen.

Jeremy: Awesome. All right. Well, we'll leave it there. Jeff, thank you so much for spending the morning with me here. If people want to find out more about you, or contact you, or find out more about Azure and what's happening over there, how do they do that?

Jeff: Yeah, Twitter is the place where I'm most active and checking things, so @jeffhollan, just my first and last name. I'm fine if folks want to email me as well: jeff.hollan@microsoft.com. Shoot me an email. I've got a blog where I'll post some stuff; I was just thinking this weekend that I haven't posted for quite a while, but I love blogging about things like, "Hey, here's how to do order processing," or "Here's how to do stream error handling," or whatever else. So hollan.io is my website. And for functions in general, if you're interested to learn more about Azure Functions, azure.com/functions. I think those are four places. Pick whichever you want if you want to have a conversation.

Jeremy: Awesome. Well, I will put all that in the show notes. Thanks again, Jeff.

Jeff: Thanks so much, Jeremy. Great chatting with you.

