
Serverless Chats

Episode #7: Serverless Laravel using Vapor with Taylor Otwell

About Taylor Otwell:

Taylor Otwell is the creator of the Laravel framework, Laravel Forge, Envoyer.io, and Laravel Vapor. Before building Laravel, Taylor was an enterprise .NET and COBOL developer. He now works on Laravel and its ecosystem of tools full-time.

Transcript:

Jeremy: Hi, everyone. I'm Jeremy Daly, and you're listening to Serverless Chats. This week, I'm chatting with Taylor Otwell. Hi, Taylor. Thanks for joining me.

Taylor: Thanks for having me.

Jeremy: So you are the creator of the Laravel Framework, which is a very popular framework for PHP. So why don't you tell the listeners a little bit about yourself and what Laravel is and what it does?

Taylor: Sure. So I started programming professionally in 2008 after I graduated college, and I was originally a .NET developer in the enterprise world and started tinkering around with PHP on the side in 2010, and sort of in the fall or winter of 2010, I wrote my own PHP framework, sort of inspired by a variety of things: inspired by Rails, inspired by my experience with ASP.NET MVC, inspired by Sinatra and Flask and all these other frameworks, and sort of put something out there in the PHP world that sort of riffed on all of those ideas and brought them together in sort of a really productive way, I thought. And so I put it out there in 2011 and you can think of it as sort of Ruby on Rails for PHP, mainly. So it has, you know, controllers and routes and a database ORM and queued jobs and all kinds of other stuff to let you build web applications in PHP in a very productive way.
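
To give a flavor of what that looks like in code, here is a minimal, illustrative sketch of a Laravel route file; the Episode model and SendTranscript job are hypothetical names used only for this example.

```php
<?php

use App\Jobs\SendTranscript;           // hypothetical queued job
use App\Models\Episode;                // hypothetical Eloquent model
use Illuminate\Support\Facades\Route;

// A route plus an Eloquent (ORM) query plus a view, in a few lines.
Route::get('/episodes', function () {
    return view('episodes.index', [
        'episodes' => Episode::latest()->take(10)->get(),
    ]);
});

// Dispatching work onto the queue is a one-liner.
Route::post('/episodes/{episode}/transcribe', function (Episode $episode) {
    SendTranscript::dispatch($episode);

    return redirect('/episodes');
});
```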

Jeremy: Awesome. So people are probably wondering why you're on a serverless podcast. But recently at Laracon, you just announced Laravel Vapor. So why don't you tell us about that?

Taylor: Yeah, so Vapor is something that I've been working on for about the last nine or 10 months, full-time, 40 hours a week. And it all started really over a year ago. I was just really inspired by sort of the serverless ecosystem, what people like Zeit were doing for JavaScript with their Zeit Now product and I really wanted something like that for PHP and something that could tell sort of the whole story for PHP, because there's a lot of moving parts that Laravel developers expect if they were to go on serverless like, you know, what do I do about my database migrations? What do I do about my queued jobs? And so I wanted to build a product around serverless that sort of made sense for Laravel developers and that they would understand, that would provide a really good experience for them.

Jeremy: So I want to jump into the details of Laravel Vapor. But let's start with some background on Laravel. So you said it was sort of this Ruby on Rails for PHP. So what types of applications do you see people building with Laravel now?

Taylor: Oh, gosh. I've seen everything from help desk applications to, you know, accounting applications. I've seen all kinds of back-office applications, intranet applications. Of course, I've written Forge, a server management platform, on Laravel. I have a zero-downtime deployment platform on Laravel. So I've really seen such a variety - hotel room management platforms - almost anything you can think of really, I've seen on Laravel.

Jeremy: So is the idea that anything you can build with the Laravel Framework now, you're going to be able to just do serverlessly with Vapor?

Taylor: That's sort of the hope, you know, that your application will translate well, and there are a few differences, you know, when you're operating in serverless, which we can get into. But that was sort of the goal: to make it so you can deploy on Vapor and things sort of work as you would expect, and you could build your application as you're used to in a traditional server environment. You can just deploy it on Vapor and sort of have the same experience. That was the end goal I was shooting for.

Jeremy: So does the development workflow change now that you're dealing with different types of resources?

Taylor: Sure, your local development workflow is a little different depending on what you choose to use. You know, most Laravel applications are used to interacting with something like MySQL. We also ship with Vapor a sort of Docker container where you can run all of your unit tests against the production PHP build that actually runs on the Lambda side of Vapor. So we try to provide some tooling to make that experience a little better.

Jeremy: Very cool. So you mentioned this interest in sort of the serverless ecosystem, but what were your main reasons for building Laravel Vapor?

Taylor: Because I don't ever want to think about servers ever again, basically. So it goes back to Laravel Forge, which I built and released in 2014. The idea there was, you know, I was building Laravel applications professionally, and I was constantly configuring servers with nginx, with PHP, with Redis or whatever I needed. I was building them on, let's say, DigitalOcean or Linode or some VPS provider, and I had written bash scripts to do all that and automate all that. And so I sort of built a platform around that called Laravel Forge. You can, you know, link your DigitalOcean account, create a 2GB server, and it installs everything you need, and then you can deploy your application out there. And that's all great. That works really well for a lot of applications. But there's still a lot of headache that comes with that, even though a lot is automated for you. For example, my operating system goes out of date. I kind of have to worry about SSL certificates renewing. I have to worry about various vulnerabilities. I have to worry about all kinds of stuff I just don't want to think about. And then if I'm load balancing those servers, now I've got, you know, 5-10 servers to worry about, with all those same problems just multiplied. So while Laravel Forge does provide a lot of automation, and the traditional server environment can be automated in some fashion, the idea of going totally serverless and just never, ever thinking about servers at all, never thinking about, you know, how am I going to load balance them? All of that sounded really, really appealing to me after managing, basically, thousands of servers with Forge. And that's what really drew me to the whole ecosystem.

Jeremy: So was there something that prompted this? I mean, at what point did you look at AWS or Microsoft Azure and say, "Okay, now I can go serverless with this"? Because PHP wasn't even supported as a runtime until November of last year, when they came out with custom runtimes for Lambda.

Taylor: Yeah. So there were some key things that happened. I remember the first most important thing was we could get Laravel up and running with a Node shim, where when an HTTP request comes in, we use Node to sort of invoke PHP. And people were doing this with other languages too, you know, before custom runtimes. But the big problem for me was always how we were going to hook into the Laravel queue system in a very nice way, because there was not an official integration between SQS and Lambda at the time. But last year, when AWS announced that official integration with SQS, that was sort of one of the last puzzle pieces that really clicked into place, to where I was like, hey, I think I could actually build a pretty nice platform for Laravel on Lambda, and things would pretty much work how people would expect them to, because I could set up that SQS event source mapping for them and sort of take care of making Laravel's queues work as you would expect. And then the custom runtimes were sort of a cherry on top. We had everything running before that came out with the Node shim approach. But once that came out, we got a huge performance boost by just shipping a custom PHP runtime with PHP-FPM. For example, I think just a "Hello World" request on the Node shim in Laravel is something like 30 to 40 milliseconds on the Lambda side, and then once we moved to the custom runtime with PHP and FPM and all that, now it's like six or seven milliseconds on the Lambda side for a "Hello World" request. So a huge performance increase being able to ship that. So those two things were really, really key things that shipped last year that made this all possible.

Jeremy: So the compute obviously runs on Lambda, because you're using custom runtimes. Are there any limitations, because it only runs for 15 minutes, right? And so if you're doing Laravel queue jobs that might be running longer than that, are there some workarounds, or are those just some of the limitations that are built into the system?

Taylor: I think it's just one of the limitations you have to deal with. So, I mean, I guess you have a couple options. If you can somehow chunk that work up into multiple queue jobs, or, you know, if you can somehow find some way to sort of judo the whole problem on your end, that works well. Otherwise, it just may not be a solution that works for you. For me, personally, I don't really have any apps that have queue jobs that take longer than 15 minutes, so it hasn't been something I've really messed with. But, you know, it is a stated limitation in the documentation of Vapor, something you have to either live with or work around.
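
One way to picture that chunking approach (not Vapor-specific, and the model and job names below are invented for illustration): instead of one long-running job, fan the work out as many short jobs that each finish well under the limit.

```php
use App\Jobs\RecalculateUserStats; // hypothetical per-user job implementing ShouldQueue
use App\Models\User;               // hypothetical Eloquent model

// Rather than crunching every user in a single job that might run past the
// 15-minute ceiling, dispatch one small, fast job per chunk of records.
User::query()->chunkById(500, function ($users) {
    foreach ($users as $user) {
        RecalculateUserStats::dispatch($user->id);
    }
});
```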

Jeremy: And I would think that for most web-serving type applications, it wouldn't matter anyways.

Taylor: Typically not.

Jeremy: So the other thing was file uploads, right? So you're using S3 now to store files, so that's a little bit different than uploading files to a Laravel server, for example?

Taylor: Yeah, sure. Yeah, in the old-school days, people would probably just have some form on the frontend that posted straight to their PHP backend, and then there's the global $_FILES array in PHP that they would interact with. But yeah, on the Vapor side, we really encourage people to just send their files directly to S3 from the client side using JavaScript, and I actually built an NPM package that tries to make that a little easier, because there are a few steps involved in doing that: we need to generate a pre-signed URL for S3, get that back to the client, and then send the file using that URL and headers. So I wrote a little NPM package that has a Vapor.store method where you can pass it a file and it will call a backend route on the Lambda to get the pre-signed URL. Once it gets that back, it can just send the file directly to S3. And once that's done, you can ping your backend and say, "Hey, the file's uploaded," and we can do whatever we want from that point: manipulate the file or whatever you want to do. So I would say that and the time limit are sort of the two main differences in the development workflow that developers are going to have to get used to. I think even if you're in a traditional server environment and you're using something like, let's say, Laravel Forge with a DigitalOcean server, streaming straight to S3 from the frontend is still a good idea, I think, and not sending big files to your PHP server. So it's already, I would say, a good practice that I would recommend. But with Vapor, it's really more of a requirement that you have to start working that way.
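
As a rough sketch of the backend half of that flow, a route like the following could hand the client a pre-signed URL using the AWS SDK for PHP; the route path, key prefix, and expiry here are illustrative assumptions, not Vapor's actual implementation.

```php
<?php

use Aws\S3\S3Client;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use Illuminate\Support\Str;

Route::post('/signed-storage-url', function (Request $request) {
    $s3 = new S3Client([
        'version' => 'latest',
        'region'  => env('AWS_DEFAULT_REGION'),
    ]);

    // Sign a short-lived PUT request so the browser can upload straight to S3.
    $command = $s3->getCommand('PutObject', [
        'Bucket'      => env('AWS_BUCKET'),
        'Key'         => 'tmp/'.Str::uuid(),
        'ContentType' => $request->input('content_type', 'application/octet-stream'),
    ]);

    $signed = $s3->createPresignedRequest($command, '+5 minutes');

    // The client PUTs the file to this URL with these headers, then pings the
    // backend again once the upload has finished.
    return response()->json([
        'key'     => $command['Key'],
        'url'     => (string) $signed->getUri(),
        'headers' => $signed->getHeaders(),
    ]);
});
```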

Jeremy: All right, so let's get into the nuts and bolts of Vapor here, because this is a really cool service. And this is a hosted service, right? You host this for your users?

Taylor: Right. This is a hosted service, which means we can control things like team members, permissions, keep a record of deployment history, all of that.

Jeremy: But all of the resources are in the customer's AWS account?

Taylor: Yes, exactly. When you sign up for Vapor, the first step after you sign up is to link your own AWS account, or even multiple AWS accounts if you want to, so that we can create things on your account.

Jeremy: All right, so let's talk about some of these resources. So let's start with databases. What kind of databases or what can you do with databases in Vapor?

Taylor: So you've got a couple options. You can do just your traditional RDS server, from a fixed-size small database instance or a development instance all the way up to sort of the memory-optimized instances. And then you could also do an Aurora Serverless database, which is also MySQL, even though they've recently announced Postgres serverless support. But right now, we've got those two options for databases. So you can pick one and then just link it to your application in your Vapor.yaml configuration file and deploy, and you're sort of good to go, and Vapor takes care of injecting all the environment variables that Laravel needs to connect to that database. So you don't really have to worry about, you know, how you're going to get your database host, your database username and password, all of that into your Laravel execution environment. Vapor injects all of that for you.
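
For reference, the application side of that is just Laravel's stock database configuration reading environment variables; the abbreviated fragment below is standard Laravel, and on Vapor those variables are injected at runtime instead of living in a .env file.

```php
// config/database.php (abbreviated) -- stock Laravel, nothing Vapor-specific.
// Vapor injects DB_HOST, DB_PORT, DB_DATABASE, DB_USERNAME, and DB_PASSWORD
// for the attached database, so no credentials are hard-coded or deployed.
'connections' => [
    'mysql' => [
        'driver'   => 'mysql',
        'host'     => env('DB_HOST', '127.0.0.1'),
        'port'     => env('DB_PORT', '3306'),
        'database' => env('DB_DATABASE', 'forge'),
        'username' => env('DB_USERNAME', 'forge'),
        'password' => env('DB_PASSWORD', ''),
    ],
],
```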

Jeremy: So if you're using databases with Lambda, then your Lambda has to be in a VPC. So what about all the complexity around VPCs and NAT gateways and that kind of stuff?

Taylor: So as soon as you create a project on Vapor, we create, or we ensure, that a VPC in that region exists. And then if we see you deploying a database, like a serverless database, we're going to ensure that that VPC has a NAT gateway attached to it. We're going to ensure that all the subnets and security groups and stuff look okay, and we're gonna make sure that the database has the right subnet group as well when we create it. We try to automate all that. I would say that's one of the more complex pieces of getting an application up and running on Lambda and doing all this. So we try to make that as smooth as possible and keep it from getting out of hand. But it does intelligently take care of that for you. So you don't usually have to think about it when you're using Vapor.

Jeremy: And you can control the database, right? So you have this UI that shows metrics and things like that?

Taylor: Yeah, all of the stuff, or a lot of the stuff, that RDS would let you do, we sort of provide a UI on top of. So you can restore the database to a certain point in time, you can scale the database if it's a fixed-size instance, and you can get kind of cool metrics like your max connections, which is pretty important in the serverless world, your average connections, how much CPU you're using, and how much free disk space you have, if you're using a server that has a fixed disk size. And you can monitor that, as well as configure alarms for it, straight from the Vapor UI, which ties into CloudWatch alarms. So you can set an alarm: if my max connections are more than 100 for five minutes, then I want you to email me, or ping me on Slack or whatever.

Jeremy: Cool. So what about queues? Because SQS on Amazon is really one of the most scalable services that they've got. So how do queues work?

Taylor: Yeah, I love SQS, and I've used SQS in production even before this. So what happens is, when you deploy a given project in a given environment, say I'm deploying my Laracon project for the Laracon website in the production environment, we ensure that a queue exists, and the name of the queue is sort of conventional, like the project name and environment, or whatever. And then we set up the event source mapping between SQS and Lambda so that when a queue job comes into SQS, it invokes our Lambda. And then we intercept that on the Lambda side and we say, "Oh, this is a queue job," and we send it to the Laravel queue worker for you, and so on and so forth. So it feels really transparent. There's really no configuration for queues in Vapor at all. You just deploy and dispatch your jobs, and it works just like normal, with literally zero configuration in your YAML configuration file.
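
In application code, that means a queued job is written and dispatched exactly as it would be on a traditional server. A minimal sketch, with a made-up job class:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $userId;

    public function __construct(int $userId)
    {
        $this->userId = $userId;
    }

    public function handle(): void
    {
        // ... look up the user and send the email
    }
}

// From anywhere in the application (e.g. a controller), dispatch as usual:
//
//     SendWelcomeEmail::dispatch($user->id);
//
// On Vapor, the job lands on the conventionally named SQS queue, the SQS-to-
// Lambda event source mapping invokes the queue Lambda, and the payload is
// handed to Laravel's queue worker.
```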

Jeremy: And what about caches? Because that's a huge part, obviously.

Taylor: Yeah. So for caches, we built a UI on top of ElastiCache to create Redis clusters. I didn't add Memcached right now, but we may visit that later. So you can create an ElastiCache Redis cluster, and you can scale it up to however many nodes you want and, obviously, pick the size of your nodes. And it's sort of the same story as with databases. You attach that to your project environment, and then we inject all of the necessary environment variables so that Laravel can actually connect to that Redis cluster, so things like the host. And we set your cache driver in Laravel to Redis, because it can be other things. And everything, again, just works. Similarly to databases, you can get some kind of cool metrics, I think, where you can see your cache hits, your cache misses, your hit rate percentage, and also the CPU utilization across all of your nodes individually. So a pretty cool little UI on top of that that tries to make it as easy as possible, and the same setup as databases, where we sort of get the VPC set up correctly and all of that.
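
Again, the application code does not change; once Vapor injects the cache driver and Redis host, the usual cache calls hit the attached cluster. A small sketch (Episode is a hypothetical model):

```php
use App\Models\Episode;               // hypothetical Eloquent model
use Illuminate\Support\Facades\Cache;

// Plain Laravel cache calls; nothing here is Vapor-aware. With the injected
// configuration, these reads and writes go to the attached Redis cluster.
$episodes = Cache::remember('episodes.recent', now()->addMinutes(10), function () {
    return Episode::latest()->take(20)->get();
});

Cache::forget('episodes.recent');
```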

Jeremy: And what about for local development? Because that's always sort of a tough thing in serverless right now. And so you can't directly connect to a database or cache in a VPC. You have to use like an SSH tunnel or a VPN. So you take care of all that for us, right?

Taylor: Yeah. So the approach I took there, for both caches and databases, is I let you create what we call in Vapor a "jumpbox," though I think people use that term outside of Vapor as well. Basically, it's a small t2.nano instance that we put in your VPC when you want it, and it just takes a minute to provision. But since that's inside your VPC, we can do interesting things: for example, I built a "vapor cache tunnel" CLI command where, when you run it and give it the name of the cache you want to tunnel into, it opens an SSH tunnel through that jumpbox and then opens port 6378 as sort of a port into your ElastiCache cluster. So that means that locally, like here on my iMac, I can open up my Medis GUI for Redis, connect to port 6378 on localhost, and I'm connected to my ElastiCache cluster through that SSH tunnel. That makes it really easy, especially during development or in, like, my staging environment, when I want to see what's happening in the Redis cluster. I can see what keys are there, blah blah blah. And the same with databases: I can use that jumpbox as an SSH host in, like, TablePlus or whatever database management GUI you have on your local machine, and connect over SSH through that box to your database, so you can connect to, like, your Aurora Serverless database in a nice UI. So it's actually really pretty handy. And then when you're done inspecting it or whatever, you can just delete that jumpbox in Vapor and get rid of it. They're so fast to provision that sometimes I just make them when I need them and then get rid of them later.

Jeremy: Yeah, and I think a T2 instance costs, like, nine bucks a month or something like that.

Taylor: Yeah. Very cheap.

Jeremy: Yeah, and so that tunneling technique, I actually use that pretty much all the time for most of the workflows I have, because if you need to connect to Elasticsearch or a database or anything like that, it's just so much easier than setting up a box with a VPN and having to manage all that stuff.

Taylor: Right. And I think something that led me to some of those features is that the whole time I was building Vapor, I was deploying Vapor itself out onto Lambda. So I was sort of dogfooding my own product on Lambda, and that helped me discover those kinds of pain points and sort of flesh out the product, really.

Jeremy: So we talked about S3, and how you're sort of managing all of the file storage using that. But you also do a CDN as well, right?

Taylor: Yes. So when you deploy a project (and this is sort of the nice thing about managing Laravel and managing Vapor at the same time: I can make all these nice assumptions about how things work), we extract all the assets out of your public directory, which is where Laravel projects keep things like their stylesheets, their JavaScript and all that. We upload that to an S3 bucket, which has CloudFront in front of it. So we configure a CloudFront distribution for you that points to that S3 bucket. And then on the Lambda side, we inject an environment variable called ASSET_URL, and Laravel knows to look for that when generating asset URLs, so that when you generate, for example, your link to your stylesheet or your script tag for your JavaScript, it automatically has that CloudFront URL in front of the file name. So that makes it really nice to automatically get all those assets on CloudFront, because that would be kind of a chore to do manually. We tried to make that as smooth as possible.
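
Concretely, Laravel's asset() helper prefixes paths with that injected variable, so templates don't change between local development and Vapor; the CloudFront domain below is made up for illustration.

```php
// With ASSET_URL unset (e.g. local development), asset() falls back to the app URL:
echo asset('css/app.css');
// => http://localhost/css/app.css

// With ASSET_URL injected by Vapor to point at the CloudFront distribution
// (hypothetical domain), the same call renders the CDN URL:
// ASSET_URL=https://d111111abcdef8.cloudfront.net
echo asset('css/app.css');
// => https://d111111abcdef8.cloudfront.net/css/app.css
```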

Jeremy: And you create all the buckets and do all that stuff?

Taylor: Yep. On deploy we make sure all of that exists.

Jeremy: Okay, so what about metrics? You mentioned that you could get some database metrics and things like that, but what about, like, overall metrics or alerting? What does Vapor do for you with that stuff?

Taylor: Yeah, so on the web and queue side, we do metrics like total invocations. And of course, you can look over the last 30 minutes, the last 24 hours, the last seven days, whatever different time periods. So we let you look at HTTP invocations and queue invocations, because those are two separate Lambdas: when you deploy your project, we actually have a separate Lambda for web stuff and a separate Lambda for queue and CLI stuff, mainly because that lets you manage the concurrency limits and memory limits separately for those two environments, because I think it typically makes sense for those to be different configuration values. So you can monitor that, and you can also monitor the average duration of both the web and the queue/CLI side. And then you can set up alarms on that stuff, like if my average duration has spiked over some value for a given number of minutes, I want you to email me or whatever. So pretty useful metrics to monitor. And then we also have kind of a slim UI on top of CloudWatch Logs just for logs in general, where if I visit the logs tab in Vapor, I can see the latest logs for the past hour for both my web side and, in a separate tab, the CLI and queue side. And that's kind of nice, because if you go out to CloudWatch, you know you're digging through multiple different log streams and stuff, and it's pretty nasty. So we try to interleave all that.

Jeremy: So you had mentioned earlier that one of the itches you were trying to scratch was things like certificate renewal and some of that higher-level stuff. So DNS and certificate management, that's all built in and managed for you, correct?

Taylor: Yeah, I sort of bake that into one screen where you can actually purchase domains straight from the Vapor screen. Or if you already own the domain, you could just add it to Vapor. And it picks up on all the records that are already in Route 53. But then you can request the certificate straight from that screen, and what it does is actually request a wildcard certificate for the domain using DNS validation. Or you could do email validation, but we really strongly recommend DNS validation within Vapor. And that sets up the proper CNAME Records for the DNS validation to work and prove you own the domain and all that. And then once that is issued, of course, we start using it when you deploy. We actually require every application that's deployed to have a valid certificate. There's no way to deploy a non-SSL application on Vapor. So we let you do all that and take care of the renewals or really Amazon's taking care of the renewals for you on the certificate side, because we're using the certificate manager right there on Amazon.

Jeremy: So the other thing that you typically do is you'd have, like, a DEV stage, a STAGING stage and a PRODUCTION stage. And that's sort of a typical serverless way that these things are done. But you actually don't need to worry about domains, because you have these vanity URLs.

Taylor: Yeah, that's one of my favorite features, too. So I use Cloudflare and actually just purchased a bunch of domains like vapor-farm-1, vapor-farm-2. So I own a lot of these domains. And what that means is, when you deploy, like you said, your staging environment, we assign each environment its own vanity URL. It's kind of like a Heroku-style URL; it's like gorgeous-mountain-scape-124.vapor.build or whatever. And I add that DNS record, that CNAME record, to my Cloudflare account, actually, because I own all those vanity domains, and point it to your serverless application. And then we add our own Vapor vanity certificate (we import it into the certificate manager) so that it all works. So that's actually really handy, because one of the pain points if you're deploying PHP to Lambda right now is that, by default, those API gateways have, like, the slash-stage suffix on them, which can just kind of wreak havoc with various PHP frameworks. They don't know what to make of that extra segment in the path. So having that clean vanity URL is actually a really nice way to access the application when you're getting started.

Jeremy: All right, so you develop locally, and then how do you actually get all of that code onto AWS?

Taylor: Yeah. So when you run the "vapor deploy" command on your command line, we build the whole project with build steps that you can specify, like installing your Composer dependencies or running some NPM stuff. Once that's done, we zip up the whole application and send it to S3, and then we ping the Vapor backend and say, "Hey, this deployment is ready to go. Here's where the code artifact lives on S3." And then from there, the Vapor backend updates the function configuration and updates the function code to point to the new S3 artifact. We use function aliases in Lambda, so at the very last point we switch the production or staging or testing alias to the new version of the Lambda.

Jeremy: And what about if you're doing CI/CD?

Taylor: So if you're doing that, you can actually ship the Vapor CLI, which is just a single compiled binary, with your code. And so, like, if I'm using, let's say, CodeShip, for example, within my CodeShip build steps I run my tests, blah blah blah. Then in my deploy step, I can just use that Vapor CLI, and I get my credentials in there through an environment variable. So on my CI server, I would configure my Vapor API token as an environment variable, and then I can run "vapor deploy" straight from the CI service, and it will use that environment variable to authenticate with Vapor and deploy my project.

Jeremy: And so you have a CLI tool as well as the full web interface, correct?

Taylor: Yes. Yeah, almost everything you can do on the web interface, you can also do on the CLI. You can't do things like change your password or update your billing plan, but you can create databases, you can create caches, you can do all that stuff.

Jeremy: And speaking of the billing plans, this is just SaaS, like a monthly type thing?

Taylor: Yeah, just a monthly SaaS. So right now, I think the launch price will probably be like $29 a month, and the full price will be like $39, which is the same price as the PRO level of Laravel Forge. And of course, that's unlimited teams, deployments, projects, whatever.

Jeremy: All right, so that would be one part of the billing. But then all of the resources are in the customer's AWS account, so they would be responsible for the costs of those as well. And then, how much control do customers have? Can they go into their AWS account and actually manipulate some of these resources and tweak them if they want to?

Taylor: Yeah, they can tweak things on their own. We try to accommodate that; even if you change things in Route 53, we import those records every so often. Of course, you wouldn't want to go too far off the beaten path, so that Vapor doesn't get, like, confused about the state of things. But if they ever wanted to walk away from Vapor, that's kind of one of the nice things: all that stuff is in their AWS account, so they could just kind of walk away and build their own deployment process around Lambda. And I've always liked that approach with Forge. It's simpler for us, because we don't have to worry about all that billing on our side. And I think it's just nicer for the end user, because we don't have to mark up AWS prices for anything. You still own everything in your own account, so I think it's just sort of a nice, clean separation.

Jeremy: Yeah, and there's other things you could add as well. Like if you wanted to use SageMaker or Amazon Comprehend, you could use the AWS SDK and you could integrate with those things. And then that would all be managed in the same account. So, that's very cool. Okay, great. So, let's talk about where you see the future of Laravel going. So, is it serverless?

Taylor: I think that is definitely a big part of the future. And I think the serverless philosophy and the Laravel philosophy have been very similar from the very beginning. When I launched Laravel, the idea was that you could just focus on your code and Laravel handled all of the sort of nasty stuff, like authentication and session management, all of that for you. And I think a lot of the root philosophy of serverless is very similar, where the goal is that you can just focus on providing value and writing the logic that makes sense for your business. And in that way, I think Laravel's philosophy and the serverless philosophy are very aligned, and they're sort of a nice fit together. And so I hope the future of Laravel is tied in with serverless. And I'm trying to be ahead of the trends here with this, you know, and trying to be the first kind of major PHP platform for serverless out there that tells the whole story, from databases to queues to mail to assets and all that. So, yeah, I'm excited about it, mainly because I just believe their philosophies are so similar.

Jeremy: Yeah, and one of the great things about serverless, obviously, is the massive scalability of it. And Laravel Forge is a scalable product as well, but you're still doing a lot of that manually, right?

Taylor: Yeah, sure. It can be scalable if you, you know, build a load balancer and 10 servers or whatever. But now I've got 10 servers that I have to manage. So yeah, sure, can you build that kind of scalable platform in a traditional server environment? Yes, but it's just a lot more headache, I think. And I would rather just do "vapor deploy" and be done with it.

Jeremy: Yeah, no, that totally makes sense. So let me ask you about serverless in general. So what are your thoughts about serverless being sort of the next evolution or the future of Cloud computing? Because obviously there are a lot of people using serverless now, but I still feel like you say to somebody "serverless" and they're like, "Ah, what's serverless?" But this idea of moving up the stack and focusing on your code and getting rid of all of that undifferentiated heavy lifting, I mean, what does the next five years of serverless look like?

Taylor: Yeah, I think the next five years will be huge for serverless, I really do. I think it is the future, because what's the alternative, really? More complexity, more configuration files, more weird container orchestration stuff? I don't really think that's the future that people are going to naturally gravitate towards. I think people want simpler things. And I think at the end of the day, serverless is simpler, and it's only going to get simpler as the tooling gets better and the platforms get better. And to me, it's the real endgame, you know, of the whole server thing: just deploy your code, focus on your code, and let the provider focus on the infrastructure.

Jeremy: Yeah, totally agree. So, listen, you're obviously doing your part here, and for anyone in the PHP community, I think this is just a huge step forward, and a big vote of confidence for serverless and the serverless model and, obviously, what you can do with it. I totally appreciate what you're doing, so thank you so much for that and, obviously, for coming on. So if people want to find out more about you, or Laravel, or more about Laravel Vapor, how do they do that?

Taylor: Yep. So you can follow Laravel on Twitter @laravelphp. You can follow me personally on Twitter @taylorotwell, or you can email me at taylor@laravel.com.

Jeremy: And if you want to sign up for Vapor, where do you go?

Taylor: vapor.laravel.com. That sounds like a good thing to add.

Jeremy: You probably want to mention that. All right, Taylor, thank you so much. It was great.

Taylor: All right. Thanks for having me.
