Ok. I instrumented the code, compiled it, and explored it, so I know, more or less, what is inside. Now it is time to deploy the app to the target environment.
Deployment
I use this command:
cloudcc up --work-dir compiledAWS/ --stackname klothofirstrun
The meaning is simple: up is the subcommand for deployment, --work-dir is the location of the compiled project, and --stackname is the name of the application (the one I gave during the compile step).
Ok, big moment. I hit enter!
Error
Well, I got an error during the first run. After a short investigation the cause was clear, though it was unexpected for me:
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
So, I quickly resized my EC2 instance and ran the deployment again.
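By the way, since this is the classic Node.js heap error (cloudcc itself runs on Node), an alternative to resizing the instance might be raising the heap limit via NODE_OPTIONS - a sketch, assuming the CLI honors the standard Node.js environment variable:
export NODE_OPTIONS=--max-old-space-size=4096   # allow a 4 GB heap instead of the default
cloudcc up --work-dir compiledAWS/ --stackname klothofirstrun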
Well, another error, this time:
aws:cloudwatch:EventRuleEventSubscription (qrender-2de26_scheduledCleanup_act):
error: Duplicate resource URN 'urn:pulumi:klothofirstrun::klothofirstrun::aws:cloudwatch:EventRuleEventSubscription::qrender-2de26_scheduledCleanup_act'; try giving it a unique name
Which is quite strange, as nothing had been deployed previously.
Ok, I tried to destroy the stack (success) and recreate it. Still nothing - the same error.
Right... Hm, I thought... I just started, right? Let's do it 'in MS Windows style' - restart :) I removed my compiledAWS and dist directories, compiled the project again, and deployed the freshly compiled stack, roughly as sketched below. And...
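Roughly, the "restart" looked like this (a sketch; the compile command itself is the one from the previous part of this series):
rm -rf compiledAWS/ dist/   # drop all previously generated artifacts
# ...recompile the project exactly as in the compile step...
cloudcc up --work-dir compiledAWS/ --stackname klothofirstrun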
Inspecting the output, I see multiple IAM roles and policies. That is perfect, as it is a good move towards the least-privilege principle. CloudWatch Log Groups will also be created for each resource. API Gateway, Lambdas and DynamoDB are there too. What is surprising at this point is that I see SQS, which was not included in the diagram I presented previously. I am also worried that only one ECR repository will be created. We'll see how it looks.
I didn't spend time troubleshooting my previous issue. It looks like Klotho failed due to insufficient memory and wasn't able to recover from it. Something for the developers to think about :) I understand it as an issue more related to Pulumi, though.
Well, the deployment failed. The issue is simple: after restarting the machine (remember? I had just resized my EC2), I forgot to start Docker. It would be very good if Klotho checked all dependencies before the actual deployment process starts.
I started Docker and reapplied the project.
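For the record, on Amazon Linux that means (assuming Docker is managed by systemd):
sudo systemctl start docker    # start the Docker daemon now
sudo systemctl enable docker   # and have it start automatically after future reboots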
And... It is funny :D An error again, and this time the cause is different - no space left on device :) Looks like the default 8 GB root volume is not enough. No worries, I'll fix it.
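Resizing the volume itself can be done from the console or the CLI, for example (hypothetical volume ID and target size):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 30   # grow to 30 GiB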
I resized the EBS volume and executed:
lsblk                      # check the current partition layout
growpart /dev/nvme0n1 1    # grow partition 1 to fill the resized volume
lsblk                      # confirm the partition grew
xfs_growfs -d /            # grow the XFS filesystem to fill the partition
df -h                      # verify the new free space
And I am back. Deploying again :D
This time I hit a more serious error. I had used a custom directory during compilation, and I was punished for it...
Error exporting Error: ENOENT: no such file or directory, open '/home/ec2-user/tutorial-starters/webapi/compiled/klothofirstrun-spec.json'
at Object.openSync (fs.js:458:3)
at Object.writeFileSync (fs.js:1283:35)
at /usr/local/bin/out/cloudcc.js:242515:8
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:97:5) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '/home/ec2-user/tutorial-starters/webapi/compiled/klothofirstrun-spec.json'
}
This is definitely something for the creators to check. Note the path in the error: the tool apparently looks for the spec file in the default compiled directory, even though I compiled into a custom one.
Ok, stack destroyed again, directory cleared (just in case), and I compiled and started the deployment once more, but this time using the default directory - compiled.
And success, finally!
I received the API URL in the output, so I am ready to test it! I use the examples from the Klotho documentation, of course.
curl https://uq5xlogu5k.execute-api.us-east-1.amazonaws.com/stage/v1/users
["Zendaya","Remi","Nadira"]
So, it works! But exactly as I expected, the cold start is enormous.
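A quick and dirty way to see the difference is to time two consecutive calls with curl (%{time_total} is curl's built-in write-out variable for the total request time):
curl -s -o /dev/null -w "cold: %{time_total}s\n" https://uq5xlogu5k.execute-api.us-east-1.amazonaws.com/stage/v1/users
curl -s -o /dev/null -w "warm: %{time_total}s\n" https://uq5xlogu5k.execute-api.us-east-1.amazonaws.com/stage/v1/users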
Going around
Let's go through the actual infrastructure.
First point: ECR (Elastic Container Registry). Only one ECR repository is created, and all images are stored there with custom tags. I do not like it. I know why that is, of course. In the mind of Klotho's creators (I suppose), no one will look there. But I do :D
I'd strongly recommend enabling vulnerability scans by default. And I'd prefer to see the best-practice approach with one ECR repository per image and proper tagging. The second thing is minor, though.
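Enabling scan-on-push is a single call per repository (the repository name below is a placeholder - use the one Klotho created):
aws ecr put-image-scanning-configuration \
  --repository-name my-klotho-repo \
  --image-scanning-configuration scanOnPush=true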
I check CloudWatch next. It would be very valuable to have the possibility to enable X-Ray and Insights. CloudWatch Logs have retention set to 1 day by default. Very nice.
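And if 1 day is too short for someone, changing it is also a one-liner (hypothetical log group name):
aws logs put-retention-policy \
  --log-group-name /aws/lambda/my-function \
  --retention-in-days 30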
I went here to show the cold start. Take a look at the screenshot below, please.
It is visible that the cold start took 8 seconds and the actual execution more than 2 seconds. When the Lambda was warm, everything finished in 9 milliseconds. The difference is enormous and is caused by running a Docker container image on Lambda. This is something that cannot be improved much on Klotho's side (except one thing, maybe: go with Node.js directly on Lambda, without a container). I will not go into the cold start issue here. Those of you who work with Serverless know it too well :)
Lambda services look good.
Let's take a look at API Gateway.
The name app is not perfect :) I have only one stage, which maybe should be improved to a multistage setup, but it is not a big deal - Klotho manages different versions through the path, so it seems to be OK. Throttling is enabled (nice), but there is nothing related to logs. I'd like to see improvements here.
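For reference, execution logging can be switched on per stage with a patch operation like this (hypothetical API ID; note that API Gateway first needs a CloudWatch log role configured at the account level):
aws apigateway update-stage \
  --rest-api-id abc123 \
  --stage-name stage \
  --patch-operations 'op=replace,path=/*/*/logging/loglevel,value=INFO'
# abc123 is a placeholder; 'stage' matches the stage name visible in the API URL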
SQS is created. I didn't see it on the diagram, but it is there :) The configuration looks pretty standard. In fact, all functions have permissions to work with this queue, but I didn't check in the code what is actually done with it.
DynamoDB is created in provisioned capacity mode, with no backup configuration.
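At minimum I would enable point-in-time recovery, which is a single call (hypothetical table name):
aws dynamodb update-continuous-backups \
  --table-name my-klotho-table \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true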
IAM policies are very nicely prepared. Configuration is narrowed as much as possible, permissions and resources are limited to what is needed. I really like it.
S3 looks pretty standard, with one exception: I'd prefer to see 'block all public access' set to enabled.
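Turning it on for the bucket is straightforward (hypothetical bucket name):
aws s3api put-public-access-block \
  --bucket my-klotho-bucket \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true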
Final action
At the end of my playground session, I deleted the stack. The process is quick (definitely faster than compilation and deployment), and all resources were removed as expected.
cloudcc destroy --work-dir compiled --stackname klothofirstrun
Finally, I cleaned up Pulumi, removing the stack history and associated configuration data.
pulumi stack rm klothofirstrun
It is time for a summary of my basic experience with Klotho.