This article is about my experience in the Huawei Cloud Practicum organized by Patika, sponsored by Huawei Cloud.
Intro
I am Akın, and I am building my skills in cloud computing. I wanted to add hands-on experience to my learning by participating in this practicum, and in this blog post I will describe what I did along the way.
The practicum ran for six weeks. To be admitted, applicants were asked to complete a given case in whatever way they saw fit.
First Case
We were asked to show how communication is established by creating peering connections between VPCs. Since the case itself was simple, I set a challenge for myself: I was just starting to learn Terraform at the time, so I decided to get hands-on practice by building the infrastructure as code with Terraform.
My first task was to review code samples on the web and in the terraform-provider-huaweicloud repository and to understand how variables work in Terraform. I completed the case within a week and collected everything I did along the way. You can find my blog post and Terraform code at these links.
Homework
During the first two weeks, we completed five exercises in KooLabs, which gave us a hands-on introduction to Huawei Cloud services. In the third week, we did an exercise aimed at using the basic features of a CCE (Kubernetes) cluster, which let us experience the CCE service hands-on.
Final Project
In the final project, I converted my monolithic survey management system, which I had developed previously, into a serverless architecture and adapted it to the cloud. You can find my project code here. The main steps were:
- Creation of serverless functions
- Creating a pipeline with DevCloud's CloudPipeline
- Serverless backend with FunctionGraph
- Triggering functions with API Gateway
- Static website hosting with OBS
1. Creation of serverless functions
First, I separated the API handlers of the monolithic app into individual events, creating the serverless functions and the shared internal application logic.
Structure of the project
survey-builder
┣ functions
┃ ┣ register
┃ ┃ ┗ main.go
┃ ┣ ...
┃ ┗ ...
┣ internal
┃ ┣ database
┃ ┣ go-runtime
┃ ┣ handler
┃ ┣ model
┃ ┗ service
┣ vendor
┣ go.mod
┗ go.sum
Below is the serverless function for the register event. I handled the APIG trigger event using Huawei Cloud's FunctionGraph SDK and used GaussDB for NoSQL as the database.
package main

import (
    "bytes"
    "encoding/base64"
    "encoding/json"
    "net/http"

    "huaweicloud.com/akinbe/survey-builder-app/internal/go-runtime/events/apig"
    "huaweicloud.com/akinbe/survey-builder-app/internal/go-runtime/go-api/context"
    "huaweicloud.com/akinbe/survey-builder-app/internal/go-runtime/pkg/runtime"
    "huaweicloud.com/akinbe/survey-builder-app/internal/handler"
)

func RegisterHandler(payload []byte, ctx context.RuntimeContext) (interface{}, error) {
    // Handle the APIG trigger event request.
    var apigEvent apig.APIGTriggerEvent
    err := json.Unmarshal(payload, &apigEvent)
    if err != nil {
        apigResp := apig.APIGTriggerResponse{
            Body: err.Error(),
            Headers: map[string]string{
                "content-type": "application/json",
            },
            StatusCode: http.StatusBadRequest,
        }
        return apigResp, nil
    }

    // Parse the 'data' value from the trigger event, which carries the payload
    // sent by the client to the backend, and decode it from base64.
    data, _ := base64.StdEncoding.DecodeString(apigEvent.PathParameters["data"])
    dataStr := bytes.NewBuffer(data).String()

    // Event logic
    viewUser, status, err := handler.UserSignupPostHandler(dataStr)
    if err != nil {
        apigResp := apig.APIGTriggerResponse{
            Body: err.Error(),
            Headers: map[string]string{
                "content-type": "application/json",
            },
            StatusCode: status,
        }
        return apigResp, nil
    }

    user, _ := json.Marshal(viewUser)
    apigResp := apig.APIGTriggerResponse{
        Body: string(user),
        Headers: map[string]string{
            "content-type": "application/json",
        },
        StatusCode: http.StatusOK,
    }
    return apigResp, nil
}

func main() {
    runtime.Register(RegisterHandler)
}
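For context, the handler package referenced above carries the application logic that used to live in the monolith. The sketch below shows roughly what UserSignupPostHandler could look like; the request and view structs, their field names, and the in-memory behaviour are illustrative assumptions, not the repository's actual code, which delegates to internal/service and internal/database.

// A minimal, self-contained sketch of the internal handler layer used by the
// register function above. In the real project this logic lives in
// internal/handler and delegates to internal/service and internal/database;
// the field names and the simplified behaviour here are illustrative assumptions.
package handler

import (
    "encoding/json"
    "errors"
    "net/http"
)

// signupRequest mirrors the JSON the client sends (field names assumed).
type signupRequest struct {
    Email    string `json:"email"`
    Password string `json:"password"`
}

// viewUser is the client-safe representation returned to the register function.
type viewUser struct {
    ID    string `json:"id"`
    Email string `json:"email"`
}

// UserSignupPostHandler decodes the signup payload, validates it, stores the
// user (via the service/database layers in the real project), and returns the
// view model together with an HTTP status code.
func UserSignupPostHandler(data string) (interface{}, int, error) {
    var req signupRequest
    if err := json.Unmarshal([]byte(data), &req); err != nil {
        return nil, http.StatusBadRequest, err
    }
    if req.Email == "" || req.Password == "" {
        return nil, http.StatusBadRequest, errors.New("email and password are required")
    }

    // In the real project the user is persisted through internal/service and
    // internal/database (GaussDB for NoSQL); here we only build the response.
    return viewUser{ID: "generated-id", Email: req.Email}, http.StatusOK, nil
}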
2. Creating a pipeline with DevCloud's CloudPipeline
With DevCloud's CloudPipeline, I built the functions with Docker and pushed the images to SWR (Software Repository for Container). Using the container image option in FunctionGraph, I created the functions by pulling the images from SWR. You can see the Dockerfile of the register event below.
FROM golang:1.19-alpine AS builder
RUN apk add --update --no-cache gcc git build-base
WORKDIR /src
COPY . /src
RUN CGO_ENABLED=0 go build -o /bin/register
FROM scratch
COPY --from=builder /bin/register /bin/register
ENTRYPOINT ["/bin/register"]
I tested the functions in FunctionGraph, but I kept getting the error below and could not find a solution, so I switched to another method.
function invocation exception, error: CrashLoopBackOff: The application inside the container keeps crashing
Instead, I built and packaged the functions with the Bash script below.
#!/bin/bash
for dir in functions/*/; do
    # Extract the function name from the directory path
    function_name=$(basename "$dir")
    cd "${GOPATH}/src/huaweicloud.com/akinbe/survey-builder/functions/${function_name}"

    # Build the handler binary and package it as <function>_go1.x.zip for FunctionGraph
    package="${function_name}_go1.x.zip"
    go build -o handler main.go
    zip "$package" handler
done
With a CloudBuild task in DevCloud's CloudPipeline, I transferred the zip files to OBS (Object Storage Service), then created the functions in FunctionGraph by pulling the zip packages from OBS. After that, I tested all of the functions successfully with APIG events.
3. Serverless backend with FunctionGraph
Below is a test of the register function in the FunctionGraph console. It returned HTTP status 500 because there was no database connection at that point.
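The same kind of check can also be reproduced locally by constructing an APIG trigger event and invoking the handler directly. The snippet below is only a sketch, not the project's actual test code: the example signup payload, the test file location, and passing nil for the runtime context are all assumptions made for illustration.

// functions/register/main_test.go (hypothetical location) — a local smoke
// test sketch for the register handler: build an APIG trigger event with a
// base64-encoded 'data' path parameter, marshal it to JSON the same way the
// trigger would, and call RegisterHandler directly.
package main

import (
    "encoding/base64"
    "encoding/json"
    "net/http"
    "testing"

    "huaweicloud.com/akinbe/survey-builder-app/internal/go-runtime/events/apig"
)

func TestRegisterHandler(t *testing.T) {
    // The signup payload is an illustrative assumption about the request format.
    signup := `{"email":"test@example.com","password":"secret"}`

    // Build the trigger event the way APIG delivers it: the client data
    // travels base64-encoded in the 'data' path parameter.
    event := apig.APIGTriggerEvent{
        PathParameters: map[string]string{
            "data": base64.StdEncoding.EncodeToString([]byte(signup)),
        },
    }
    payload, err := json.Marshal(event)
    if err != nil {
        t.Fatal(err)
    }

    // RegisterHandler does not use the runtime context, so nil is enough here.
    result, err := RegisterHandler(payload, nil)
    if err != nil {
        t.Fatal(err)
    }

    resp, ok := result.(apig.APIGTriggerResponse)
    if !ok {
        t.Fatalf("unexpected response type %T", result)
    }
    if resp.StatusCode != http.StatusOK {
        t.Fatalf("expected %d, got %d: %s", http.StatusOK, resp.StatusCode, resp.Body)
    }
}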
4. Triggering serverless functions with API Gateway
In API Gateway, I created my APIs in an API group, with each endpoint corresponding to a function, and configured the backend of each API with the matching function in FunctionGraph.
Below is a test of the register event at the /api/v1/signup endpoint.
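From the client's point of view, calling this API comes down to base64-encoding the request body and passing it as the data path parameter that the function reads. The snippet below is a hedged sketch of such a call: the gateway host placeholder, the exact route shape (/api/v1/signup/{data}), and the use of POST are assumptions and have to match how the API is actually defined in APIG.

// A sketch of how a client could call the register API through API Gateway.
// The gateway host, the route shape (/api/v1/signup/{data}) and the POST
// method are assumptions; they must match the API definition in APIG.
package main

import (
    "encoding/base64"
    "fmt"
    "io"
    "net/http"
    "net/url"
)

func main() {
    signup := `{"email":"test@example.com","password":"secret"}`

    // The register function reads its payload from the base64-encoded 'data'
    // path parameter, so the client encodes the body the same way. Standard
    // base64 may contain '/' and '+', so the value is path-escaped as well.
    data := base64.StdEncoding.EncodeToString([]byte(signup))
    endpoint := fmt.Sprintf("https://<gateway-host>/api/v1/signup/%s", url.PathEscape(data))

    resp, err := http.Post(endpoint, "application/json", nil)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.StatusCode, string(body))
}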
5. Static website hosting with OBS
I hosted my frontend files in OBS using static website hosting. However, I could not give a complete demo, because the POST requests sent from the client to the APIs returned 405 Method Not Allowed responses.
Conclusion
In summary, I tried to implement the project with a cloud-first approach, using the most suitable service at every step. With all its pros and cons, I built an automated pipeline in DevCloud's CloudPipeline, a serverless backend in FunctionGraph, and a gateway that exposes the backend logic to clients through API Gateway.
In this process, I had the chance to get to know and use Huawei Cloud services hands-on. My thanks to everyone who made this process possible.
See you in my next blog posts... 👋👋