Hey, welcome back. I thought of delving deeper into Go after this article: Using Golang to Build a Real-Time Notification System - A Step-by-Step Notification System Design Guide.
This article delves into the intricacies of managing the garbage collector in Go, optimizing memory consumption within applications, and safeguarding against out-of-memory errors.
Stack and Heap in Go
I won't delve deeply into the inner workings of the garbage collector since a wealth of articles and official documentation already cover this subject. Nonetheless, I will introduce key concepts to illuminate the themes explored in this article.
In Go, data can be classified into two primary memory storages: the stack and the heap.
Generally, the stack houses data whose size and lifespan can be anticipated by the Go compiler. This encompasses local function variables, function arguments, return values, and more.
The stack is automatically managed, following the Last-In-First-Out (LIFO) principle. When a function is invoked, all associated data is placed atop the stack, and upon completion of the function, this data is removed. The stack operates efficiently, imposing minimal overhead on memory management. Retrieving and storing data on the stack transpires swiftly.
Nonetheless, not all program data can reside on the stack. Data that evolves dynamically during execution or requires access beyond the function's scope cannot be accommodated on the stack since the compiler cannot predict its usage. Such data finds its home in the heap.
In contrast to the stack, retrieving data from the heap and its management are more resource-intensive processes.
Stack vs. Heap Allocation
As alluded to earlier, the stack serves values of predictable size and lifespan. Examples include local variables of basic types declared within a function (e.g., numbers and booleans), function arguments, and function return values that are no longer referenced after the function returns.
The Go compiler employs various nuances when deciding whether to allocate data on the stack or the heap. For instance, pre-allocated slices up to 64 KB in size are assigned to the stack, while slices exceeding 64 KB are designated to the heap. The same criterion applies to arrays: arrays surpassing 10 MB are dispatched to the heap.
To determine where a specific variable is allocated, escape analysis can be employed. To do this, you can scrutinize your application by compiling it from the command line with the -gcflags=-m flag:
go build -gcflags=-m main.go
When the following application, main.go, is compiled with the -gcflags=-m flag:
package main

func main() {
	var arrayBefore10Mb [1310720]int // 1,310,720 ints × 8 bytes = 10 MB
	arrayBefore10Mb[0] = 1

	var arrayAfter10Mb [1310721]int // just over 10 MB
	arrayAfter10Mb[0] = 1

	sliceBefore64 := make([]int, 8192) // 8,192 ints × 8 bytes = 64 KB
	sliceOver64 := make([]int, 8193)   // just over 64 KB
	sliceOver64[0] = sliceBefore64[0]
}
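The escape analysis output will look roughly like the following; the exact line and column numbers (and any extra inlining notes) vary by Go version, so treat this as an illustrative sketch rather than verbatim output:

./main.go:6:6: moved to heap: arrayAfter10Mb
./main.go:9:22: make([]int, 8192) does not escape
./main.go:10:21: make([]int, 8193) escapes to heap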
The results indicate that the arrayAfter10Mb array was relocated to the heap because its size exceeds 10 MB, while arrayBefore10Mb remains on the stack. Likewise, sliceBefore64 does not migrate to the heap, as its size is under 64 KB, whereas sliceOver64 is stored in the heap.
To gain deeper insights into heap allocation, refer to the documentation.
Garbage Collector: Managing the Heap
One effective approach to dealing with the heap is to evade its usage. However, what can be done if data has already found its way into the heap?
In contrast to the stack, the heap has no fixed size and can keep growing. It is home to dynamically generated objects such as structs, slices, maps, and large memory blocks that cannot fit within the stack's constraints.
The garbage collector stands as the sole tool for recycling heap memory and preventing it from becoming entirely blocked.
Understanding the Garbage Collector
The garbage collector, often referred to as GC, is a dedicated system crafted to identify and free dynamically allocated memory.
Go employs a garbage collection algorithm rooted in tracing and the Mark and Sweep approach. During the marking phase, the garbage collector designates data actively utilized by the application as live heap. Subsequently, during the sweeping phase, the GC traverses unmarked memory, making it available for reuse.
Nevertheless, the garbage collector's operations come at a cost, consuming two vital system resources: CPU time and physical memory.
The garbage collector's memory cost consists of:
- Live heap memory (memory marked as "live" in the previous garbage collection cycle).
- New heap memory (heap memory yet to be analyzed by the garbage collector).
- Metadata storage, usually trivial compared to the first two entities.
CPU time consumption by the garbage collector hinges on its mode of operation. Certain garbage collector implementations, labeled as "stop-the-world," completely suspend program execution during garbage collection, leading to a waste of CPU time on non-productive tasks.
In Go's context, the garbage collector is not entirely "stop-the-world" and performs much of its work, including heap marking, in parallel with the application's execution. However, it does entail some restrictions and periodically halts the execution of active code during a cycle.
Made it this far? Great, let's move on.
Managing the Garbage Collector
Controlling the garbage collector in Go can be achieved through a specific parameter: the GOGC environment variable or its functional equivalent, SetGCPercent, found in the runtime/debug package.
The GOGC parameter dictates the percentage of newly allocated heap memory, relative to the live heap, at which garbage collection is triggered.
By default, GOGC is set at 100, signifying that garbage collection triggers when the amount of new memory reaches 100% of the live heap memory.
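For reference, here is a minimal sketch of how this knob looks in code. It relies only on the fact that debug.SetGCPercent returns the previous setting, so calling it with the default value is a convenient way to see what GOGC the process started with:

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// SetGCPercent returns the previous value, so passing the default (100)
	// leaves the behavior unchanged while revealing the current setting.
	previous := debug.SetGCPercent(100)
	fmt.Println("GOGC was:", previous)
}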
Let's consider an example program and track changes in heap size via go tool trace. We'll use Go version 1.20.1 to run it. In this example, the performMemoryIntensiveTask function consumes a substantial amount of heap memory. The program starts a worker pool with NumWorkers workers and a number of tasks equal to NumTasks.
package main

import (
	"fmt"
	"os"
	"runtime/debug"
	"runtime/trace"
	"sync"
	"time"
)

const (
	NumWorkers    = 4     // Number of workers.
	NumTasks      = 500   // Number of tasks.
	MemoryIntense = 10000 // Size of memory-intensive task (number of elements).
)

func main() {
	// Write to the trace file.
	f, _ := os.Create("trace.out")
	trace.Start(f)
	defer trace.Stop()

	// Set the target percentage for the garbage collector. Default is 100%.
	debug.SetGCPercent(100)

	// Task queue and result queue.
	taskQueue := make(chan int, NumTasks)
	resultQueue := make(chan int, NumTasks)

	// Start workers.
	var wg sync.WaitGroup
	wg.Add(NumWorkers)
	for i := 0; i < NumWorkers; i++ {
		go worker(taskQueue, resultQueue, &wg)
	}

	// Send tasks to the queue.
	for i := 0; i < NumTasks; i++ {
		taskQueue <- i
	}
	close(taskQueue)

	// Close the result queue once all workers are done.
	go func() {
		wg.Wait()
		close(resultQueue)
	}()

	// Process the results.
	for result := range resultQueue {
		fmt.Println("Result:", result)
	}

	fmt.Println("Done!")
}

// Worker function.
func worker(tasks <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for task := range tasks {
		result := performMemoryIntensiveTask(task)
		results <- result
	}
}

// performMemoryIntensiveTask is a memory-intensive function.
func performMemoryIntensiveTask(task int) int {
	// Create a large slice.
	data := make([]int, MemoryIntense)
	for i := 0; i < MemoryIntense; i++ {
		data[i] = i + task
	}

	// Imitate latency.
	time.Sleep(10 * time.Millisecond)

	// Calculate the result by summing the slice's values.
	result := 0
	for _, v := range data {
		result += v
	}
	return result
}
To trace the program's execution, the results are written to the trace.out file:
// Writing to the trace file.
f, _ := os.Create("trace.out")
trace.Start(f)
defer trace.Stop()
By leveraging go tool trace, one can observe fluctuations in heap size and analyze the garbage collector's behavior within the program. Note that the precise details and capabilities of go tool trace may vary across Go versions, so consulting the official documentation for version-specific information is advisable.
The Default Value of GOGC
The GOGC parameter can be set via the debug.SetGCPercent function in the runtime/debug package. By default, GOGC is configured at 100%.
To run our program, use the following command:
go run main.go
After the program runs, a trace.out file will be generated. To analyze it, execute the following command:
go tool trace trace.out
With a GOGC value of 100, the garbage collector was triggered 16 times, consuming a total of 14 ms in our example.
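If you'd rather cross-check those numbers without the trace UI, the runtime/debug package can report them directly. A minimal sketch, assuming it runs right before the program exits (the pause total covers only stop-the-world phases, so it will be smaller than the overall GC time shown in the trace):

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// ... the workload from the example above would run here ...

	// ReadGCStats reports how many collections ran and the cumulative
	// stop-the-world pause time since the program started.
	var stats debug.GCStats
	debug.ReadGCStats(&stats)
	fmt.Printf("GC cycles: %d, total pause: %v\n", stats.NumGC, stats.PauseTotal)
}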
Increasing GC Frequency
If we run the code after setting debug.SetGCPercent(10), the garbage collector is invoked more frequently: it activates when the current heap size reaches 10% of the live heap size.
In other words, if the live heap size is 10 MB, the garbage collector will engage when the current heap's size reaches 1 MB.
With a GOGC value of 10, the garbage collector was invoked 38 times, and the total garbage collection time was 28 ms.
Decreasing GC Frequency
Running the same program with debug.SetGCPercent(1000) causes the garbage collector to trigger only when the current heap size reaches 1000% of the live heap size.
In this case, the garbage collector is activated once, executing for 2 ms.
Disabling GC
You can also disable the garbage collector by setting GOGC=off or using debug.SetGCPercent(-1).
With GC turned off, the heap in the application grows continuously for as long as the program runs.
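For completeness, here is a minimal sketch of disabling the collector from code and restoring the previous setting afterwards:

package main

import (
	"runtime/debug"
)

func main() {
	// Disable the collector; SetGCPercent returns the previous setting,
	// so it can be restored once the memory-critical section is done.
	old := debug.SetGCPercent(-1)
	defer debug.SetGCPercent(old)

	// ... memory-critical work would go here ...
}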
Heap Memory Occupancy
In reality, memory allocation for the live heap does not occur as periodically and predictably as it appears in the trace.
The live heap can dynamically change with each garbage collection cycle, and under certain conditions, spikes in its absolute value can occur.
To simulate this scenario, we can run the program in a container with a memory limit, which can lead to out-of-memory (OOM) errors.
In this example, the program runs in a container with a memory limit of 10 MB for testing purposes. The Dockerfile description is as follows:
FROM golang:latest as builder
WORKDIR /src
COPY . .
RUN go env -w GO111MODULE=on
RUN go mod vendor
RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -a -installsuffix cgo -o app ./cmd/
FROM golang:latest
WORKDIR /root/
COPY --from=builder /src/app .
EXPOSE 8080
CMD ["./app"]
The Docker-compose description is:
version: '3'
services:
  my-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    deploy:
      resources:
        limits:
          memory: 10M
Launching the container with debug.SetGCPercent(1000) leads to an OOM error:
docker-compose build
docker-compose up
The container crashes with an error code 137, indicating an out-of-memory situation.
Head back if you missed anything :)
Avoiding OOM Errors
Starting from Go version 1.19, Golang introduced "soft memory management" with the GOMEMLIMIT option. This feature uses the GOMEMLIMIT environment variable to set an overall soft memory limit for the Go runtime, for example GOMEMLIMIT=8MiB, where 8 MiB is the memory size.
This mechanism was designed to address the OOM issue. When GOMEMLIMIT is enabled, the garbage collector is invoked periodically to keep the heap size within a certain limit, avoiding memory overload.
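Since Go 1.19 the same limit can also be set from code via debug.SetMemoryLimit. Here is a minimal sketch that pairs it with a disabled GOGC, a common combination; the 8 MiB figure simply mirrors the example above:

package main

import (
	"runtime/debug"
)

func main() {
	// Turn off the rate-based trigger and rely on the soft memory limit alone:
	// the collector now runs only as total runtime memory approaches the limit.
	debug.SetGCPercent(-1)

	// SetMemoryLimit takes bytes and covers all memory managed by the Go
	// runtime, not just the live heap.
	debug.SetMemoryLimit(8 << 20) // 8 MiB

	// ... application code ...
}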
Understanding the Cost
GOMEMLIMIT is a powerful but double-edged tool. It can lead to a situation known as a "death spiral." When the overall memory size approaches GOMEMLIMIT due to live heap growth or persistent goroutine leaks, the garbage collector is constantly invoked based on the limit.
Frequent garbage collector invocations can lead to increased CPU usage and decreased program performance. Unlike the OOM error, a death spiral is challenging to detect and fix.
GOMEMLIMIT doesn't provide a 100% guarantee that the memory limit will be strictly enforced; the runtime is allowed to use memory beyond it. To keep a death spiral from consuming the whole machine, the runtime also caps how much CPU the garbage collector may use (roughly half of the available CPU), so it cannot completely starve the application.
Where to Apply GOMEMLIMIT and GOGC
GOMEMLIMIT can be advantageous in various scenarios:
- When running an application in a memory-limited container, it is good practice to set GOMEMLIMIT so that 5-10% of the available memory is left as headroom.
- When dealing with resource-intensive code, real-time management of GOMEMLIMIT can be beneficial.
- When running an application as a script in a container, disabling the garbage collector but setting GOMEMLIMIT can enhance performance and prevent exceeding the container's resource limits.
Avoid using GOMEMLIMIT in the following situations:
- Refrain from defining memory constraints when your program is already near the memory constraints of its operating environment.
- Avoid implementing memory restrictions when deploying your program in an execution environment that you do not oversee, particularly if your program's memory consumption is directly related to its input data. This is particularly relevant for tools like command-line interfaces or desktop applications.
It is evident that by taking a deliberate approach, we can effectively control specific program settings, including the garbage collector and GOMEMLIMIT. Nonetheless, it is crucial to thoroughly assess the approach for implementing these settings.
Similar to this, I personally run a developer-led community on Slack, with no ads, I guarantee, where we discuss these kinds of implementations, integrations, some truth bombs, weird chats, virtual meets, and everything that helps a developer remain sane ;) After all, too much knowledge can be dangerous too.
I'm inviting you to join our free community, take part in discussions, and share your freaking experience & expertise. You can fill out this form, and a Slack invite will ring your email in a few days. We have amazing folks from some of the great companies (Atlassian, Scaler, Cisco, IBM and more), and you wouldn't wanna miss interacting with them. Invite Form
And I would be highly obliged if you can share that form with your dev friends, who are givers.