
Maksim Gritchin


Memory Allocations in Rust

Introduction

Welcome to this in-depth tutorial on memory allocations in Rust. As a developer, understanding how Rust manages memory is crucial for writing efficient and safe programs. This guide is the result of analysing several expert sources to provide you with a comprehensive overview of memory management, not just in Rust, but in programming languages in general.

It's important to note that some concepts discussed here can be quite complex and may require further research on your part to fully grasp. Don't be discouraged if you find certain topics challenging – memory management is a deep subject, and even experienced developers continually learn new aspects of it.

We'll start with basic concepts that apply to many programming languages and then focus on Rust-specific implementations. By the end of this tutorial, you'll have a solid foundation in Rust's memory allocation strategies and how to implement them effectively in your projects.

Memory Layout Basics

Before we delve into Rust-specific concepts, it's essential to understand the basic memory layout of a program.

Executable Binary Structure

When you compile a Rust program, the result is an executable binary. When that binary runs, the operating system kernel gives the process its own virtual address space, whose addresses are mapped on demand to physical memory.

ELF Executable Structure

When discussing the structure of executables, it's crucial to distinguish between the file format on disk and the memory layout during runtime. Let's focus on the Executable and Linkable Format (ELF), commonly used in Linux systems.

ELF File Structure

An ELF file consists of several parts:

  1. ELF Header:
    • Contains metadata about the file type, target architecture, entry point, etc.
  2. Program Header Table:
    • Describes how to create a process/memory image for runtime execution.
    • Defines segments for the loader.
  3. Section Header Table:
    • Describes the sections of the file in detail.
    • More relevant for linking and debugging.
  4. Sections:
    • Contain the actual data and code. Common sections include:
      • .text: Executable code
      • .data: Initialized data
      • .rodata: Read-only data
      • .bss: Uninitialized data (doesn't actually take space in the file)

Key Points

  • Sections vs Segments:
    • Sections are used by the linker and for debugging.
    • Segments are used by the loader to create the process image.
  • Read-Only Nature:
    • The executable file itself is typically read-only.
    • Writable sections are loaded into writable memory at runtime.
  • Runtime vs File Structure:
    • The file structure (sections) differs from the runtime memory layout.
    • The loader uses the program header to set up the runtime memory.

Runtime Memory Layout

When the program is loaded:

  1. The loader reads the program header.
  2. It sets up the process memory according to the defined segments.
  3. This includes setting up the stack and initialising the heap.

It's important to note that the stack and heap are runtime concepts and are not present in the executable file itself. They are set up and managed by the operating system when the program runs. For further reading, see Binary Executable.

Stack vs Heap

Now, let's focus on the two primary types of memory allocation in Rust: stack and heap.

Stack

The stack is a region of memory that follows a Last-In-First-Out (LIFO) order. It's used for:

  • Local variables
  • Function parameters
  • Return addresses

Key characteristics of stack allocation:

  • Fast allocation and deallocation
  • Limited in size (typically 8MB on 64-bit Linux systems for the main thread, 2MB for other threads)
  • Non-fragmented

Heap

The heap is a region of memory used for dynamic allocation. It's managed by Rust's global allocator (a type implementing the GlobalAlloc trait), which by default delegates to the system allocator, often the C library's malloc. Key characteristics include:

  • Flexible size
  • Slower allocation and deallocation compared to the stack
  • Can lead to fragmentation
  • Shared among all threads

Function Stack Frames

When a function is called, a new stack frame is created. This frame stores:

  • Function parameters
  • Local variables
  • Return address

The stack pointer keeps track of the top of the stack, changing as functions are called and return.

Rust's Approach to Memory Management

Rust's memory management is built on two key concepts: ownership and borrowing. These rules allow Rust to manage memory without a garbage collector, ensuring memory safety and preventing common issues like null or dangling pointers.

Ownership Rules

General Ownership Rules

  1. Single Owner Principle:
    • At any given time, a value in memory is typically owned by a single binding.
    • When an owner goes out of scope, Rust automatically deallocates the value.
  2. Move Semantics:
    • Assigning a value to another variable typically moves ownership.
    • After a move, the original binding can no longer be used.
  3. Borrowing:
    • Values can be borrowed without transferring ownership.
    • Multiple immutable borrows or one mutable borrow are allowed at a time.

Let's look at an example:

fn main() {
    let s1 = String::from("hello");
    let s2 = s1;  // ownership of the string moves to s2

    // println!("{}", s1);  // This would cause a compile-time error
    println!("{}", s2);  // This is fine
}

In this example, s1 initially owns the String. When we assign s1 to s2, the ownership is moved, and s1 is no longer valid.

Exceptions and Special Cases

  1. Constants: const N: u32 = 5; Constants do not have an owner in the traditional sense. They are compile-time constructs, inlined where used.
  2. Statics: static N: u32 = 5; Static items have a fixed memory address for the entire program runtime. They are not owned by any particular part of the code.
  3. References to Compile-Time Constants: let r = &42; The literal is promoted to a 'static value at compile time, so the reference does not borrow from a local owner.
  4. Temporary Values: println!("{}", String::from("hello")); Created and destroyed within the expression. Not bound to any variable or subject to normal ownership rules.
  5. Primitive Types: Types that implement the Copy trait (like integers, booleans) are copied rather than moved.
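
A short sketch illustrating a few of these cases (the identifiers are arbitrary):

const MAX: u32 = 5;            // inlined wherever it is used; no runtime owner
static GREETING: &str = "hi";  // fixed address for the whole program

fn main() {
    let r = &42;               // the literal is promoted to a 'static value
    println!("{} {} {}", MAX, GREETING, r);

    // Copy types are duplicated rather than moved:
    let x = 10;
    let y = x;                 // x remains usable afterwards
    println!("{} {}", x, y);

    // A temporary: created and dropped within the expression.
    println!("{}", String::from("hello"));
}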

Borrowing

Borrowing allows you to refer to a value without taking ownership. There are two types of borrows:

  1. Read-only (immutable) borrows: Multiple read-only borrows are allowed simultaneously.
  2. Mutable borrows: Only one mutable borrow is allowed at a time, and it cannot coexist with read-only borrows.

Here's an example:

fn main() {
    let mut s = String::from("hello");

    let r1 = &s;  // read-only borrow
    let r2 = &s;  // another read-only borrow
    println!("{} and {}", r1, r2);

    let r3 = &mut s;  // mutable borrow
    r3.push_str(", world");
    println!("{}", r3);
}

This borrowing system allows Rust to prevent data races at compile-time, a significant advantage over many other programming languages.

Data Types and Memory Allocation

Understanding how different data types are allocated in memory is crucial for writing efficient Rust code.

Primitive Data Types

Categories of Primitive Types

  1. Scalar Types:
    • Integers: i8, i16, i32, i64, i128, isize, u8, u16, u32, u64, u128, usize
    • Floating-point: f32, f64
    • Boolean: bool
    • Character: char
  2. Compound Types:
    • Arrays: [T; N] where T is any type and N is a compile-time constant
    • Slices: &[T] and &mut [T]
    • Tuples: (T, U, ...) where T, U, etc. can be any types
    • String slices: &str
  3. Pointer Types:
    • References: &T and &mut T
    • Raw pointers: *const T and *mut T
    • Function pointers: fn()

The Copy Trait

  • Scalar types (integers, floats, bool, char) implement the Copy trait.
  • Arrays [T; N] and tuples implement Copy if their element types implement Copy.
  • Shared references &T, slices &[T], and string slices &str always implement Copy, regardless of T; mutable references &mut T do not (see the example below).
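
A small example of these rules:

fn main() {
    let a = [1, 2, 3];        // [i32; 3] is Copy because i32 is Copy
    let b = a;                // bitwise copy; `a` is still usable
    println!("{:?} {:?}", a, b);

    let s = String::from("hello");
    let slice: &str = &s;     // &str is a shared reference, so it is Copy
    let slice2 = slice;       // copies the reference, not the String
    println!("{} {}", slice, slice2);

    // String itself is not Copy: `let t = s;` would move it.
}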

Storage Location

  • Both primitive and non-primitive types can be stored on either the stack or the heap.
  • Where a value ends up depends on how it is owned and used, not on its type:
// Primitive type on the heap
let boxed_int: Box<i32> = Box::new(5);

// Array (primitive) on the heap
let boxed_array: Box<[i32; 3]> = Box::new([1, 2, 3]);

// Non-primitive type on the stack
struct Point { x: i32, y: i32 }
let point = Point { x: 0, y: 0 };  // Stored on the stack

Memory Layout

Primitive types typically have a fixed, known size at compile-time. This allows for efficient stack allocation and direct manipulation. However, this doesn't mean they're always stack-allocated. The context of use determines the actual storage location. Some scenarios where a typically stack-allocated value might end up on the heap include:

  • When it's part of a larger data structure that's heap-allocated. For example, if a primitive type is stored in a Vec or Box, it will be on the heap along with the rest of the data structure.
  • When it's used in a closure that outlives the current stack frame.
  • When it's returned from a function as part of a heap-allocated structure.

Compound Data Types

Array

Arrays in Rust have a fixed size known at compile time and are stored inline; for a local variable, that means on the stack. This is different from popular languages like Python or JavaScript, where arrays (or lists) are dynamically sized and heap-allocated. In Rust:

let arr: [i32; 5] = [1, 2, 3, 4, 5];

This array is entirely stack-allocated, which can lead to very efficient memory use and access patterns for fixed-size collections.

Tuples

Tuples group values of different types and, like arrays, are stored inline (on the stack for local variables). They're laid out contiguously in memory, with potential padding for alignment. For example:

let tup: (i32, f64, u8) = (500, 6.4, 1);

With a C-style layout (fields in declaration order), this tuple would look like:

[4 bytes for i32][4 bytes padding][8 bytes for f64][1 byte for u8][7 bytes padding]

The padding ensures that each element is properly aligned in memory. Note that Rust's default representation is free to reorder fields, so in practice the compiler may place the f64 first and shrink the tuple to 16 bytes. You can check the result yourself, as shown below.
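
You can inspect the size and alignment the compiler actually chose with std::mem (a quick sketch; the exact numbers depend on the target):

use std::mem::{align_of, size_of};

fn main() {
    // On a typical 64-bit target this prints 16 and 8: the compiler is free to
    // reorder the fields (placing the f64 first), leaving only 3 bytes of padding.
    println!("size  = {}", size_of::<(i32, f64, u8)>());
    println!("align = {}", align_of::<(i32, f64, u8)>());
}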

Structs

Structs can be named or tuple-like. They are typically allocated on the stack, but their contents can be on the heap if they contain types like String or Vec. Their memory layout is similar to tuples, including potential padding. For example:

struct Point {
    x: i32,
    y: i32,
}

let p = Point { x: 0, y: 0 };

Enums

Enums are stored as a discriminant (usually an integer) to indicate which variant it is, plus enough space to store the largest variant. This allows Rust to optimise memory usage while providing type safety. The memory allocation can be more complex than it first appears:

enum Message {
    Quit,
    Move { x: i32, y: i32 },
    Write(String),
    ChangeColor(i32, i32, i32),
}

In this enum:

  • Quit doesn't need any extra space beyond the discriminant.
  • Move needs space for two i32 values.
  • Write needs space for a String, which is a pointer to heap memory.
  • ChangeColor needs space for three i32 values.

The enum allocates enough space for its largest variant plus the discriminant. Here the largest variant is actually Write: on a 64-bit target a String is three pointer-sized fields (pointer, length, capacity), which is larger than ChangeColor's three i32 values. This means even the Quit variant occupies the same amount of memory as the largest variant, but this approach allows very fast matching and avoids any heap allocation for the enum itself.
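
You can verify this with std::mem::size_of (a sketch that assumes the Message enum above is in scope; exact numbers vary by target):

use std::mem::size_of;

fn main() {
    // On a typical 64-bit target, String is 24 bytes (pointer, length, capacity),
    // so Message comes out at 32 bytes: the largest payload plus the discriminant,
    // rounded up to the enum's 8-byte alignment.
    println!("String:  {} bytes", size_of::<String>());
    println!("Message: {} bytes", size_of::<Message>());
}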

Dynamic Data Types

Vector

Vectors are resizable and store their data on the heap. They keep track of capacity and length:

let mut vec: Vec<i32> = Vec::new();
vec.push(1);
vec.push(2);
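
The length and capacity are visible through len() and capacity(); when a push exceeds the capacity, the vector allocates a larger buffer on the heap and moves the existing elements into it:

fn main() {
    let mut vec: Vec<i32> = Vec::with_capacity(2); // one heap allocation up front
    vec.push(1);
    vec.push(2);
    println!("len = {}, capacity = {}", vec.len(), vec.capacity()); // 2, 2

    vec.push(3); // exceeds the capacity: reallocates and copies the elements
    println!("len = {}, capacity = {}", vec.len(), vec.capacity()); // 3, >= 3
}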

Slice

Slices are views into a contiguous range of elements in an array or vector. They are accessed through a fat pointer: a pointer that carries additional information beyond just the memory address. In the case of a slice, the fat pointer contains:

  1. A pointer to the first element of the slice in memory
  2. The length of the slice

This additional information allows Rust to perform bounds checking and iterate over the slice efficiently without needing to store this information separately or query it at runtime.

let arr = [1, 2, 3, 4, 5];
let slice: &[i32] = &arr[1..3];  // fat pointer: address of arr[1] plus a length of 2

String

Strings in Rust are similar to vectors but are guaranteed to be UTF-8 encoded. This guarantee means:

  1. The bytes of the string always form a valid UTF-8 sequence.
  2. The string can contain any Unicode character, each stored compactly as one to four bytes.
  3. You cannot index a String by an integer position; slicing works on byte offsets and panics if a boundary falls inside a multi-byte character.

This UTF-8 guarantee allows Rust to provide safe and efficient string handling, avoiding issues like invalid byte sequences or incorrect character boundaries that can occur in languages with less strict string encodings.

let s = String::from("hello");
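
A quick illustration of the UTF-8 guarantee: byte length and character count can differ, and slicing must respect character boundaries.

fn main() {
    let s = String::from("héllo");
    println!("{} bytes", s.len());            // 6: 'é' is encoded as two bytes
    println!("{} chars", s.chars().count());  // 5

    // Slicing uses byte offsets; &s[0..2] would panic because it would cut the
    // 'é' in half, while &s[0..3] lands on a character boundary.
    println!("{}", &s[0..3]); // "hé"
}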

Stack Allocation in Rust

In Rust, stack allocation is not limited to primitive types or small objects. Rust allows stack allocation of objects of arbitrary complexity and size, subject only to the stack size limit. This differs from many other programming languages and is an important feature of Rust's memory model. For further reading, see Stack and Heap.

Key Points:

  1. Arbitrary Complexity: In Rust, you can allocate structs, enums, arrays, and other complex types on the stack, not just primitives (see the example after this list).
  2. Size Flexibility: As long as the size is known at compile time and doesn't exceed the stack limit, you can allocate large objects on the stack.
  3. Performance Implications: Stack allocation is generally faster than heap allocation, so this feature can lead to performance benefits.
  4. Stack Size Limit: While you can allocate complex objects on the stack, you still need to be aware of the stack size limit, which is typically much smaller than the heap.
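
For example, a sizeable array can live entirely on the stack, as long as it stays well under the stack limit:

fn main() {
    // 64 KiB on the stack is fine; something approaching the 8 MB default limit
    // (or the 2 MB default for spawned threads) would overflow the stack.
    let buffer = [0u8; 64 * 1024];
    println!("first byte: {}", buffer[0]);
}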

Memory Allocators in Rust

Rust provides flexibility in choosing memory allocators. Let's explore some common options:

Standard Allocator

The standard allocator in Rust uses the system's default allocator (often the C library's malloc). It's a good general-purpose allocator but may not be the most efficient for all scenarios.

Characteristics:

  • Typically grows the heap with brk/sbrk, falling back to mmap for large allocations (glibc malloc behaviour)
  • Memory is counted towards Resident Set Size (RSS)
  • Not the fastest or most memory-efficient
  • Low memory footprint upon initialization

jemalloc

jemalloc is a popular alternative allocator known for its efficiency in multi-threaded environments.

Characteristics:

  • Uses mmap to allocate memory
  • Memory only counts towards RSS when written to
  • Efficient in managing "dirty" pages (memory freed but not returned to OS)
  • High initial memory footprint
  • Can be tuned for performance or memory efficiency for heavy workloads

To use jemalloc in your Rust project:

  1. Add it to your Cargo.toml:
   [dependencies]
   jemallocator = "0.3.2"
  2. Set it as the global allocator in your main Rust file:
   use jemallocator::Jemalloc;

   #[global_allocator]
   static GLOBAL: Jemalloc = Jemalloc;

Microsoft's mimalloc

mimalloc is another high-performance allocator known for its speed and low initial memory footprint.

Characteristics:

  • Very fast
  • Low initial memory footprint
  • Good choice for applications that require quick startup times
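
Setup is analogous to jemalloc. A minimal sketch assuming the mimalloc crate (the Rust bindings to Microsoft's allocator; the version below is illustrative):

// In Cargo.toml:  mimalloc = "0.1"

use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;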

Advanced Memory Management Techniques

Using Box<T> for Heap Allocation

When you need to allocate memory on the heap explicitly, Rust provides the Box<T> type. This is useful for recursive data structures or when you need to ensure a value has a stable memory address.

fn main() {
    let b = Box::new(5);
    println!("b = {}", b);
}

When b goes out of scope, the heap memory is automatically deallocated.
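
Box is also what makes recursive types possible: without the indirection, the compiler could not compute the type's size. A classic cons-list sketch:

// Each Cons cell stores a value plus a heap pointer to the rest of the list,
// so the size of List itself is known at compile time.
enum List {
    Cons(i32, Box<List>),
    Nil,
}

use List::{Cons, Nil};

fn main() {
    let list = Cons(1, Box::new(Cons(2, Box::new(Nil))));
    if let Cons(head, _) = &list {
        println!("head = {}", head);
    }
}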

Reference Counting with Rc<T>

For scenarios where you need shared ownership of data (e.g., in graph-like structures), Rust provides Rc<T> (Reference Counted).

use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("Hello"));
    let b = a.clone();  // Increases the reference count

    println!("a: {}, b: {}", a, b);
}

Rc<T> keeps track of the number of references to a value and only deallocates the value when the reference count reaches zero.
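
You can watch the count change with Rc::strong_count:

use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("Hello"));
    println!("count = {}", Rc::strong_count(&a)); // 1

    {
        let _b = Rc::clone(&a);
        println!("count = {}", Rc::strong_count(&a)); // 2
    } // _b is dropped here, decrementing the count

    println!("count = {}", Rc::strong_count(&a)); // 1 again
}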

Atomic Reference Counting with Arc<T>

For thread-safe reference counting, Rust provides Arc<T> (Atomic Reference Counted). It's similar to Rc<T> but safe to use across multiple threads.

use std::sync::Arc;
use std::thread;

fn main() {
    let s = Arc::new(String::from("shared data"));
    let mut handles = Vec::new();

    for _ in 0..10 {
        let s = Arc::clone(&s);
        handles.push(thread::spawn(move || {
            println!("{}", s);
        }));
    }

    // Join the threads; otherwise main may exit before they get to print.
    for handle in handles {
        handle.join().unwrap();
    }
}

Optimising Memory Usage

Struct Layout

Rust allows you to optimise memory usage by paying attention to struct layout. Let's look at an example and walk through the padding; the byte counts below assume C-style layout (#[repr(C)]), where fields are kept in declaration order:

#[repr(C)]
struct Efficient {
    a: i32,
    b: i32,
    c: i16,
}

#[repr(C)]
struct Inefficient {
    a: i32,
    c: i16,
    b: i32,
}

In the Efficient struct:

  • a occupies 4 bytes
  • b occupies the next 4 bytes
  • c occupies the next 2 bytes
  • 2 bytes of tail padding round the size up to the struct's 4-byte alignment
  • Total: 12 bytes

In the Inefficient struct:

  • a occupies 4 bytes
  • c occupies the next 2 bytes
  • 2 bytes of padding are added to align b
  • b occupies the next 4 bytes
  • Total: 12 bytes

The compiler inserts padding so that each field sits at an offset matching its alignment (usually its size), and a struct's total size is rounded up to a multiple of its alignment. With these particular field types the two orderings happen to come out at the same size, but the general rule stands: ordering fields from largest to smallest minimises interior padding, and the savings grow as the field sizes diverge. Note also that Rust's default (non-repr(C)) representation is free to reorder fields and usually performs this optimisation for you.
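
You can check these numbers with std::mem::size_of. The effect of field ordering is much more visible when field alignments diverge, for example when u8 and u64 fields are mixed (a sketch, again using #[repr(C)] so declaration order is preserved):

use std::mem::size_of;

#[repr(C)]
struct Ordered {    // 8 + 1 + 1, then 6 bytes of tail padding
    big: u64,
    small1: u8,
    small2: u8,
}

#[repr(C)]
struct Scattered {  // 1 + 7 padding + 8 + 1, then 7 bytes of tail padding
    small1: u8,
    big: u64,
    small2: u8,
}

fn main() {
    println!("Ordered:   {} bytes", size_of::<Ordered>());   // 16
    println!("Scattered: {} bytes", size_of::<Scattered>()); // 24
}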

Copy vs Clone

Understanding the difference between Copy and Clone traits can help you optimise memory usage:

  • Copy: Allows bitwise copying of values. Use for small, stack-allocated types.
  • Clone: Allows more complex copying logic. Use for heap-allocated or larger types.
#[derive(Copy, Clone)]
struct Point {
    x: i32,
    y: i32,
}

#[derive(Clone)]
struct ComplexData {
    data: Vec<i32>,
}
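
In use, the difference looks like this (assuming the Point and ComplexData structs above):

fn main() {
    let p1 = Point { x: 1, y: 2 };
    let p2 = p1;              // bitwise copy; p1 is still valid
    println!("{} {}", p1.x, p2.x);

    let d1 = ComplexData { data: vec![1, 2, 3] };
    let d2 = d1.clone();      // explicit, potentially expensive copy of the heap data
    // `let d3 = d1;` would move d1 instead of copying it.
    println!("{} {}", d1.data.len(), d2.data.len());
}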

Option Type Optimization

Rust's Option type is optimized to avoid the cost of a separate tag for many types. For pointer types that can never be null, such as Box<T> and references, Rust uses a niche optimization: the None variant is represented by the forbidden null value, so Option<Box<T>> takes no more space than Box<T> itself.

enum Option<T> {
    Some(T),
    None,
}

let x: Option<Box<i32>> = None;

In this case, x doesn't allocate any heap memory.
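
You can confirm the optimization by comparing sizes (on a typical 64-bit target both lines print 8):

use std::mem::size_of;

fn main() {
    // None is represented by the null pointer, so no separate tag byte is needed.
    println!("Box<i32>:         {} bytes", size_of::<Box<i32>>());
    println!("Option<Box<i32>>: {} bytes", size_of::<Option<Box<i32>>>());
}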

Advanced Concepts

Memory Pages and Virtual Memory

Understanding how the operating system manages memory can help you write more efficient Rust code. The OS hands out memory in pages (usually 4096 bytes): when your program requests memory from the OS, it receives whole pages, and the allocator subdivides them to serve smaller requests.

Virtual Memory allows your program to use more memory than is physically available. The OS maps virtual memory addresses to physical memory or disk storage.

Resident Set Size (RSS) vs Virtual Memory

  • Virtual memory: The address space your program has reserved; it can be much larger than physical RAM.
  • RSS (Resident Set Size): The portion of that memory that is actually resident in physical RAM.

Different allocators manage these differently. For example, jemalloc uses mmap to allocate memory, which only counts towards RSS when written to.

Tuning jemalloc

jemalloc offers various tuning options:

  • Multiple arenas to limit fragmentation
  • Background cleanup threads
  • Profiling options to monitor memory usage

These can be configured through environment variables or at runtime.

Best Practices for Memory Management in Rust

  1. Use stack allocation when possible: Stack allocation is faster and doesn't require explicit deallocation.

  2. Leverage Rust's ownership system: Let Rust's ownership and borrowing rules manage memory for you whenever possible.

  3. Use appropriate data structures: Choose data structures that match your access patterns and memory requirements.

  4. Consider custom allocators for specific use cases: If your application has unique memory requirements, consider implementing a custom allocator.

  5. Profile your application: Use tools like valgrind or Rust-specific profilers to identify memory bottlenecks.

  6. Avoid premature optimization: Focus on writing clear, idiomatic Rust code first. Optimize only when necessary and after profiling.

  7. Use Box<T> for large objects or recursive data structures: Moving the data to the heap keeps stack frames small; when ownership moves, only the pointer is copied.

  8. Be mindful of lifetimes: Understand and use Rust's lifetime system to ensure references remain valid.

  9. Utilize Rc<T> and Arc<T> judiciously: These types are useful for shared ownership but come with a performance cost.

  10. Consider using arena allocators for short-lived objects: This can significantly reduce allocation overhead in some scenarios.
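
As an illustration of the arena idea, here is a minimal sketch assuming the third-party bumpalo crate: each allocation is a cheap pointer bump inside a large chunk, and everything is freed at once when the arena is dropped.

use bumpalo::Bump;

struct Node {
    value: i32,
}

fn main() {
    let arena = Bump::new();

    // Each alloc is a pointer bump; there is no per-object deallocation.
    let a: &mut Node = arena.alloc(Node { value: 1 });
    let b: &mut Node = arena.alloc(Node { value: 2 });
    println!("{} {}", a.value, b.value);

    // All nodes are freed together when `arena` goes out of scope.
}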

Conclusion

Memory management in Rust is a powerful feature that sets it apart from many other programming languages. By understanding and leveraging Rust's ownership model, borrowing rules, and allocation strategies, you can write efficient, safe, and performant code.

Remember that mastering memory management in Rust is a journey. The concepts we've covered here provide a solid foundation, but there's always more to learn. Don't hesitate to dive deeper into Rust's documentation, experiment with different allocation strategies, and engage with the Rust community to further enhance your understanding.

As you continue to work with Rust, you'll become more adept at managing memory efficiently. This will lead to robust, high-performance applications that are free from many common memory-related bugs.

Keep practicing, keep learning, and embrace the challenges – they're opportunities for growth.
