JackTT

Implement rate limit in Golang

#go

This is a short snippet that demonstrates how to implement a rate limit in Golang.

package main

import (
    "fmt"
    "sync"
    "time"
)

var limit = 10

var rate = time.Tick(time.Second / time.Duration(limit))

func main() {
    totalRequests := 100
    var wg sync.WaitGroup
    wg.Add(totalRequests)

    // Rate limit to 10 requests per second
    for i := 0; i < totalRequests; i++ {
        go func(i int) {
            defer wg.Done()
            sendRequest(i + 1)
        }(i)
    }

    // Wait until all requests are completed
    wg.Wait()
}

func sendRequest(i int) {
    <-rate // Wait for the next tick
    fmt.Printf("Completed request %d at %d\n", i, time.Now().Unix())
}


In the above code, the rate variable is a channel of type time.Time that ticks at the desired rate, calculated by dividing one second by the limit.

A receive from the rate channel blocks until the next tick, which means each goroutine invoking <-rate waits for the next tick to occur. This spaces the requests out evenly and keeps them at the desired rate of 10 requests per second.

If the goal is to perform many requests concurrently while maintaining a specific rate limit, consider the token-bucket solution provided by @davidkroell:

package main

import (
    "fmt"
    "sync"
    "time"
)

var limit = 10

var bucket = make(chan struct{}, limit)
var startTime = time.Now()

func main() {
    totalRequests := 100
    var wg sync.WaitGroup
    wg.Add(totalRequests)

    go func() {
        for {
            // the bucket refill routine
            for i := 0; i < limit; i++ {
                bucket <- struct{}{}
            }
            time.Sleep(time.Second)
        }
    }()

    // Rate limit to 10 requests per second
    for i := 0; i < totalRequests; i++ {
        go func(i int) {
            defer wg.Done()
            sendRequest(i + 1)
        }(i)
    }

    // Wait until all requests are completed
    wg.Wait()
}

func sendRequest(i int) {
    <-bucket // get "token" from the bucket
    fmt.Printf("Completed request %3d at %s\n", i, time.Since(startTime))
}

Top comments (2)

David Kröll

Hi, thanks for sharing! I noticed some issues with your code and will suggest a solution here. Looking at the design of this rate limiter, I'd say it's more a 1-request-per-100ms limiter than a 10 req/sec one, as you can see in this slightly modified version:

I use time.Since() to show when each request is handled. You can see in the output that a request is handled every 100ms. If that is the desired behaviour, it's fine. On the other hand, this solution cannot cope with peaks in the requests: for example, doing 5 requests concurrently still takes 500ms, even though 10 per second should be allowed.

In addition, the ticker behind the rate variable is never stopped, so its resources are never cleaned up. That's not a problem here (it's a global variable and lives as long as the program does), but I wanted to mention it.

package main

import (
    "fmt"
    "sync"
    "time"
)

var limit = 10

var rate = time.Tick(time.Second / time.Duration(limit))
var startTime = time.Now()

func main() {
    totalRequests := 100
    var wg sync.WaitGroup
    wg.Add(totalRequests)

    // Rate limit to 10 requests per second
    for i := 0; i < totalRequests; i++ {
        go func(i int) {
            defer wg.Done()
            sendRequest(i + 1)
        }(i)
    }

    // Wait until all requests are completed
    wg.Wait()
}

func sendRequest(i int) {
    <-rate // Wait for the next tick
    fmt.Printf("Completed request %3d at %s\n", i, time.Since(startTime))
}

I did a slight refactoring using the token-bucket rate-limiting approach with channels. The bucket is a buffered channel with 10 slots, and every request receives a token from it. In another goroutine, the bucket is refilled with 10 tokens every second. If no token is left, the request blocks until the bucket is filled up again. This solution handles peaks very well, as you can see in the output: a batch of 10 requests is handled at the same time, and then everything waits for a second.

package main

import (
    "fmt"
    "sync"
    "time"
)

var limit = 10

var bucket = make(chan struct{}, limit)
var startTime = time.Now()

func main() {
    totalRequests := 100
    var wg sync.WaitGroup
    wg.Add(totalRequests)

    go func() {
        for {
            // the bucket refill routine
            for i := 0; i < limit; i++ {
                bucket <- struct{}{}
            }
            time.Sleep(time.Second)
        }
    }()

    // Rate limit to 10 requests per second
    for i := 0; i < totalRequests; i++ {
        go func(i int) {
            defer wg.Done()
            sendRequest(i + 1)
        }(i)
    }

    // Wait until all requests are completed
    wg.Wait()
}

func sendRequest(i int) {
    <-bucket // get "token" from the bucket
    fmt.Printf("Completed request %3d at %s\n", i, time.Since(startTime))
}
JackTT

@davidkroell thank you for your incredibly useful reply.
You're right. My solution may suit some cases, but if the goal is to perform many requests concurrently while maintaining a specific rate limit, your solution is excellent.