CONCURRENCY · Last verified April 2026

Goroutines vs Tokio: how Rust and Go handle concurrency differently.

The mental model

GO GOROUTINES

M:N green threads

  • Runtime-managed thread pool (M OS threads, N goroutines)
  • Preemptively scheduled (since Go 1.14 the runtime can interrupt a goroutine at nearly any point)
  • ~2KB stack, grows automatically up to 1GB default limit
  • Channels are first-class language primitives
  • The go keyword spawns goroutines; select waits on multiple channel operations (see the sketch after this list)
  • context.Context for cancellation, deadlines, request-scoped values
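
To make those primitives concrete, here is a minimal sketch (the worker function, timings, and channel layout are made up for illustration, not taken from any library or benchmark cited here): go spawns the workers, a channel carries results, select waits on whichever event fires first, and a context deadline cancels the stragglers.

package main

import (
    "context"
    "fmt"
    "time"
)

func worker(ctx context.Context, id int, out chan<- string) {
    select {
    case <-time.After(time.Duration(id) * 100 * time.Millisecond):
        out <- fmt.Sprintf("worker %d done", id)
    case <-ctx.Done():
        // deadline hit before this worker finished; just exit
    }
}

func main() {
    // The deadline applies to every goroutine that receives this context.
    ctx, cancel := context.WithTimeout(context.Background(), 250*time.Millisecond)
    defer cancel()

    out := make(chan string)
    for i := 1; i <= 3; i++ {
        go worker(ctx, i, out) // go keyword: spawn a goroutine
    }

    for i := 0; i < 3; i++ {
        select { // block on whichever case is ready first
        case msg := <-out:
            fmt.Println(msg)
        case <-ctx.Done():
            fmt.Println("timed out:", ctx.Err())
            return
        }
    }
}

Channels plus select cover coordination and cancellation without any separate future or task type.
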
RUST TOKIO / ASYNC

Stackless coroutines

  • Futures polled cooperatively by a runtime (Tokio work-stealing scheduler)
  • Stackless: each async block compiles to a state machine, so a task needs no dedicated stack, only a small heap allocation
  • async/await syntax; functions must be marked async
  • tokio::spawn spawns tasks; JoinSet for collections
  • tokio::sync::mpsc for channels; tokio::select! for multi-future ops
  • CancellationToken for structured cancellation; drop-based cleanup

100k concurrent tasks: memory comparison

Source: Mauwia Atif, multithreading benchmark (Medium)

TOKIO (RUST)
~300 MB
For 100,000 concurrent Tokio tasks; roughly 3 KB of overhead per task.
GOROUTINES (GO)
~800 MB
For 100,000 goroutines; roughly 8 KB each in this benchmark (the initial stack is 2 KB, the rest is runtime overhead).
At 1M tasks, Tokio's advantage compounds. For most production services handling hundreds or a few thousand concurrent connections, Go goroutines are entirely practical.
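
Those figures come from the cited benchmark, but the Go side of such a measurement is easy to approximate yourself: park a large number of goroutines and diff runtime.MemStats before and after. The sketch below is illustrative only; the goroutine count and the choice of Sys as the metric are assumptions, not the benchmark's actual code.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    const n = 100_000

    var before, after runtime.MemStats
    runtime.GC()
    runtime.ReadMemStats(&before)

    var wg sync.WaitGroup
    block := make(chan struct{})
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            <-block // park the goroutine so its stack stays allocated
        }()
    }

    runtime.GC()
    runtime.ReadMemStats(&after)
    fmt.Printf("~%d KB per goroutine\n", (after.Sys-before.Sys)/n/1024)

    close(block) // release every goroutine
    wg.Wait()
}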

Spawning concurrent work

Same task in both languages: fetch a list of URLs concurrently and report each result.

Rust / Tokio
use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    let urls = vec![
        "https://example.com/1",
        "https://example.com/2",
        "https://example.com/3",
    ];

    let mut set = JoinSet::new();

    for url in urls {
        set.spawn(async move {
            // reqwest is the de-facto HTTP client
            reqwest::get(url).await
        });
    }

    while let Some(result) = set.join_next().await {
        match result {
            Ok(Ok(resp)) => println!("{}", resp.status()),
            Ok(Err(e)) => eprintln!("error: {e}"),
            Err(e) => eprintln!("task panic: {e}"),
        }
    }
}
Go / goroutines
package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    urls := []string{
        "https://example.com/1",
        "https://example.com/2",
        "https://example.com/3",
    }

    var wg sync.WaitGroup

    for _, url := range urls {
        wg.Add(1)
        go func(u string) {
            defer wg.Done()
            resp, err := http.Get(u)
            if err != nil {
                fmt.Println("error:", err)
                return
            }
            defer resp.Body.Close() // close the body to avoid leaking the connection
            fmt.Println(resp.Status)
        }(url)
    }

    wg.Wait()
}

Where each model wins

Go goroutines win when:

  • High-concurrency I/O servers (thousands of simultaneous connections)
  • Teams new to systems programming (simpler mental model)
  • Services where Go's stdlib covers the concurrency patterns needed
  • Programs where blocking I/O is the norm, not the exception (see the sketch after this list)
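
The first and last points are really the same pattern: one goroutine per connection doing plain blocking I/O, with the runtime multiplexing those goroutines onto OS threads. A minimal sketch (the echo behaviour and port are illustrative, not from the article):

package main

import (
    "bufio"
    "log"
    "net"
)

func handle(conn net.Conn) {
    defer conn.Close()
    scanner := bufio.NewScanner(conn)
    for scanner.Scan() { // blocking read; the runtime parks this goroutine cheaply
        if _, err := conn.Write(append(scanner.Bytes(), '\n')); err != nil {
            return
        }
    }
}

func main() {
    ln, err := net.Listen("tcp", ":8080")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Println("accept:", err)
            continue
        }
        go handle(conn) // one goroutine per connection scales to thousands of clients
    }
}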

Tokio wins when:

  • Predictable latency is required (no GC-triggered pauses)
  • Very high task counts (100k+) where per-goroutine overhead matters
  • CPU-bound parallelism mixed with async I/O
  • Memory-constrained environments where every byte counts
Benchmark numbers →
More code comparisons →
Discord's GC problem (case study) →