Vec vs VecDeque in Rust: When Your Queue Choice Actually Matters
Rust's Vec and VecDeque both store elements in order. Both have a drain method with the same signature. But draining 10 items from the front of a 100-element Vec silently shifts the remaining 90 in memory, while VecDeque moves zero.
Table of Contents
- The Problem: Paginated APIs and Scroll Latency
- What's a Vec?
- What Happens When You Drain from the Front
- Quick Primer: What Does "O(n)" Mean?
- What's a VecDeque?
- The Implementation
- How Rust Optimized This Further
- The Standard Library Performance Table
- When to Use Which
- Takeaways
The Problem: Paginated APIs and Scroll Latency
Say you're building a terminal app that lets you browse GitHub repositories by topic. You search "rust" and get a scrollable table of results. The GitHub API returns 10 repos per page, and each page takes around 300ms to fetch.
Without any buffering, this is what happens:
User scrolls to row 10
→ App fetches page 2 from GitHub API
→ 300ms wait...
→ User sees results
User scrolls to row 20
→ App fetches page 3
→ 300ms wait...
→ User sees results
Every page boundary is a 300ms freeze. The UI locks up while the user waits.
The Fix: A Prefetch Buffer
The app runs background tasks that fetch pages ahead of time and store them in a buffer. When the user scrolls past the visible list, a batch gets pulled from the buffer instantly. No network wait.
The buffer continuously refills itself. After a drain brings it below 100 repos, it kicks off another background fetch. The user never notices because the refill happens while they're still reading.
This creates a cycle:
User scrolls past bottom
→ drain 10 repos from front of buffer (instant)
→ append them to display list
→ buffer drops below 100
→ background fetch starts for next API page
→ API page arrives, 10 repos pushed to back of buffer
→ buffer restored to ~100
→ ready for next scroll
This buffer needs two operations to be fast: push to the back (API pages arrive) and drain from the front (user scrolls). That's a FIFO queue. First in, first out.
So which Rust data structure should back it?
What's a Vec?
If you've used arrays or lists in any language, you already know what a Vec is: Rust's growable array, a contiguous block of memory where elements sit next to each other in order.
let mut items = Vec::new();
items.push("first"); // ["first"]
items.push("second"); // ["first", "second"]
items.push("third"); // ["first", "second", "third"]
Adding to the end is fast: the new element goes into the next open slot. Reading any element by index is fast too; because elements are contiguous in memory, jumping to position 5 or position 5,000 takes the same time.
So what's the problem?
What Happens When You Drain from the Front
Say you have 100 items in a Vec and you want to remove the first 10:
Before drain:
memory: [A][B][C][D][E][F][G][H][I][J][K][L] ... [CV]
0 1 2 3 4 5 6 7 8 9 10 11 99
drain(..10) → remove A through J
After drain:
Step 1: collect A-J into output vec (the 10 items you wanted)
Step 2: shift K through CV left by 10 positions
memory: [K][L][M][N][O][P][Q][R][S][T][U][V] ... [CV]
0 1 2 3 4 5 6 7 8 9 10 11 89
90 elements moved via ptr::copy (memmove)
To produce that "after" state, Rust has to shift every remaining element to the left. All 90 of them. Vec::drain(..n) uses ptr::copy (Rust's equivalent of C's memmove) to shift the tail back to index 0.
Why? Because a Vec guarantees contiguous memory. Element 0 has to be at the start. If you remove the items at the start, everything else slides over to fill the gap.
With 1,000 items, that's 990 elements shifting. With 10,000, it's 9,990. The cost grows with the size of the buffer, not the size of the batch you're draining.
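A small sketch makes the shift visible (the values here are illustrative):

```rust
fn main() {
    // A 100-element buffer, values 0..100.
    let mut buffer: Vec<u32> = (0..100).collect();

    // Drain the first 10. Vec hands them back in order...
    let batch: Vec<u32> = buffer.drain(..10).collect();
    assert_eq!(batch, (0..10).collect::<Vec<u32>>());

    // ...but to keep the next element at index 0, the remaining
    // 90 elements were shifted left in memory via ptr::copy.
    assert_eq!(buffer.len(), 90);
    assert_eq!(buffer[0], 10);
    println!("first remaining element: {}", buffer[0]);
}
```

The call looks cheap at the call site; the 90-element memmove happens behind the scenes.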
Quick Primer: What Does "O(n)" Mean?
When someone says an operation is O(n), they mean its cost grows proportionally with n.
- O(1): "constant time." Whether you have 10 items or 10 million, the operation takes roughly the same amount of work. Example: looking up a value by index in an array.
- O(n): "linear time." The cost scales with the number of items. Double the items, roughly double the time. Example: searching through an unsorted list.
For Vec's drain: removing 10 items from the front of a 100-item Vec costs O(90), because 90 elements shift. From a 10,000-item Vec, it's O(9,990). The batch size stays at 10, but the cost keeps growing with the buffer size.
What's a VecDeque?
VecDeque stands for "double-ended queue." Rust's standard library describes it as "a double-ended queue implemented with a growable ring buffer."
A ring buffer is an array that wraps around. Instead of always starting at index 0, it tracks a head pointer (where data starts) and a tail pointer (where data ends). When the head advances past the end of the array, it wraps back to the beginning.
Push to back:
memory: [_][_][_][D][E][F][G][H][_][_]
^head ^tail
push "I" → place at tail, advance tail
memory: [_][_][_][D][E][F][G][H][I][_]
^head ^tail
Drain from front:
memory: [_][_][_][D][E][F][G][H][I][_]
^head ^tail
drain "D" → advance head
memory: [_][_][_][_][E][F][G][H][I][_]
^head ^tail
Zero elements moved. Just a pointer update.
When you drain from the front, nothing moves. The head pointer advances. O(1).
When you push to the back, the element goes at the tail position, and the tail pointer advances. Also O(1).
The buffer size doesn't matter. Nothing shifts.
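The same drain pattern on a VecDeque, as a minimal sketch:

```rust
use std::collections::VecDeque;

fn main() {
    let mut buffer: VecDeque<u32> = (0..100).collect();

    // Drain 10 from the front: the head pointer advances,
    // nothing behind it moves.
    let batch: Vec<u32> = buffer.drain(..10).collect();
    assert_eq!(batch, (0..10).collect::<Vec<u32>>());

    // New items still go to the back, reusing the freed slots.
    buffer.push_back(100);
    assert_eq!(buffer.front(), Some(&10));
    assert_eq!(buffer.back(), Some(&100));
    println!("len after drain + push: {}", buffer.len());
}
```

Same drain(..n) call as Vec, same batch in the same order; only the cost differs.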
The Implementation
The prefetch buffer in the app looks like this:
use std::collections::VecDeque;
pub struct PrefetchBuffer {
repos: VecDeque<Repository>,
cursor: Option<PageCursor>,
in_flight: bool,
}
New API pages arrive at the back:
pub fn receive(&mut self, repos: Vec<Repository>, cursor: PageCursor) {
self.repos.extend(repos);
self.cursor = Some(cursor);
self.in_flight = false;
}
Batches drain from the front when the user scrolls:
const DISPLAY_BATCH_SIZE: usize = 10;
pub fn drain_batch(&mut self) -> Vec<Repository> {
let n = DISPLAY_BATCH_SIZE.min(self.repos.len());
self.repos.drain(..n).collect()
}
That drain(..n) on a VecDeque just advances the head pointer. The 10 drained items get collected into a Vec for the display list, and the remaining buffered repos don't move at all.
If this were a Vec<Repository> instead, the code would look identical (Vec has the same drain API), but each call would shift 90 elements behind the scenes. drain_batch is the hot path, called every time the user scrolls past the bottom. With VecDeque, it's constant time regardless of buffer size.
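The pieces above can be assembled into one runnable sketch. Repository and PageCursor are placeholder stand-ins here (the real types hold API data), and the background-fetch plumbing is omitted:

```rust
use std::collections::VecDeque;

// Placeholder stand-ins for the app's real types.
#[derive(Debug, Clone)]
struct Repository(String);
#[derive(Debug, Clone)]
struct PageCursor(u32);

const DISPLAY_BATCH_SIZE: usize = 10;

pub struct PrefetchBuffer {
    repos: VecDeque<Repository>,
    cursor: Option<PageCursor>,
    in_flight: bool,
}

impl PrefetchBuffer {
    pub fn new() -> Self {
        Self { repos: VecDeque::new(), cursor: None, in_flight: false }
    }

    // New API pages arrive at the back: O(m) for m new repos.
    pub fn receive(&mut self, repos: Vec<Repository>, cursor: PageCursor) {
        self.repos.extend(repos);
        self.cursor = Some(cursor);
        self.in_flight = false;
    }

    // Batches leave from the front: a head-pointer advance.
    pub fn drain_batch(&mut self) -> Vec<Repository> {
        let n = DISPLAY_BATCH_SIZE.min(self.repos.len());
        self.repos.drain(..n).collect()
    }
}

fn main() {
    let mut buf = PrefetchBuffer::new();
    let page: Vec<Repository> =
        (0..100).map(|i| Repository(format!("repo-{i}"))).collect();
    buf.receive(page, PageCursor(2));

    let batch = buf.drain_batch();
    println!("drained {} repos, {} still buffered",
             batch.len(), buf.repos.len());
}
```

Swapping `VecDeque<Repository>` for `Vec<Repository>` compiles unchanged; only the cost of drain_batch changes.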
How Rust Optimized This Further
A 2023 PR to the Rust standard library specifically optimized VecDeque::drain(..n) (draining from the front). The optimization compiles down to roughly 15-20 assembly instructions that adjust the ring buffer's head pointer. The PR showed 3-4x performance improvements for this specific pattern.
The theory lines up with the implementation. Rust's standard library has been tuned for exactly this drain-from-front pattern.
The Standard Library Performance Table
The Rust standard library docs publish the cost of every operation for each collection. Here's the relevant subset:
| | get(i) | insert(i) | remove(i) | append(Vec(m)) | split_off(i) |
|---|---|---|---|---|---|
| Vec | O(1) | O(n-i)* | O(n-i) | O(m)* | O(n-i) |
| VecDeque | O(1) | O(min(i, n-i))* | O(min(i, n-i)) | O(m)* | O(min(i, n-i)) |
| LinkedList | O(min(i, n-i)) | O(min(i, n-i)) | O(min(i, n-i)) | O(1) | O(min(i, n-i)) |
| HashMap | O(1)~ | O(1)~* | O(1)~ | N/A | N/A |
| BTreeMap | O(log(n)) | O(log(n)) | O(log(n)) | N/A | N/A |
n is the collection size, i is the index, * means amortized, ~ means expected cost.
Reading the table for our use case
The prefetch buffer drains from the front, so i = 0. Plug that into the remove(i) column:
- Vec: O(n - i) = O(n - 0) = O(n). Cost scales with buffer size.
- VecDeque: O(min(i, n - i)) = O(min(0, n)) = O(0), effectively O(1). Constant, regardless of buffer size.
For a 100-item buffer draining 10 from the front: Vec does O(90) work per drain. VecDeque does O(1). Same API call, very different cost.
Two footnotes from the docs
Rust's collections never automatically shrink, so removal operations aren't amortized. That means VecDeque won't release memory after you drain items. It reuses the freed slots when new items get pushed. For a buffer that continuously fills and drains, this is exactly what you want: no allocation churn.
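The no-shrink behavior is easy to observe: capacity stays put across a drain. A small sketch:

```rust
use std::collections::VecDeque;

fn main() {
    let mut buffer: VecDeque<u32> = (0..100).collect();
    let cap_before = buffer.capacity();

    // Drain half the buffer: length drops, capacity doesn't.
    buffer.drain(..50);
    assert_eq!(buffer.len(), 50);
    assert_eq!(buffer.capacity(), cap_before);

    // Refilling reuses the freed slots: no reallocation needed.
    buffer.extend(0..50);
    assert_eq!(buffer.capacity(), cap_before);
    println!("capacity held steady at {cap_before}");
}
```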
Where ties occur, Vec is generally faster than VecDeque. Contiguous memory means better cache locality. So if two operations have the same big-O complexity, Vec wins. VecDeque only pulls ahead when the complexity is genuinely different, like front removal: O(n) vs O(1).
When to Use Which
Use Vec when:
- You mostly push to the back and read by index
- You don't remove from the front
- You need a contiguous memory layout (e.g., converting to a slice with as_slice())
Use VecDeque when:
- Items enter at one end and leave from the other (FIFO queue)
- You need to remove from the front without shifting everything
- You don't need contiguous memory layout
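One note on the contiguity point: a VecDeque's data may wrap around the ring, so getting a single slice takes an extra step (make_contiguous, which may move elements), while Vec's as_slice is free:

```rust
use std::collections::VecDeque;

fn main() {
    // Vec: always contiguous, slicing costs nothing.
    let vec: Vec<u32> = vec![1, 2, 3];
    let s: &[u32] = vec.as_slice();
    assert_eq!(s, &[1, 2, 3]);

    // VecDeque: after pops and pushes the data may wrap
    // around the ring buffer's internal array.
    let mut deque: VecDeque<u32> = VecDeque::from(vec![1, 2, 3]);
    deque.pop_front();
    deque.push_back(4);

    // make_contiguous rearranges elements (O(n) worst case)
    // so a single slice can be handed out.
    let s: &[u32] = deque.make_contiguous();
    assert_eq!(s, &[2, 3, 4]);
}
```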
Takeaways
Vec::drain(..n) and VecDeque::drain(..n) have the same signature. One shifts every remaining element. The other moves a pointer. On a hot path like scroll-triggered draining, that gap adds up.
The collections performance table in the standard library docs lays out the costs. Check it before reaching for Vec out of habit.