Benchmarks for concurrent hash map implementations in Go

78 points - yesterday at 6:28 PM

Comments

tl2do today at 10:27 PM
I ran benchmarks comparing xsync.Map's memory allocation against orcaman/concurrent-map.

Pure overwrite workload (pre-allocated values):

- xsync.Map: 24 B/op, 1 alloc/op, 31.89 ns/op
- orcaman/concurrent-map: 0 B/op, 0 allocs/op, 70.72 ns/op

Real-world mixed workload (80% overwrites, 20% new keys):

- xsync.Map: 57 B/op, 2 allocs/op, 218.1 ns/op
- orcaman/concurrent-map: 63 B/op, 3 allocs/op, 283.1 ns/op

Go maps reuse memory on overwrites, which is why orcaman achieves 0 B/op for pure updates. xsync's custom bucket structure allocates 24 B/op per write even when overwriting existing keys.
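If you want to see the built-in map behavior in isolation, here's a quick standalone check (my own sketch, not part of the posted benchmarks) using testing.AllocsPerRun from the standard library:

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	m := make(map[string]int, 1)
	m["k"] = 0 // key already present, so the writes below are pure overwrites

	allocs := testing.AllocsPerRun(1000, func() {
		m["k"] = 42 // overwrite reuses the existing bucket slot: no allocation
	})
	fmt.Println("allocs per overwrite:", allocs) // prints 0
}
```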

At 1M writes/second, average bytes/op translate one-to-one into MB/s of allocation, so with 90% overwrites xsync allocates ~27 MB/s and orcaman ~6 MB/s. The trade is 24 bytes/op for roughly 2x speed under contention. Whether that matters depends on whether your bottleneck is CPU or allocation/GC pressure.

Benchmark code: standard Go testing framework, 8 workers, 100k keys.
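The harness itself wasn't posted, so here's a minimal sketch of what the pure-overwrite case could look like. The 100k key count comes from the description above; the import path and API assume github.com/puzpuzpuz/xsync/v3, and the 8 workers would be set with -cpu=8:

```go
package bench

import (
	"strconv"
	"testing"

	"github.com/puzpuzpuz/xsync/v3"
)

const numKeys = 100_000

// Pure-overwrite case: every key exists before the timer starts, so each
// Store hits an existing entry. Run with `go test -bench=. -benchmem -cpu=8`
// to match the 8 workers described in the post.
func BenchmarkXsyncOverwrite(b *testing.B) {
	m := xsync.NewMap()
	keys := make([]string, numKeys)
	for i := range keys {
		keys[i] = strconv.Itoa(i)
		m.Store(keys[i], i) // pre-populate so the timed writes never insert
	}
	b.ReportAllocs()
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			m.Store(keys[i%numKeys], i)
			i++
		}
	})
}
```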

withinboredom today at 9:29 PM
Looks good! There are a couple of important things missing from the benchmarks though:

- CPU usage under concurrency: many of these implementations spin-lock or use atomics, which can use up to 100% CPU time just spinning.

- Latency under concurrency: atomics cause cache-line bouncing, which kills latency, especially p99 latency (a rough way to measure this is sketched below).
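A rough sketch (mine, not from the post) of capturing per-op write latency under contention and reporting the p99 rather than just the mean. It assumes the github.com/puzpuzpuz/xsync/v3 API, and note that time.Now overhead (tens of ns per call) inflates the absolute numbers, so treat the output as comparative only:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"sync"
	"time"

	"github.com/puzpuzpuz/xsync/v3"
)

func main() {
	const workers, opsPerWorker = 8, 100_000
	m := xsync.NewMap()

	var mu sync.Mutex
	all := make([]time.Duration, 0, workers*opsPerWorker)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			local := make([]time.Duration, 0, opsPerWorker)
			for i := 0; i < opsPerWorker; i++ {
				start := time.Now()
				m.Store(strconv.Itoa(i%1000), i) // small hot key set -> contention
				local = append(local, time.Since(start))
			}
			mu.Lock()
			all = append(all, local...)
			mu.Unlock()
		}()
	}
	wg.Wait()

	sort.Slice(all, func(i, j int) bool { return all[i] < all[j] })
	fmt.Println("p50:", all[len(all)/2], "p99:", all[len(all)*99/100])
}
```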

candiddevmike today at 10:37 PM
Idk why, but I tend to shy away from non-stdlib packages that use unsafe (like xsync). I'm sure the code is fine, but I'd rather take the performance hit, I guess.
vanderZwan today at 8:50 PM
I don't write Go, but respect to the author for trying to list trade-off considerations for each of the implementations tested, and not just proclaiming their library the overall winner.
eatonphil today at 8:56 PM
Will we also eventually get a generic sync.Map?
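In the meantime, a thin generic wrapper over sync.Map recovers type safety at the call site. A minimal sketch (my own, not a stdlib proposal; the package name is made up):

```go
package syncx

import "sync"

// Map is a typed facade over sync.Map. Keys must be comparable, as with
// the built-in map type.
type Map[K comparable, V any] struct {
	m sync.Map
}

func (m *Map[K, V]) Store(key K, value V) { m.m.Store(key, value) }

func (m *Map[K, V]) Load(key K) (V, bool) {
	v, ok := m.m.Load(key)
	if !ok {
		var zero V
		return zero, false
	}
	return v.(V), true // safe: only Store can put values in, and it takes a V
}

func (m *Map[K, V]) Delete(key K) { m.m.Delete(key) }
```

Note this only fixes the API: values are still boxed into interface{} inside sync.Map, so it wouldn't change the allocation numbers discussed upthread.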