
Conversation

@mhillenbrand
Contributor

We (@jkehne and @mhillenbrand) found an issue in how X-Mem generates its pointer array for pointer-chasing micro-benchmarks. Most of the time, the traversed linked list does not cover the complete pointer array (only ~45% of it on average), so the actual working set for random access patterns may be smaller than the size specified with -w. With the proposed fix, we ensure that the pointer chain walks across the full array (i.e., the pointers form a Hamiltonian cycle) at minimal overhead (~4% slowdown in generating the permutation).

A test harness illustrates that the cycles generated by xmem::build_random_pointer_permutation often miss significant parts of the pointer array. The code measures the actual length of the pointer cycle in the generated array, comparing the original version of xmem::build_random_pointer_permutation with our proposed alternative.
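
For illustration, here is a minimal sketch of such a measurement, assuming 64-bit chunks where each slot stores the address of the next slot; the helper name is hypothetical, and the actual harness in this pull request may differ:

```cpp
#include <cstddef>
#include <cstdint>

// Count how many slots the pointer chain visits before returning to its
// starting slot. For a Hamiltonian cycle this equals num_words; with the
// original generator it is often much smaller.
static size_t measure_cycle_length(uintptr_t* arr, size_t num_words) {
    size_t length = 0;
    uintptr_t* p = arr; // begin the chase at slot 0
    do {
        p = reinterpret_cast<uintptr_t*>(*p); // follow one hop
        length++;
    } while (p != arr && length <= num_words);
    return length;
}
```

Because the array encodes a permutation, slot 0 always lies on a cycle, so the loop is guaranteed to terminate.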

This pull request replaces the existing implementation with our proposed alternative generator.

Please see the commit message below for more details.

xmem::build_random_pointer_permutation weaves a random path through a
pointer array, thereby generating a linked list suitable for
latency-sensitive pointer chasing in memory micro-benchmarks.

The resulting permutation should touch the whole pointer array to
maximize the working set of the benchmark thread. In CS terms, the
pointers should form a Hamiltonian cycle in the graph formed by the
pointer array. However, build_random_pointer_permutation often fails
to construct a Hamiltonian cycle: a randomly shuffled permutation
typically decomposes into several disjoint cycles, and the pointer
chase only follows the cycle containing its start element. As a
result, a benchmark thread's working set can be much smaller than
desired, distorting cache miss rates and observed average latencies.

Change xmem::build_random_pointer_permutation to always produce
Hamiltonian cycles cheaply (see the sketch after this list):
* Represent the traversal order of pointers in an int array of indices.
* Shuffle the indices in a way that maintains a Hamiltonian cycle
  starting and ending at 0.
* Impose the traversal order on the pointer array (specialized per
  chunk size).
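
A minimal sketch of this construction, assuming 64-bit chunks; the function name is hypothetical, and the actual commit additionally specializes the final step per chunk size:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// Build a random Hamiltonian cycle over num_words pointer-sized slots.
static void build_hamiltonian_cycle(uintptr_t* arr, size_t num_words) {
    if (num_words == 0)
        return;

    // Step 1: traversal order as an index array, initially 0, 1, 2, ...
    std::vector<size_t> order(num_words);
    std::iota(order.begin(), order.end(), 0);

    // Step 2: shuffle positions 1..n-1 only, keeping order[0] == 0.
    // Linking consecutive entries of *any* such order yields a single
    // cycle over all slots that starts and ends at slot 0, which is
    // what makes the construction cheap.
    std::mt19937_64 rng(std::random_device{}());
    std::shuffle(order.begin() + 1, order.end(), rng);

    // Step 3: impose the traversal order on the pointer array; each
    // slot points at its successor, and the last points back to slot 0.
    for (size_t i = 0; i < num_words; i++) {
        size_t next = (i + 1) % num_words;
        arr[order[i]] = reinterpret_cast<uintptr_t>(&arr[order[next]]);
    }
}
```

An equivalent alternative is Sattolo's algorithm, which shuffles the successor array in place and also guarantees a single cycle over all elements.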

The changes slow build_random_pointer_permutation down by only ~4%
(measured with 64-bit chunks in a 32 MiB array on an i7-3530M with a
4 MiB LLC) but guarantee that the full array always serves as the
working set.
@mgottscho
Contributor

Hi Marius,

Thank you for this pull request and the detailed description of the problem, and I apologize for my delayed response!

I will definitely take a look. I am actually already aware of this problem and had yet to fix it. In an early version of X-Mem, I believe I actually did have a Hamiltonian cycle version of this function. If I recall correctly, I had a compile-time option that allowed users to select between the Hamiltonian and shuffle algorithms. I don't recall why I removed this ability; it was probably due to long runtimes on large working sets. Let me re-investigate my notes and figure out the original reason. Otherwise, I agree that we should change this back to a Hamiltonian cycle to be more correct.

Mark
