@fogti fogti commented Jan 10, 2026

I realized that by "just" factoring out all path traversal handling, I don't even need to introduce an std::path::Path equivalent.

As far as I know, the only disadvantage of this approach is the re-allocations during traversal into the fuse and uhyve filesystems.
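
For illustration, here is a minimal sketch of what component-based path handling can look like, assuming POSIX-style path strings; the helper names `path_components` and `join_components` are made up for this sketch and are not part of this PR. The second helper is where the re-allocation mentioned above would come in: backends such as fuse and uhyve still expect a flat path string, so the components have to be re-joined into a fresh `String`.

```rust
/// Split a POSIX-style path into normalized components,
/// resolving `.` and `..` purely lexically.
fn path_components(path: &str) -> Vec<&str> {
    let mut components = Vec::new();
    for part in path.split('/') {
        match part {
            "" | "." => {}                // skip empty and current-dir parts
            ".." => { components.pop(); } // go up one level (lexically)
            _ => components.push(part),
        }
    }
    components
}

/// Re-join components for a backend that wants a flat path string;
/// this allocation per traversal is the trade-off noted above.
fn join_components(components: &[&str]) -> String {
    let mut s = String::new();
    for c in components {
        s.push('/');
        s.push_str(c);
    }
    if s.is_empty() {
        s.push('/');
    }
    s
}
```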

@mkroening mkroening self-assigned this Jan 10, 2026
@fogti fogti force-pushed the path branch 3 times, most recently from 19bd9ce to aef532d on January 10, 2026 at 15:13
@github-actions github-actions bot left a comment

Benchmark Results

| Benchmark | Current: 6c0b8d6 | Previous: e4ac074 | Performance Ratio |
|---|---|---|---|
| startup_benchmark Build Time | 98.97 s | 98.80 s | 1.00 |
| startup_benchmark File Size | 0.86 MB | 0.82 MB | 1.06 |
| Startup Time - 1 core | 0.95 s (±0.04 s) | 0.94 s (±0.03 s) | 1.00 |
| Startup Time - 2 cores | 0.94 s (±0.03 s) | 0.96 s (±0.03 s) | 0.98 |
| Startup Time - 4 cores | 0.98 s (±0.03 s) | 0.94 s (±0.03 s) | 1.03 |
| multithreaded_benchmark Build Time | 101.60 s | 100.58 s | 1.01 |
| multithreaded_benchmark File Size | 0.91 MB | 0.97 MB | 0.94 |
| Multithreaded Pi Efficiency - 2 Threads | 85.94 % (±8.07 %) | 90.29 % (±8.49 %) | 0.95 |
| Multithreaded Pi Efficiency - 4 Threads | 42.10 % (±3.63 %) | 43.48 % (±3.34 %) | 0.97 |
| Multithreaded Pi Efficiency - 8 Threads | 24.81 % (±1.47 %) | 25.20 % (±1.85 %) | 0.98 |
| micro_benchmarks Build Time | 93.89 s | 163.78 s | 0.57 |
| micro_benchmarks File Size | 0.91 MB | 0.97 MB | 0.94 |
| Scheduling time - 1 thread | 62.67 ticks (±2.95 ticks) | 99.58 ticks (±31.86 ticks) | 0.63 |
| Scheduling time - 2 threads | 35.22 ticks (±5.09 ticks) | 55.55 ticks (±16.94 ticks) | 0.63 |
| Micro - Time for syscall (getpid) | 4.12 ticks (±0.60 ticks) | 7.42 ticks (±3.84 ticks) | 0.56 |
| Memcpy speed - (built_in) block size 4096 | 68722.93 MByte/s (±49083.95 MByte/s) | 60347.83 MByte/s (±42940.07 MByte/s) | 1.14 |
| Memcpy speed - (built_in) block size 1048576 | 29656.47 MByte/s (±24257.13 MByte/s) | 25535.97 MByte/s (±21900.21 MByte/s) | 1.16 |
| Memcpy speed - (built_in) block size 16777216 | 28612.11 MByte/s (±23818.23 MByte/s) | 21177.04 MByte/s (±17856.22 MByte/s) | 1.35 |
| Memset speed - (built_in) block size 4096 | 69305.46 MByte/s (±49392.76 MByte/s) | 60998.96 MByte/s (±43392.21 MByte/s) | 1.14 |
| Memset speed - (built_in) block size 1048576 | 30422.07 MByte/s (±24690.11 MByte/s) | 26412.57 MByte/s (±22471.00 MByte/s) | 1.15 |
| Memset speed - (built_in) block size 16777216 | 29382.80 MByte/s (±24253.80 MByte/s) | 21301.98 MByte/s (±17886.43 MByte/s) | 1.38 |
| Memcpy speed - (rust) block size 4096 | 61039.48 MByte/s (±45117.97 MByte/s) | 53768.42 MByte/s (±39033.99 MByte/s) | 1.14 |
| Memcpy speed - (rust) block size 1048576 | 29457.03 MByte/s (±24212.74 MByte/s) | 24336.19 MByte/s (±21041.44 MByte/s) | 1.21 |
| Memcpy speed - (rust) block size 16777216 | 28579.90 MByte/s (±23776.20 MByte/s) | 20742.62 MByte/s (±17390.56 MByte/s) | 1.38 |
| Memset speed - (rust) block size 4096 | 61828.95 MByte/s (±45590.53 MByte/s) | 54278.77 MByte/s (±39313.56 MByte/s) | 1.14 |
| Memset speed - (rust) block size 1048576 | 30234.34 MByte/s (±24655.96 MByte/s) | 25118.65 MByte/s (±21537.32 MByte/s) | 1.20 |
| Memset speed - (rust) block size 16777216 | 29351.50 MByte/s (±24215.94 MByte/s) | 20922.60 MByte/s (±17443.37 MByte/s) | 1.40 |
| alloc_benchmarks Build Time | 91.89 s | 150.20 s | 0.61 |
| alloc_benchmarks File Size | 0.94 MB | 0.89 MB | 1.05 |
| Allocations - Allocation success | 100.00 % | 100.00 % | 1 |
| Allocations - Deallocation success | 100.00 % | 100.00 % | 1 |
| Allocations - Pre-fail Allocations | 100.00 % | 100.00 % | 1 |
| Allocations - Average Allocation time | 5848.19 Ticks (±229.18 Ticks) | 10243.26 Ticks (±359.99 Ticks) | 0.57 |
| Allocations - Average Allocation time (no fail) | 5848.19 Ticks (±229.18 Ticks) | 10243.26 Ticks (±359.99 Ticks) | 0.57 |
| Allocations - Average Deallocation time | 751.65 Ticks (±111.38 Ticks) | 2287.98 Ticks (±940.40 Ticks) | 0.33 |
| mutex_benchmark Build Time | 91.88 s | 152.42 s | 0.60 |
| mutex_benchmark File Size | 0.91 MB | 0.97 MB | 0.94 |
| Mutex Stress Test Average Time per Iteration - 1 Threads | 12.56 ns (±0.70 ns) | 18.06 ns (±4.09 ns) | 0.70 |
| Mutex Stress Test Average Time per Iteration - 2 Threads | 13.06 ns (±0.79 ns) | 19.90 ns (±2.71 ns) | 0.66 |

This comment was automatically generated by workflow using github-action-benchmark.

prefix: Option<String>,
attr: FileAttr,
original_prefix: Arc<str>,
prefix: String,
@fogti fogti (PR author) commented:
It might be a good idea to instead store in prefix only the part that is actually "beyond" original_prefix.
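
A rough sketch of that suggestion, assuming the node keeps the mount-time prefix as an `Arc<str>`; the struct name `FuseNode` and the `full_path` helper are hypothetical and only meant to show the shape:

```rust
use std::sync::Arc;

struct FuseNode {
    original_prefix: Arc<str>, // mount-time prefix, shared and never modified
    suffix: String,            // only the part "beyond" original_prefix
}

impl FuseNode {
    /// Join prefix and suffix only when a backend call actually needs
    /// the full path, instead of keeping a second full copy around.
    fn full_path(&self) -> String {
        let mut path =
            String::with_capacity(self.original_prefix.len() + self.suffix.len() + 1);
        path.push_str(&self.original_prefix);
        if !self.suffix.is_empty() {
            path.push('/');
            path.push_str(&self.suffix);
        }
        path
    }
}
```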


fn traverse_stat(&self, components: &mut Vec<&str>) -> io::Result<FileAttr> {
let path = self.traversal_path(components);
async fn stat(&self) -> io::Result<FileAttr> {
@fogti fogti (PR author) commented:
I had to change this method from recursion to a loop to avoid having to allocate a Box for the .stat call (otherwise, calling .await on it inside the function would lead to an infinitely sized future).
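
For context, a self-contained illustration (not hermit code) of why the recursive form needs a `Box` and how a loop avoids it:

```rust
// Recursive form: the generated future would contain itself, so rustc only
// accepts it if the recursive call is put behind indirection.
async fn walk_recursive(components: &[&str]) -> usize {
    match components.split_first() {
        None => 0,
        // Without `Box::pin`, rustc rejects this with
        // "recursion in an `async fn` requires boxing".
        Some((_first, rest)) => 1 + Box::pin(walk_recursive(rest)).await,
    }
}

// Loop form: a single, finite state machine and no per-level allocation.
// In the real stat path, the loop body is where `traverse_once(component).await?`
// and the final `.stat().await` would happen.
async fn walk_iterative(components: &[&str]) -> usize {
    let mut depth = 0;
    for _component in components {
        depth += 1;
    }
    depth
}
```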

fn dup(&self) -> Box<dyn VfsNode>;

/// Traverse into a subdirectory or file
async fn traverse_once(&self, _component: &str) -> io::Result<Box<dyn VfsNode>> {
@fogti fogti (PR author) commented:
TODO: introduce traverse_multiple.
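
One possible shape for that `traverse_multiple` (an assumption, not part of this PR), sketched over an object-safe async trait via the `async_trait` crate with stand-in types: the provided default simply steps through `traverse_once`, while backends like fuse or uhyve could override it to resolve the whole remaining path in a single request.

```rust
use async_trait::async_trait;

type IoResult<T> = Result<T, ()>; // stand-in for io::Result
struct FileAttr;                  // stand-in for the real FileAttr

#[async_trait]
trait VfsNode: Send + Sync {
    fn dup(&self) -> Box<dyn VfsNode>;

    /// Traverse into a subdirectory or file
    async fn traverse_once(&self, component: &str) -> IoResult<Box<dyn VfsNode>>;

    async fn stat(&self) -> IoResult<FileAttr>;

    /// Default: fall back to stepping one component at a time.
    /// Backends can override this to avoid a round-trip per component.
    async fn traverse_multiple(&self, components: &[&str]) -> IoResult<Box<dyn VfsNode>> {
        let mut node = self.dup();
        for &component in components {
            node = node.traverse_once(component).await?;
        }
        Ok(node)
    }
}
```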
