
Fix low-severity robustness and accuracy issues #22

Closed

rfxn wants to merge 1 commit into main from fix/low-severity-improvements

Conversation


rfxn commented Feb 19, 2026

Summary

  • Platform-aware fio ioengine: Uses libaio on Linux, falls back to posixaio on other platforms (e.g. macOS, containers without libaio)
  • Memory bandwidth accuracy: Averages over 10 copy iterations instead of a single measurement, reducing variance from OS jitter — consistent with how other benchmarks (tensor cores, GPU compute) use multiple iterations
  • Multi-iteration labeling: When --num-iterations > 1, each result row now shows (iter N) in the task name so runs are distinguishable in the output table
  • GPU memory unit fix: GPUtil returns memory in MiB; previously divided by 1024 (giving GiB mislabeled as GB). Now converts MiB→GB via MiB × 1048576 / 1e9, consistent with the CUDA device properties path that uses bytes / 1e9
  • JSON serialization safety: Added a default handler to json.dumps for numpy integer/float/array types and torch.dtype, preventing TypeError crashes on non-serializable result values
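The platform-aware ioengine selection described above can be sketched as a small helper (the function name is illustrative, not taken from the PR):

```python
import platform

def pick_fio_ioengine() -> str:
    # libaio is Linux-only; posixaio is the portable POSIX AIO fallback
    # for macOS and for containers without the libaio library.
    return "libaio" if platform.system() == "Linux" else "posixaio"
```

The chosen engine would then be passed to fio on the command line, e.g. `--ioengine=libaio`.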
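The averaging structure for the bandwidth measurement looks roughly like the sketch below. This uses a host-memory NumPy copy as a stand-in for the benchmark's actual GPU copy; the point is the warm-up plus multi-iteration average, not the copy mechanism:

```python
import time
import numpy as np

def copy_bandwidth_gbs(size_mb: int = 256, iters: int = 10) -> float:
    """Average copy bandwidth in GB/s over `iters` timed iterations."""
    src = np.ones(size_mb * 1024 * 1024 // 8, dtype=np.float64)
    dst = np.empty_like(src)
    np.copyto(dst, src)  # warm-up copy, excluded from timing
    total = 0.0
    for _ in range(iters):
        t0 = time.perf_counter()
        np.copyto(dst, src)
        elapsed = time.perf_counter() - t0
        total += src.nbytes * 2 / elapsed / 1e9  # bytes read + bytes written
    return total / iters
```

Averaging over ten iterations smooths out one-off OS jitter that a single timed copy would pass straight through to the reported number.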
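The iteration labeling rule is simple enough to state in one line (helper name is illustrative):

```python
def label_task(name: str, iteration: int, num_iterations: int) -> str:
    # Append "(iter N)" only when multiple iterations were requested,
    # so single-run output tables are unchanged.
    return f"{name} (iter {iteration})" if num_iterations > 1 else name
```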
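The unit fix is a straight MiB-to-decimal-GB conversion, matching the `bytes / 1e9` convention the CUDA device-properties path already uses:

```python
MIB = 1048576  # bytes per mebibyte (1024 * 1024)

def mib_to_gb(mib: float) -> float:
    # GPUtil reports memory in MiB; go through bytes, then divide by 1e9
    # for decimal gigabytes. Dividing by 1024 instead yields GiB, which
    # was the old bug when the result was labeled "GB".
    return mib * MIB / 1e9

# A device reporting 81920 MiB:
print(round(mib_to_gb(81920), 1))  # 85.9
```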
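A `default` handler of the kind described can look like this minimal sketch (the string fallback for `torch.dtype` and other unknown types is an assumption about the PR's approach):

```python
import json
import numpy as np

def json_default(obj):
    """Fallback serializer passed to json.dumps(default=...)."""
    if isinstance(obj, np.integer):
        return int(obj)
    if isinstance(obj, np.floating):
        return float(obj)
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    # torch.dtype and any other non-serializable type: use its string form
    return str(obj)

result = {"n": np.int64(3), "shape": np.array([2, 2])}
print(json.dumps(result, default=json_default))  # {"n": 3, "shape": [2, 2]}
```

Without the handler, `json.dumps` raises `TypeError: Object of type int64 is not JSON serializable` on the first numpy scalar it meets.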

Test plan

  • Run python gpubench.py --memory-bandwidth and verify bandwidth output is stable across runs
  • Run python gpubench.py --disk-io on Linux and confirm fio uses libaio
  • Run python gpubench.py --num-iterations 2 --cpu-single-thread and verify iteration labels appear in table
  • Run python gpubench.py --json --cpu-single-thread and verify JSON output parses without errors
  • Compare GPU memory values in system overview against nvidia-smi output


chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7596150e95

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

- Use platform-aware fio ioengine (libaio on Linux, posixaio elsewhere)
- Average memory bandwidth over 10 iterations to reduce measurement noise
- Label benchmark results with iteration number when --num-iterations > 1
- Add JSON serializer fallback for numpy types and torch.dtype
rfxn force-pushed the fix/low-severity-improvements branch from 7596150 to c9d898a on February 19, 2026 at 23:13

rfxn commented Feb 19, 2026

@codex review

chatgpt-codex-connector bot commented

Codex Review: Didn't find any major issues. What shall we delve into next?


rfxn closed this Feb 20, 2026