Triton current scaling: avoid casting amax input#458

Open
matthiasdiener wants to merge 3 commits into dev from mdiener/triton-cast

Conversation

@matthiasdiener
Contributor

@matthiasdiener matthiasdiener commented Feb 20, 2026

Description

Suggested by @ipanfilo

Type of change

  • Documentation change (change only to the documentation, either a fix or a new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Change A
  • Change B

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@matthiasdiener matthiasdiener self-assigned this Feb 20, 2026
@matthiasdiener matthiasdiener changed the title from "Triton current scaling: avoid casting input to amax" to "Triton current scaling: avoid casting amax input" Feb 20, 2026
@matthiasdiener matthiasdiener marked this pull request as ready for review February 23, 2026 16:39

Copilot AI left a comment


Pull request overview

This PR adjusts the Triton current-scaling amax reduction so that the loaded input tile is no longer cast to float32; instead, only the reduced tile_amax is cast to float32 before the atomic/store. This targets current-scaling behavior in the Triton cast+transpose path.

Changes:

  • Remove tl.float32 cast on tl.load(...) in amax-reduction kernels.
  • Cast tile_amax to tl.float32 only at the atomic/store sites.

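Why this is safe can be illustrated with a small NumPy sketch (hypothetical data, not the actual kernel): abs() and max() introduce no rounding, so the maximum of a low-precision tile is itself one of the tile's exactly representable values, and casting only the reduced scalar to float32 gives the same amax as upcasting the whole tile first.

```python
import numpy as np

# Hypothetical float16 input tile; the real kernel loads tiles with tl.load.
rng = np.random.default_rng(0)
tile = rng.standard_normal(1024).astype(np.float16)

# Old approach: upcast the whole tile to float32 before reducing.
amax_upcast_first = np.max(np.abs(tile.astype(np.float32)))

# New approach: reduce in the input dtype, then cast only the scalar result.
amax_cast_last = np.max(np.abs(tile)).astype(np.float32)

# abs() flips a sign bit and max() selects an existing element, so neither
# rounds; both paths produce the identical amax, but the second avoids
# materializing a float32 copy of the tile.
assert amax_upcast_first == amax_cast_last
```

In the kernel, the analogous change is to drop the `.to(tl.float32)` on the `tl.load(...)` result and apply it to `tile_amax` at the `tl.atomic_max`/store site instead, saving the widened intermediate tile.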

Comment on lines +56 to +59
  a = tl.load(A_ptrs, mask=mask, other=0)
  tile_amax = tl.max(tl.abs(a))
  # accumulate tile-wise max into global amax
- tl.atomic_max(amax_ptr, tile_amax, sem='relaxed')
+ tl.atomic_max(amax_ptr, tile_amax.to(tl.float32), sem='relaxed')

Copilot AI Feb 23, 2026


PR description still lists placeholder items (“Change A”, “Change B”) and doesn’t describe the actual change (moving float32 cast from load to the reduced amax result). Please update the description to match what this PR does so reviewers/users can understand intent and impact.

Collaborator


Please update copyright date

Contributor Author


Thanks, done in 9492fc5

Collaborator

@wenchenvincent wenchenvincent left a comment


Please run level 3 CI tests before merging.

@matthiasdiener
Copy link
Contributor Author

Please run level 3 CI tests before merging.

I just started a level 3 test here: https://github.com/ROCm/TransformerEngine/actions/runs/22327178479
