
Unexpected results with Memory Coalescing #5

@khizar-anjum

Description

Hi, I am using the following system configuration:

  • Windows 10
  • Visual Studio 2019 Community
  • CUDA 10.2
  • NVIDIA Nsight Compute 2019.5.0
  • NVIDIA RTX 2060 GPU (Turing architecture)

I am following your tutorials on YouTube and used the file alignment_matrix_mul.cu in three configurations:

  • No transpose (just as we were doing it before)
  • Transpose a matrix (temp_sum += a[k * n + row] * b[col + n * k];)
  • Transpose b matrix (temp_sum += a[k + n * row] * b[col * n + k];)

We would expect the GPU to perform best when matrix a is transposed, since the memory accesses are coalesced that way, but the profiling shows that it performs better when I transpose matrix b.
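For reference, here is a minimal sketch of the inner loop in the three configurations. It assumes a square n × n multiply where row maps to threadIdx.y and col to threadIdx.x; the kernel name, the thread mapping, and the baseline index order are my assumptions and may differ slightly from the actual alignment_matrix_mul.cu:

```cuda
// Sketch only: names and the baseline index order are assumptions,
// not the exact contents of alignment_matrix_mul.cu.
__global__ void matrixMul(const int *a, const int *b, int *c, int n) {
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;

  if (row < n && col < n) {
    int temp_sum = 0;
    for (int k = 0; k < n; k++) {
      // Configuration 1 - no transpose (assumed baseline):
      temp_sum += a[row * n + k] * b[k * n + col];

      // Configuration 2 - a transposed (used instead of the line above):
      // temp_sum += a[k * n + row] * b[col + n * k];

      // Configuration 3 - b transposed (used instead of the line above):
      // temp_sum += a[k + n * row] * b[col * n + k];
    }
    c[row * n + col] = temp_sum;
  }
}
```

Only one of the three inner-loop lines is enabled per run; the matrices are assumed to have been physically transposed on the host before launch in configurations 2 and 3.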

The only thing I am doing differently here is that I am profiling the binary built by Visual Studio with the standalone Nsight Compute application rather than the built-in extension. I am also attaching the performance images I got:

I have double-checked the transpositions, and this is what I get. Could there be some other bottleneck causing these results, e.g. the cost of fetching multiple elements inside the k loop outweighing the benefit of coalesced access?
