
Conversation

@michaelneuder
Collaborator

No description provided.

@michaelneuder michaelneuder marked this pull request as draft April 22, 2021 19:32
@michaelneuder michaelneuder changed the base branch from main to parallel April 22, 2021 19:33
@michaelneuder
Collaborator Author

Currently, the initialization of the mesh on four processors seems close to done. I created new files for the MPI time loop and time integration, because it is going to require a lot of restructuring of the code. The current output shows the initialization:

 Ny =           48  divided among            4  processors ->           12  rows per processor.
 processor            0 initialized with           12 rows.
 processor            1 initialized with           12 rows.
 processor            2 initialized with           12 rows.
 imex_rk_MPI from proc 000
 processor            3 initialized with           12 rows.
 imex_rk_MPI from proc 001
 imex_rk_MPI from proc 002
 imex_rk_MPI from proc 003

From the vtk outputs we can see that the grid is split among the four processors.

[Screenshot: the grid split among the four processors]
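
For reference, here is a minimal sketch of the kind of row decomposition described above, assuming Ny divides evenly among the processors as in the 48/4 case; apart from Ny, the names are illustrative and not the actual routines in this PR.

program mesh_decomp
  use mpi
  implicit none
  integer, parameter :: Ny = 48     ! total number of rows (from the output above)
  integer :: ierr, rank, nprocs, nrows, row_start, row_end

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! even split: each processor owns a contiguous block of rows
  nrows     = Ny / nprocs
  row_start = rank * nrows + 1
  row_end   = row_start + nrows - 1

  if (rank == 0) print *, 'Ny =', Ny, 'divided among', nprocs, &
       'processors ->', nrows, 'rows per processor.'
  print *, 'processor', rank, 'initialized with', nrows, 'rows (', &
       row_start, '-', row_end, ').'

  call MPI_Finalize(ierr)
end program mesh_decomp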

@michaelneuder
Collaborator Author

I had to add a dxdydz_MPI subroutine that correctly accounts for spacing across processors. This involves sending and receiving MPI messages containing the boundary grid parameters. I checked for correctness by plotting the output of dynu after the call. See the image below.

[Figure: dynu after the dxdydz_MPI call]
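
A minimal sketch of the kind of neighbor exchange this involves, assuming each rank owns a contiguous block of rows and only needs the grid spacing of the row just across each processor boundary; the array and variable names are illustrative, not the actual dxdydz_MPI interface.

program halo_spacing
  use mpi
  implicit none
  integer, parameter :: nrows = 12           ! local rows per processor (48/4)
  double precision   :: dy_local(nrows)      ! local grid spacings
  double precision   :: dy_below, dy_above   ! spacing just across each boundary
  integer :: ierr, rank, nprocs, below, above, j

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! placeholder spacings; the real code fills these from the stretched mesh
  do j = 1, nrows
     dy_local(j) = 1.0d0 + 0.01d0*(rank*nrows + j)
  end do

  ! neighbors in the row decomposition (MPI_PROC_NULL at the physical walls)
  below = rank - 1
  above = rank + 1
  if (rank == 0)          below = MPI_PROC_NULL
  if (rank == nprocs - 1) above = MPI_PROC_NULL

  ! send my last spacing up and my first spacing down; receive the matching
  ! boundary values from the neighboring processors
  call MPI_Sendrecv(dy_local(nrows), 1, MPI_DOUBLE_PRECISION, above, 0, &
                    dy_below,        1, MPI_DOUBLE_PRECISION, below, 0, &
                    MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
  call MPI_Sendrecv(dy_local(1),     1, MPI_DOUBLE_PRECISION, below, 1, &
                    dy_above,        1, MPI_DOUBLE_PRECISION, above, 1, &
                    MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)

  call MPI_Finalize(ierr)
end program halo_spacing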

@michaelneuder
Collaborator Author

I realized that the figure below is a better representation of the previous one.

[Figure: improved representation of the previous figure]

@michaelneuder
Collaborator Author

I had to create an MPI version of y_mesh_params, which I called y_mesh_params_MPI. It handles the construction of g1, g2, g3, h1, h2, h3 from the values of dynu. I verified that the outputs are identical to those of the serial version. Visually, they look like:

[Figure: g1, g2, g3, h1, h2, h3 from y_mesh_params_MPI]

@michaelneuder
Collaborator Author

Ok, after a bit of a battle, I have the phi1 and phi2 matrices initialized on each node. This was particularly challenging because the initialization requires a linear solve of a tridiagonal system that needs all of the g1, g2, g3 values. To address this, each worker node sends its portion of the g vectors to the main node, where the solve is completed; each column of the phi1 and phi2 matrices is then sent back to the worker nodes. I confirmed the correctness of the resulting matrices. Visually, the plots below show the data split.

[Figures: phi1 and phi2 split across the four processors]
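
A minimal sketch of that gather / solve / scatter pattern, assuming an evenly divided vector and a Thomas-algorithm solve on the main node; the tridiagonal coefficients below are placeholders rather than the actual phi1/phi2 construction from g1, g2, g3, and the names are illustrative.

program gather_solve_scatter
  use mpi
  implicit none
  integer, parameter :: nloc = 12      ! rows per processor
  integer :: ierr, rank, nprocs, n, i
  double precision :: g_loc(nloc), x_loc(nloc)
  double precision, allocatable :: g(:), a(:), b(:), c(:), x(:), cp(:), dp(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  n = nloc * nprocs

  ! each worker holds its slice of the g vector (placeholder values here)
  do i = 1, nloc
     g_loc(i) = 1.0d0 + 0.1d0*(rank*nloc + i)
  end do

  ! global work arrays (only filled and used on the main node)
  allocate(g(n), a(n), b(n), c(n), x(n), cp(n), dp(n))

  ! 1) gather the distributed g values onto the main node
  call MPI_Gather(g_loc, nloc, MPI_DOUBLE_PRECISION, g, nloc, &
                  MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

  if (rank == 0) then
     ! 2) build an illustrative tridiagonal system and solve it with the
     !    Thomas algorithm (the real coefficients come from g1, g2, g3)
     a = -1.0d0;  b = 2.0d0 + g;  c = -1.0d0
     cp(1) = c(1)/b(1);  dp(1) = g(1)/b(1)
     do i = 2, n
        cp(i) = c(i) / (b(i) - a(i)*cp(i-1))
        dp(i) = (g(i) - a(i)*dp(i-1)) / (b(i) - a(i)*cp(i-1))
     end do
     x(n) = dp(n)
     do i = n-1, 1, -1
        x(i) = dp(i) - cp(i)*x(i+1)
     end do
  end if

  ! 3) scatter the solution back so each worker receives its own rows
  call MPI_Scatter(x, nloc, MPI_DOUBLE_PRECISION, x_loc, nloc, &
                   MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

  call MPI_Finalize(ierr)
end program gather_solve_scatter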

@michaelneuder
Collaborator Author

Similarly, I was able to initialize the V1 and V2 matrices in a distributed fashion with some more message passing. I confirmed that the matrices are identical to the serial version. Below are the plots for the single-node and four-node breakdowns.

[Figures: V1 and V2 on a single node and split across four nodes]

@michaelneuder
Collaborator Author

Added support for the wall derivatives. Since these are calculated at the top and bottom of the grid, the bottom derivatives are held by the first processor, and the top derivatives are held by the last processor. This is simple enough to implement.

The figures for the four derivatives on a single node vs. split across the first and last processors are below.

[Figures: the four wall derivatives on a single node vs. on the first and last processors]
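
A minimal sketch of that ownership, with the first processor handling the bottom wall and the last processor handling the top wall; the one-sided second-order stencil and all names are illustrative assumptions, not the code's actual formulas.

program wall_derivs
  use mpi
  implicit none
  integer, parameter :: nloc = 12             ! local rows per processor
  double precision, parameter :: dy = 0.02d0  ! placeholder uniform spacing
  double precision :: T(nloc)                 ! local column of temperature values
  double precision :: dTdy_bot, dTdy_top
  integer :: ierr, rank, nprocs, j

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! placeholder temperature data, linear in y so dT/dy = 1 everywhere
  do j = 1, nloc
     T(j) = dble(rank*nloc + j) * dy
  end do

  if (rank == 0) then
     ! the bottom-wall derivative lives on the first processor
     ! (second-order one-sided difference, assumed for illustration)
     dTdy_bot = (-3.0d0*T(1) + 4.0d0*T(2) - T(3)) / (2.0d0*dy)
     print *, 'bottom wall dT/dy =', dTdy_bot
  end if
  if (rank == nprocs - 1) then
     ! the top-wall derivative lives on the last processor
     dTdy_top = (3.0d0*T(nloc) - 4.0d0*T(nloc-1) + T(nloc-2)) / (2.0d0*dy)
     print *, 'top wall dT/dy =', dTdy_top
  end if

  call MPI_Finalize(ierr)
end program wall_derivs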

@michaelneuder
Collaborator Author

Have been making great progress. At this point we are at the end of stage 1, with all of the variables correctly calculated across the four nodes. This can be seen in the following images.

[Figures: the end-of-stage-1 variables on a single node and across the four nodes]

@michaelneuder
Collaborator Author

Ok! Now we are through all three stages and the solution updates. The following images show T and Phi from the distributed-memory implementation after the update has been applied.

[Figures: T and Phi after the update on the distributed-memory implementation]

@michaelneuder
Collaborator Author

And finally, the ux and uy updates at the end of the step are complete!

[Figures: ux and uy after the final updates]

@michaelneuder
Collaborator Author

Finished the MPI version, and I think everything is correct. Below are a video and screenshots of the temperature profile. I still need to verify the Nusselt number from the distributed vtk files.

[Video and screenshots: temperature profile from the MPI run]
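
As a possible cross-check for the Nusselt-number verification, here is a minimal sketch of averaging the wall heat flux directly from the distributed fields with an MPI reduction. The definition Nu = -<dT/dy> at the wall (unit temperature difference and depth) is an assumption about the nondimensionalization, and the grid sizes and names are placeholders; the real check would read T from the vtk files.

program nusselt_check
  use mpi
  implicit none
  integer, parameter :: nx = 64, nloc = 12    ! placeholder grid sizes
  double precision, parameter :: dy = 0.02d0  ! placeholder wall spacing
  double precision :: T(nx, nloc), local_flux, Nu
  integer :: ierr, rank, nprocs, i, j

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! placeholder conductive profile; the real check would load the vtk output
  do j = 1, nloc
     do i = 1, nx
        T(i, j) = 1.0d0 - dble(rank*nloc + j - 1)*dy
     end do
  end do

  ! only the rank that owns the bottom wall contributes the wall flux
  local_flux = 0.0d0
  if (rank == 0) then
     do i = 1, nx
        local_flux = local_flux - (T(i, 2) - T(i, 1)) / dy   ! -dT/dy at the wall
     end do
     local_flux = local_flux / dble(nx)
  end if

  ! combine across ranks; with unit Delta T and depth, Nu is the averaged flux
  call MPI_Allreduce(local_flux, Nu, 1, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'Nu estimate from the bottom wall:', Nu

  call MPI_Finalize(ierr)
end program nusselt_check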

@michaelneuder michaelneuder changed the base branch from parallel to parallel_project May 6, 2021 12:57
@michaelneuder michaelneuder changed the base branch from parallel_project to parallel May 6, 2021 12:58