DPS921/OpenACC vs OpenMP Comparison

== Progress ==
- Nov 9, 2020: Added project description
- Nov 13, 2020: Determined the content sections to be discussed
- Nov 18, 2020: Installed the required compiler and successfully compiled OpenACC code
- Nov 19, 2020: Added MPI to the discussion
= OpenACC =
== Installation ==
Originally, OpenACC compilation was only supported by the PGI compiler, which requires an expensive subscription; however, new options have become available in recent years.
=== Nvidia HPC SDK[https://developer.nvidia.com/hpc-sdk] ===
The compiler feedback tells you which loops were parallelized, with source line numbers for reference.
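
As a quick check that the toolchain works, a small OpenACC program can be compiled with the SDK's C compiler. The sketch below is only illustrative (the file name and kernel are our own, not an official sample); the -acc flag enables OpenACC and -Minfo=accel prints the feedback described above.

<source lang="c">
// saxpy.c -- a minimal OpenACC test program (illustrative, not an official sample).
// Assumed compile line with the HPC SDK:
//   nvc -acc -Minfo=accel saxpy.c -o saxpy
// The -Minfo=accel report names each loop that was turned into a kernel,
// along with its source line number.
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

    // This loop should show up in the feedback as a generated kernel.
    #pragma acc parallel loop copyin(x) copy(y)
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[%d] = %f\n", N - 1, y[N - 1]);
    return 0;
}
</source>
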
For Windows users who would like to try this SDK, WSL2 is one option. At the moment, however, WSL2 does not fully support the SDK, because most virtualization technologies cannot give a virtualized system direct access to the graphics card. Nvidia has released a preview driver[https://developer.nvidia.com/cuda/wsl] that allows the Linux subsystem to recognize the graphics cards installed on the machine; it lets WSL2 users compile programs with the CUDA toolkit, but not with the HPC SDK yet.

==== Nvidia CUDA WSL2 ====
We are not going to go over how to deal with CUDA on WSL2. We have included the installation guide for using CUDA on WSL2 here for anyone who is interested [https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-cuda]. Note that you need to be registered in the Windows Insider Program to get one of the preview Win10 builds.
=== GCC[https://gcc.gnu.org/wiki/OpenACC] ===
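GCC can also compile OpenACC directives with the -fopenacc flag; offloading to Nvidia GPUs additionally needs the nvptx offload compiler (on Debian/Ubuntu this is, to the best of our knowledge, packaged as gcc-offload-nvptx). Below is a minimal sketch with an assumed file name and compile line, not a prescribed setup.

<source lang="c">
// pi.c -- a minimal OpenACC example (illustrative file name) approximating pi
// with the midpoint rule.
// Assumed compile line once GCC offloading support is installed:
//   gcc -fopenacc -foffload=nvptx-none -O2 pi.c -o pi
// Without offload support, -fopenacc still compiles the directives and the
// loop simply runs on the host.
#include <stdio.h>

#define N 1000000

int main(void) {
    double sum = 0.0;

    // Each iteration evaluates one midpoint of 4/(1+x^2); the partial values
    // are combined with an OpenACC reduction.
    #pragma acc parallel loop reduction(+:sum)
    for (int i = 0; i < N; i++) {
        double x = (i + 0.5) / N;
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi is approximately %.10f\n", sum / N);
    return 0;
}
</source>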
 
= Performance Comparison =
 
== Jacobi Iteration ==
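
The kernel used for the comparison is the classic Jacobi relaxation on a 2D grid. The sketch below is only an outline under our own assumptions (the grid size, iteration count, and variable names are illustrative); it shows the sweep expressed with OpenACC directives, and the comments note the OpenMP equivalent.

<source lang="c">
// jacobi_sketch.c -- an illustrative outline of the Jacobi iteration kernel
// (grid size, iteration count, and variable names are assumptions).
#include <stdio.h>
#include <stdlib.h>

#define N     1024
#define ITERS 100

int main(void) {
    float *A    = calloc((size_t)N * N, sizeof(float));
    float *Anew = calloc((size_t)N * N, sizeof(float));

    // Fixed boundary values along the top edge.
    for (int i = 0; i < N; i++) A[i] = Anew[i] = 1.0f;

    // Keep both grids resident on the accelerator for all iterations so the
    // data is not transferred back and forth on every sweep.
    #pragma acc data copy(A[0:N*N]) create(Anew[0:N*N])
    for (int iter = 0; iter < ITERS; iter++) {
        // Jacobi sweep.  The OpenMP CPU version would put
        //   #pragma omp parallel for collapse(2)
        // on the same loop nest instead of the OpenACC directive.
        #pragma acc parallel loop collapse(2) present(A[0:N*N], Anew[0:N*N])
        for (int j = 1; j < N - 1; j++)
            for (int i = 1; i < N - 1; i++)
                Anew[j*N + i] = 0.25f * (A[j*N + i + 1] + A[j*N + i - 1]
                                       + A[(j+1)*N + i] + A[(j-1)*N + i]);

        // Copy the new grid back into A for the next sweep.
        #pragma acc parallel loop collapse(2) present(A[0:N*N], Anew[0:N*N])
        for (int j = 1; j < N - 1; j++)
            for (int i = 1; i < N - 1; i++)
                A[j*N + i] = Anew[j*N + i];
    }

    printf("A[center] = %f\n", A[(N/2)*N + N/2]);
    free(A);
    free(Anew);
    return 0;
}
</source>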
 
= Collaboration =
 
== OpenACC with OpenMP ==
OpenMP and OpenACC can be used together in the same program: a typical division of labour is to keep the CPU cores busy with OpenMP while OpenACC offloads the most compute-intensive loops to the accelerator.
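
As a minimal sketch of this idea (the 50/50 split and the kernel are our own illustration, not a prescribed pattern), the loop below is divided so that OpenMP threads handle one half of the array on the CPU while OpenACC offloads the other half. With the Nvidia HPC SDK, both models can be enabled on the same compile line, e.g. nvc -acc -mp (an assumption to check against your compiler's documentation).

<source lang="c">
// hybrid_sketch.c -- an illustrative sketch of OpenMP and OpenACC in one
// program: the first half of the work runs on CPU threads, the second half
// is offloaded to the accelerator.  The 50/50 split is arbitrary.
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) x[i] = (float)i;

    int half = N / 2;

    // CPU portion: OpenMP worksharing across host threads.
    #pragma omp parallel for
    for (int i = 0; i < half; i++)
        y[i] = 2.0f * x[i];

    // GPU portion: OpenACC offloads the remaining elements.
    #pragma acc parallel loop copyin(x[half:N-half]) copyout(y[half:N-half])
    for (int i = half; i < N; i++)
        y[i] = 2.0f * x[i];

    printf("y[0] = %f, y[%d] = %f\n", y[0], N - 1, y[N - 1]);
    return 0;
}
</source>

Note that in this sequential form the two halves do not actually overlap in time; overlapping them would require OpenACC async queues or running the offloaded region from a dedicated OpenMP thread.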
 
== OpenACC with MPI ==
As we learned, MPI is used to allow communication and data transfer between processes during parallel execution. In the case of multiple accelerators, one way to use the two together is to use MPI to communicate between the different accelerators: each MPI process drives its own accelerator and exchanges data with the others.
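
A common pattern, sketched below under our own assumptions (the variable names and the summation kernel are illustrative), is to bind each MPI rank to one GPU through the OpenACC runtime API and then combine the per-rank results with an MPI reduction. acc_get_num_devices and acc_set_device_num are part of the OpenACC runtime API declared in openacc.h; acc_device_nvidia assumes Nvidia devices. An assumed compile line with the HPC SDK's MPI wrapper would be something like mpicc -acc mpi_acc_sketch.c; adjust to your MPI installation.

<source lang="c">
// mpi_acc_sketch.c -- an illustrative sketch of MPI + OpenACC: every rank
// binds to one accelerator, computes a partial result there, and the partial
// results are combined with MPI_Reduce.
#include <stdio.h>
#include <mpi.h>
#include <openacc.h>

#define N 1000000

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Bind this rank to one of the GPUs on the node (round-robin mapping).
    int ngpus = acc_get_num_devices(acc_device_nvidia);
    if (ngpus > 0)
        acc_set_device_num(rank % ngpus, acc_device_nvidia);

    // Each rank sums its own slice of the index range [0, N).
    long chunk = N / size;
    long begin = (long)rank * chunk;
    long end   = (rank == size - 1) ? N : begin + chunk;

    double local = 0.0, total = 0.0;
    #pragma acc parallel loop reduction(+:local)
    for (long i = begin; i < end; i++)
        local += (double)i;

    // Combine the per-rank partial sums on rank 0.
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %.0f (expected %.0f)\n", total, (double)N * (N - 1) / 2.0);

    MPI_Finalize();
    return 0;
}
</source>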