GPU621/To Be Announced
Revision as of 14:33, 18 November 2020
OpenMP Device Offloading
OpenMP 4.0/4.5 introduced support for heterogeneous systems such as accelerators and GPUs. The purpose of this overview is to demonstrate OpenMP's device constructs, which are used to offload data and code from a host device (a multicore CPU) to a target device's environment (a GPU or accelerator). We will demonstrate how to manage the device's data environment, parallelism, and work-sharing; review how data is mapped from the host data environment to the device data environment; and try different compilers that support OpenMP offloading, such as LLVM/Clang and GCC.
Group Members
1. Elena Sakhnovitch
2. Nathan Olah
3. Yunseon Lee
Progress
Differences between CPUs and GPUs for parallel applications (Yunseon)
Latest GPU specs (Yunseon)
AMD:
NVIDIA:
Means of parallelisation on GPUs
A short introduction to each, with its advantages and disadvantages:
CUDA (Yunseon)
OpenMP (Elena)
HIP (Elena)
OpenCL (Nathan)
Instructions
How to set up the compiler and target offloading on Windows with an NVIDIA GPU: (Nathan)
How to set up the compiler and target offloading on Linux with an AMD GPU: (Elena)
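As a rough sketch of what such a setup involves, the invocations below show typical offloading flags for Clang and GCC. The exact flags and required toolchains (CUDA for NVIDIA targets, an offload-enabled GCC or ROCm for AMD targets) vary by compiler version and platform, so treat these as assumptions to verify against your own installation.

```shell
# Clang/LLVM targeting an NVIDIA GPU (requires a CUDA installation):
clang++ -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda vector_add.cpp -o vector_add

# GCC targeting an NVIDIA GPU (requires GCC built with the nvptx offload backend):
g++ -fopenmp -foffload=nvptx-none vector_add.cpp -o vector_add

# GCC targeting an AMD GPU (requires the amdgcn offload backend):
g++ -fopenmp -foffload=amdgcn-amdhsa vector_add.cpp -o vector_add
```

If the compiler was built without the corresponding offload backend, these commands still produce a working binary, but target regions run on the host CPU instead of the GPU.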