OpenMP Device Offloading
OpenMP 4.0/4.5 introduced support for heterogeneous systems that combine a host CPU with accelerators such as GPUs. The purpose of this overview is to demonstrate OpenMP's device constructs, which offload data and code from a host device (a multicore CPU) to a target device's environment (a GPU or other accelerator). We will demonstrate how to manage the device's data environment, parallelism, and work-sharing; review how data is mapped from the host data environment to the device data environment; and try different compilers that support OpenMP offloading, such as LLVM/Clang and GCC.
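For context, the sketch below shows the kind of construct this overview is concerned with: a vector-addition loop offloaded to the default device with explicit map clauses. The array names, the problem size, and the assumption of a compiler configured with offloading support are illustrative choices, not details taken from the project.

 // Minimal OpenMP device-offloading sketch (illustrative only; assumes a
 // compiler configured with offloading support for an NVIDIA or AMD target).
 #include <cstdio>
 #include <omp.h>
 
 int main() {
     const int N = 1024;
     float a[N], b[N], c[N];
     for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
 
     // Offload the loop to the default device: a and b are copied to the
     // device, and c is copied back to the host when the region ends.
     #pragma omp target teams distribute parallel for \
             map(to: a[0:N], b[0:N]) map(from: c[0:N])
     for (int i = 0; i < N; ++i)
         c[i] = a[i] + b[i];
 
     printf("c[0] = %.1f, devices visible: %d\n", c[0], omp_get_num_devices());
     return 0;
 }

If no offload device is available at run time, the target region simply executes on the host, which is one of the trade-offs the sections below weigh against CUDA, HIP, and OpenCL.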
Group Members
1. Elena Sakhnovitch
2. Nathan Olah
3. Yunseon Lee
Progress
Differences between CPUs and GPUs for parallel applications (Yunseon)
Latest GPU specs (Yunseon)
AMD:
NVIDIA:
Means of parallelisation on GPUs
A short introduction to each of the following, with their advantages and disadvantages:
CUDA (Yunseon)
OpenMP (Elena)
HIP (Elena)
OpenCL (Nathan)
Instructions
How to set up the compiler and target offloading on Windows with an NVIDIA GPU (Nathan)
How to set up the compiler and target offloading on Linux with an AMD GPU (Elena); example build commands for both setups are sketched below.
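As a rough starting point for both items above, the command lines below show the general shape of an offloading build once a suitable toolchain is installed. The file name offload.cpp is hypothetical, and the target triples and the gfx906 architecture are assumptions; the correct values depend on the compiler version, driver stack, and GPU model, and the exact Windows and Linux setup steps will be covered in the sections themselves.

 # Clang targeting an NVIDIA GPU (needs a Clang build with NVPTX offloading and the CUDA toolkit installed)
 clang++ -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda offload.cpp -o offload
 
 # GCC built with the nvptx offloading backend
 g++ -fopenmp -foffload=nvptx-none offload.cpp -o offload
 
 # Clang/ROCm (e.g. AOMP) targeting an AMD GPU; -march must match the card, e.g. gfx906
 clang++ -fopenmp -fopenmp-targets=amdgcn-amd-amdhsa \
         -Xopenmp-target=amdgcn-amd-amdhsa -march=gfx906 offload.cpp -o offload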