GPU621/Fighting Mongooses
While the C++ standard's parallel solutions are still considered 'experimental', they are largely functional and, in most cases, comparable to TBB in terms of efficiency.
 
==STL==
The [https://en.wikipedia.org/wiki/Standard_Template_Library Standard Template Library (STL)] is a software library for the C++ programming language that influenced many parts of the C++ Standard Library. It provides four components: algorithms, containers, functions, and iterators. As early as 2006, parallelization was being pushed for inclusion in the C++ standard, with some success (more on this later).
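As a sketch of where that push has led, the Parallelism TS (the 'experimental' parallel algorithms referred to above, targeted at C++17) adds an execution-policy argument to the familiar STL algorithms. The header and namespace names below follow the TS draft and are not yet uniformly available across compilers:
<syntaxhighlight lang="cpp">
// Parallelism TS sketch: names follow the TS draft (std::experimental::parallel)
// and may differ or be missing depending on the compiler/library in use.
#include <experimental/algorithm>
#include <experimental/execution_policy>
#include <vector>

int main() {
    std::vector<int> v(1000000, 1);

    // Same call shape as std::for_each, plus an execution policy (par) that
    // allows the implementation to run the loop body in parallel.
    std::experimental::parallel::for_each(std::experimental::parallel::par,
                                          v.begin(), v.end(),
                                          [](int& x) { x *= 2; });
    return 0;
}
</syntaxhighlight>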
 
==TBB==
TBB (Threading Building Blocks) is a high-level, general-purpose, feature-rich C++ template library for implementing task-based parallelism. It includes a variety of containers and algorithms that execute in parallel and is designed to work without requiring any changes to the compiler. TBB uses task parallelism; vectorization is not supported.
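For reference, a minimal sketch of the TBB style of loop parallelism (assuming TBB's headers are on the include path; the loop body is illustrative only):
<syntaxhighlight lang="cpp">
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>

int main() {
    std::vector<float> data(1000000, 1.0f);

    // TBB splits the blocked_range into sub-ranges and schedules them as tasks
    // on the worker threads that the library manages internally.
    tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= 2.0f;
        });
    return 0;
}
</syntaxhighlight>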
 
==BOOST==
[http://www.boost.org/ Boost] is a collection of free, peer-reviewed, portable C++ source libraries that work well with the C++ Standard Library, several of which (such as Boost.Thread) target multithreading. Since 2006 an intimate week-long annual conference related to Boost called [http://cppnow.org/ C++ Now] has been held in Aspen, Colorado each May. Boost has been a participant in the annual [https://developers.google.com/open-source/soc/?csw=1 Google Summer of Code] since 2007.
 
==STD(PPL) – since Visual Studio 2015==
The Parallel Patterns Library (PPL) provides general-purpose containers and algorithms for performing fine-grained parallelism. The PPL enables imperative data parallelism by providing parallel algorithms that distribute computations on collections or on sets of data across computing resources. It also enables task parallelism by providing task objects that distribute multiple independent operations across computing resources. To use PPL classes and functions, simply include the ppl.h header file.
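A minimal sketch of PPL loop parallelism, assuming the Visual C++ ppl.h header (the loop body is illustrative only):
<syntaxhighlight lang="cpp">
#include <ppl.h>
#include <vector>

int main() {
    std::vector<int> v(1000000);

    // concurrency::parallel_for distributes the iteration space across the
    // cores made available by the Concurrency Runtime.
    concurrency::parallel_for(std::size_t(0), v.size(), [&](std::size_t i) {
        v[i] = static_cast<int>(i % 7);
    });
    return 0;
}
</syntaxhighlight>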
Visual C++ provides the following technologies to help you create multi-threaded programs that take advantage of multiple cores and use the GPU for general purpose programming:
* '''Auto-Parallelization and Auto-Vectorization''' - Compiler optimizations that speed up code.
* '''Concurrency Runtime''' - Classes that simplify the writing of programs that use data parallelism or task parallelism.
* '''C++ AMP (C++ Accelerated Massive Parallelism)''' - Classes that enable the use of modern graphics processors for general purpose programming.
* '''Multithreading Support for Older Code (Visual C++)''' - Older technologies that may be useful in older libraries and applications. For new apps, use the Concurrency Runtime or C++ AMP.

[[File:cRuntime.png]]

==Task Scheduler==
The Task Scheduler schedules and coordinates tasks at run time. It is cooperative and uses a work-stealing algorithm to achieve maximum usage of processing resources. The Concurrency Runtime provides a default scheduler so that you do not have to manage infrastructure details; however, to meet the quality of service needs of your application, you can also provide your own scheduling policy or associate specific schedulers with specific tasks.

==Resource Manager==
The role of the Resource Manager is to manage computing resources, such as processors and memory. The Resource Manager responds to workloads as they change at runtime by assigning resources to where they can be most effective. It serves as an abstraction over computing resources and primarily interacts with the Task Scheduler. Although you can use the Resource Manager to fine-tune the performance of your libraries and applications, you typically use the functionality provided by the Parallel Patterns Library, the Agents Library, and the Task Scheduler; these libraries use the Resource Manager to dynamically rebalance resources as workloads change.

==Asynchronous Agents Library==
Not relevant to this comparison (.NET Framework).
 
==Auto-Parallelizer==
Multiple example loops can be found [https://msdn.microsoft.com/en-ca/library/hh872235.aspx here].
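A minimal sketch of the kind of loop the auto-parallelizer targets (the pragma is only a hint; the compiler decides whether the loop is actually parallelized, and /Qpar-report shows its decision):
<syntaxhighlight lang="cpp">
// Build with the auto-parallelizer enabled, e.g.: cl /O2 /Qpar /Qpar-report:1 loops.cpp
static int a[100000], b[100000];

int main() {
    #pragma loop(hint_parallel(4))   // hint: split this loop across up to 4 threads
    for (int i = 0; i < 100000; ++i)
        a[i] = b[i] + 5;
    return a[0];
}
</syntaxhighlight>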
==Concurrency Runtime==
The Concurrency Runtime for C++ helps you write robust, scalable, and responsive parallel applications. It raises the level of abstraction so that you do not have to manage the infrastructure details that are related to concurrency. You can also use it to specify scheduling policies that meet the quality of service demands of your applications. Use the resources listed in the references below to help you start working with the Concurrency Runtime.
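As a sketch of the scheduling-policy idea (the concrete thread counts here are illustrative assumptions), a scheduler can be created from a SchedulerPolicy and attached to the current context before running PPL work:
<syntaxhighlight lang="cpp">
#include <concrt.h>
#include <ppl.h>
#include <vector>

int main() {
    using namespace concurrency;

    // Policy with two key/value pairs: use at least 2 and at most 4 threads.
    SchedulerPolicy policy(2, MinConcurrency, 2, MaxConcurrency, 4);
    CurrentScheduler::Create(policy);      // attach a scheduler built from the policy

    std::vector<int> v(100000);
    parallel_for(std::size_t(0), v.size(), [&](std::size_t i) {
        v[i] = static_cast<int>(i);        // work runs under the custom scheduler
    });

    CurrentScheduler::Detach();            // release the scheduler when done
    return 0;
}
</syntaxhighlight>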
==C++ AMP (C++ Accelerated Massive Parallelism)==
C++ AMP accelerates the execution of your C++ code by taking advantage of the data-parallel hardware that's commonly present as a graphics processing unit (GPU) on a discrete graphics card. The C++ AMP programming model includes support for multidimensional arrays, indexing, memory transfer, and tiling. It also includes a mathematical function library. You can use C++ AMP language extensions to control how data is moved from the CPU to the GPU and back.
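A minimal sketch of the C++ AMP model described above, assuming the amp.h header: an array_view wraps host data, parallel_for_each runs a restrict(amp) lambda on the accelerator, and synchronize copies the results back:
<syntaxhighlight lang="cpp">
#include <amp.h>
#include <vector>
using namespace concurrency;

int main() {
    std::vector<int> v(1024);
    for (int i = 0; i < 1024; ++i) v[i] = i;

    array_view<int, 1> av(1024, v);        // wraps the CPU data for GPU access
    parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
        av[idx] *= 2;                      // executed on the data-parallel hardware
    });
    av.synchronize();                      // copy the results back into v
    return 0;
}
</syntaxhighlight>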
 
==AMP Tiling==
Tiling divides threads into equal rectangular subsets or tiles. If you use an appropriate tile size and tiled algorithm, you can get even more acceleration from your C++ AMP code. The basic components of tiling are:
* tile_static variables. Access to data in tile_static memory can be significantly faster than access to data in the global space (array or array_view objects).
* The [https://msdn.microsoft.com/en-ca/library/hh308384.aspx tile_barrier::wait Method]. A call to tile_barrier::wait suspends execution of the current thread until all of the threads in the same tile reach the call to tile_barrier::wait.
* Local and global indexing. You have access to the index of the thread relative to the entire array_view or array object and the index relative to the tile.
* The tiled_extent Class and tiled_index Class. You use a tiled_extent object instead of an extent object, and a tiled_index object instead of an index object, in the parallel_for_each call.
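A sketch putting those components together (the tile size and the per-tile reduction are illustrative choices, not taken from the course material): each tile stages its data in tile_static memory, synchronizes on the tile barrier, and then one thread per tile writes a result using the tile index:
<syntaxhighlight lang="cpp">
#include <amp.h>
#include <vector>
using namespace concurrency;

int main() {
    const int size = 1024;
    static const int TS = 64;                        // illustrative tile size
    std::vector<int> data(size, 1);
    std::vector<int> sums(size / TS);

    array_view<const int, 1> av(size, data);
    array_view<int, 1> avSums(size / TS, sums);
    avSums.discard_data();                           // output only, skip the copy-in

    parallel_for_each(av.extent.tile<TS>(), [=](tiled_index<TS> tidx) restrict(amp) {
        tile_static int local[TS];                   // fast per-tile memory
        local[tidx.local[0]] = av[tidx.global];      // local vs. global indexing
        tidx.barrier.wait();                         // all threads in the tile sync here

        if (tidx.local[0] == 0) {                    // one thread per tile reduces it
            int sum = 0;
            for (int i = 0; i < TS; ++i) sum += local[i];
            avSums[tidx.tile[0]] = sum;
        }
    });
    avSums.synchronize();                            // copy the tile sums back to the host
    return 0;
}
</syntaxhighlight>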
==A note on AMP==
AMP does not compile properly on the Visual Studio 2015 platform; it must be run using libraries from before VS2015. Tiling also does not appear to be supported on the Intel compiler.

==A simple for_each Comparison==
[[File:ForEachCode.PNG]]
[[File:ForEachTable.PNG]]
[[File:ForEachChart.PNG]]
[[File:PBTResults.png]]
==Comparing STL/PPL to TBB: Sorting Algorithm==
The key difference in the code is that TBB's sort does not have to operate using random access iterators, while STL's parallel sorting solution (and its serial solution) does. If the TBB sort is run on a vector instead of a plain array, the timings become more even.
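A minimal sketch of how the two parallel sorts are invoked on the same data (timing code omitted; a std::vector is used per the note above, and both the TBB and PPL headers are assumed to be available):
<syntaxhighlight lang="cpp">
#include <tbb/parallel_sort.h>
#include <ppl.h>
#include <vector>
#include <random>

int main() {
    std::vector<int> a(1000000);
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 1000000);
    for (auto& x : a) x = dist(gen);
    std::vector<int> b = a;                          // identical input for both runs

    tbb::parallel_sort(a.begin(), a.end());          // TBB's parallel sort
    concurrency::parallel_sort(b.begin(), b.end());  // PPL's parallel sort
    return 0;
}
</syntaxhighlight>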
 
==Conclusion==
The conclusion to draw when comparing TBB to STL, in their current states, is that you should ideally use TBB over STL. STL parallelism is still very experimental and unrefined, and will likely remain that way until the release of C++17. Following C++17's release, however, the native parallel library solution will likely be the ideal road to follow.
 
 
==References==
 
http://www.boost.org/
 
https://scs.senecac.on.ca/~gpu621/pages/content/tbb__.html
 
Auto-Parallelization and Auto-Vectorization: https://msdn.microsoft.com/en-ca/library/hh872235.aspx
 
Concurrency Runtime:
https://msdn.microsoft.com/en-ca/library/dd504870.aspx
 
Accelerated Massive Parallelism (AMP): https://msdn.microsoft.com/en-ca/library/hh265137.aspx
 
Using Lambdas, Function objects and Restricted functions:
https://msdn.microsoft.com/en-ca/library/hh873133.aspx
 
Using Tiles:
https://msdn.microsoft.com/en-ca/library/hh873135.aspx
 
Concurrency Runtime Overview:
https://msdn.microsoft.com/en-us/library/ee207192.aspx