Team Darth Vector

1,506 bytes added, 17:09, 17 December 2017
'''GPU621 Darth Vector: C++11 STL vs TBB Case Studies'''
''Join me, and together we can fork the problem as master and thread''
After a long period of engineering and development, the STL obtained final approval in July 1994 to become part of the C++ language standard.
==A Comparison from STL==
In general, most of STL is intended for use within a serial environment. This changes, however, with C++17's introduction of parallel algorithms.
<u>'''Algorithms'''</u>
The STL provides generic algorithms such as sort, find, and count_if that operate over iterator ranges; prior to C++17, these all execute serially.
<u>'''Memory Allocator'''</u>
The use of a memory allocator allows more precise control over how dynamic memory is created and used. It is the default memory allocation method (i.e. not the "new" keyword) for all STL containers. Allocators let a container's memory scale, rather than managing it directly with new and delete. They are defined within the header file "'''<memory>'''" and are coded as:
<pre>
#include <memory>
void foo() { std::allocator<int> name; }
</pre>
==A Comparison from TBB==
===Containers===
'''<u>concurrent_queue</u>''' : This is the concurrent version of the STL container queue. It supports first-in-first-out data storage like its STL counterpart, and multiple threads may simultaneously push and pop elements. concurrent_queue does NOT support front() or back() in concurrent operations (the front could change while being accessed). It also supports iterators, but they are slow and intended only for debugging a program. This is defined within the header "'''tbb/concurrent_queue.h'''" and is coded as: <pre>
#include <tbb/concurrent_queue.h>
//....//
tbb::concurrent_queue<type> name;
</pre>
'''<u>concurrent_vector</u>''' : This is a container class for vectors with concurrent (parallel) support. These vectors do not support insert or erase operations, but do support operations done by multiple threads, such as push_back(). Note that once elements are inserted, they cannot be removed without calling the clear() member function, which removes every element in the container. When storing elements, the container does not guarantee that they will sit in consecutive addresses in memory. This is defined within the header "'''tbb/concurrent_vector.h'''" and is coded as: <pre>
#include <tbb/concurrent_vector.h>
//...//
tbb::concurrent_vector<type> name;
</pre>
'''<u>concurrent_hash_map</u>''' : A container class that supports hashing in parallel. The keys are not ordered, and there is at most one element for each key. Defined within "'''tbb/concurrent_hash_map.h'''"
===Algorithms===
<u>'''parallel_for:'''</u> Provides concurrent support for for loops. This allows data to be divided up into chunks that each thread can work on. The code is defined in "'''tbb/parallel_for.h'''" and takes the template of:
<pre>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
//....//
tbb::parallel_for(tbb::blocked_range<size_t>(0, n),
    [&](const tbb::blocked_range<size_t>& range) {
        for (size_t i = range.begin(); i != range.end(); ++i)
            data[i] = work(i); // n, data, and work are placeholders
    });
</pre>
 
<u>'''parallel_invoke:'''</u> Provides support for calling the functions passed as its arguments in parallel. It is defined within the header "'''tbb/parallel_invoke.h'''" and is coded as: <pre>
#include <tbb/parallel_invoke.h>
//....//
tbb::parallel_invoke(functionA, functionB); // functionA/functionB are placeholders
</pre>
===Allocators===
Handles memory allocation for concurrent containers; in particular, these are used to help resolve issues that affect parallel programming. They are called '''scalable_allocator<type>''' (defined in "'''tbb/scalable_allocator.h'''") and '''cache_aligned_allocator<type>''' (defined in "'''tbb/cache_aligned_allocator.h'''").
==TBB Memory Allocation & Fixing Issues from Parallel Programming==
TBB provides memory allocation just as STL does via the '''std::allocator''' template class. Where TBB's allocator improves, though, is through its expanded support for common issues experienced in parallel programming. These allocators are called '''scalable_allocator<type>''' and '''cache_aligned_allocator<type>''', and they ensure that issues like '''Scalability''' and '''False Sharing''' are reduced.
===False Sharing===
As you may have seen from the workshop, "False Sharing" is a major performance hit that can occur in parallel when data sitting on the same cache line in memory is used by two threads. When threads attempt operations on the same cache line, they compete for access and move the cache line around. Moving the line costs a significant number of clock cycles, which causes the performance problem. Through TBB, Intel created an allocator known as '''cache_aligned_allocator<type>'''. When used, objects it allocates will not encounter false sharing with one another. Note that if only 1 object is allocated by this allocator, false sharing may still occur. For compatibility's sake (so that programmers can simply use "find and replace"), the cache_aligned_allocator takes the same arguments as the STL allocator. If you wish to use it with STL containers, you only need to set the second template argument to the cache_aligned_allocator.
The following is an example provided by Intel to demonstrate this:
<pre>
std::vector<int, cache_aligned_allocator<int>> v;
</pre>
===Scaling Issue===
When working in parallel, several threads may be required to access shared memory, which causes a performance slowdown: a single thread is forced to allocate memory while the other threads are required to wait. Intel describes this issue in parallel programming as '''Scalability''' and answers it with '''scalable_allocator<type>''', which permits concurrent memory allocation and is considered ideal for "''programs that rapidly allocate and free memory''".
==Lock Convoying Problem==
===What is a Lock?===
A Lock (also called a "mutex") is a method for programmers to secure code that, when executed in parallel, could cause multiple threads to fight for a resource/container during some operation. When threads work in parallel to complete a task with containers, there is no indication of when each thread reaches the container and needs to perform an operation on it. This causes problems when multiple threads access the same place. When doing an insertion on a container with threads, we must ensure only 1 thread is capable of pushing to it, or else threads may fight for control. By "Locking" the container, we ensure only 1 thread accesses it at any given time.
To use a lock, your program must be working in parallel (e.g. with #include <thread>) and should be completing something in parallel. You can find the C++11 locks in #include <mutex>.
[[File:Gpulockwhat.PNG |thumb|center|700px| Mutex Example]]
Note that there can be problems with locks. If a lock is acquired but never released, all other threads are forced to wait, which may cause performance issues. Another problem is called "Dead Locking", where each thread is waiting for another to unlock (and vice versa), and the program is stuck waiting indefinitely.
Locks can solve the above issue but introduce a significant performance cost, as the threads are forced to wait for each other before continuing. This performance hit is known as '''Lock Convoying'''.
[[File:DarthVector ThreadLock.PNG |thumb|center|600px| Performance issues inside STL]]
===Lock Convoying in TBB===
TBB attempts to mitigate the performance cost of accessing or operating on a container from parallel code through its own containers, such as concurrent_vector.
Through '''concurrent_vector''', every time an element is pushed, the index location of that element is returned. TBB promises that a pushed element will always stay at the same location, even if the size of the vector changes in memory. With a standard vector, when the size changes, the data is copied over; if any threads are traversing the vector when this happens, their iterators may no longer be valid. This support goes further, so that multiple threads can iterate through the container while another thread is growing it. An interesting catch, though, is that an iterating thread may reach objects that are still being constructed, so construction and access must be kept synchronized.  [[File:Gpuconcur.PNG |thumb|center|600px| concurrent_vector use with multiple threads]]
TBB also provides its own versions of the mutex, such as '''spin_mutex''', for when mutual exclusion is still required.
You can find more information on convoying and containers here: https://software.intel.com/en-us/blogs/2008/10/20/tbb-containers-vs-stl-performance-in-the-multi-core-age
==Business Point of View Comparison for STL and TBB==
''I find your lack of grain disturbing''
===Which library is better depending on the use case?===
One major aspect to look at when parallelizing a piece of code is the cost-benefit: is it worth the time and effort to parallelize a part of your software only to get a small performance gain, or is it faster to just keep it serial? Many of these questions must be answered when deciding whether or not to parallelize your code.
The real question is: when should you parallelize your code, and when should you just keep it serial?
TBB is for multi-threaded workloads, and STL is for single-threaded workloads. The fastest known serial algorithm may be difficult or impossible to parallelize.
Some aspects to look out for when parallelizing your code are:
*Overhead, whether it be in communication, idling, load imbalance, synchronization, or excess computation
*Efficiency, the measure of processor utilization in a parallel program
*Scalability, whether the efficiency can be kept constant as the number of processing elements is increased, provided that the problem size is also increased
*Correct problem size: when testing, a problem that is too small may show poor efficiency. If the problem size is always small, you would want to use serial instead; if the problem size is large and the efficiency is good, then parallel is the way to go
===Implementation Safety for TBB and STL ===
We are all human, and we do make mistakes. Fewer mistakes by developers means less wasted time.
*TBB deliberately makes the concurrent_vector container not support insert and erase operations. Only new items can be pushed back, and the vector cannot be shrunk.
**This prevents developers from writing bad code. If, for example, insert and erase operations were allowed on concurrent_vector, they could cause a big performance hit. That hit would burden both iterating and growing operations, which would not only make the concurrent containers in TBB useless, but also make your program inefficient.
Resource: http://ppomorsk.sharcnet.ca/Lecture_2_d_performance.pdf
*As already stated, most of the STL containers are not thread safe. Some operations on TBB containers are also not thread safe, like reserve() and clear() in concurrent_vector.
*Thread creation, termination, synchronization, and partitioning are managed by TBB. This creates a layer of safety on the programmer's end, as they do not have to deal with the threads themselves, making a developer less prone to mistakes in their code.
=== Identifying the worries and responsibilities ===
The increasing complexity of your code is a natural problem when working in parallel. Knowing the responsibilities, as in what you must worry about as a developer, is key when trying to quickly implement parallel regions in your code, or when deciding to just keep your code serial.
====STL and the Threading Libraries====
'''If you are going to try to parallelize your code using STL coupled with the threading libraries, this is what you must worry about''':
*Thread creation, termination, synchronization, partitioning, and management must be handled by you. This increases the workload and the complexity.
*The thread creation and overall resources for STL are managed by a combination of libraries.
*Dividing collections of data is more of a problem when using the STL containers, which, as mentioned, are not thread safe.
*C++11 does not have any parallel algorithms, so any common parallel patterns such as map, scan, and reduce must be implemented by yourself or by another library. The latest STL, though, C++17, will have some parallel algorithms like scan, map, and reduce.
The benefit of STL is that you must manage the threads and resources yourself, which gives you more control over the code and its fine-tuned optimizations. Nonetheless, managing the threads yourself is a double-edged sword: with more control, implementing the code takes more time and the level of complexity increases.
'''What you don’t need to worry about'''
*Making sorting, searching algorithms.
*Partitioning data.
*Array algorithms, like copying, assigning, and checking data
Note that all of these algorithms run serially and may not be thread safe.
====TBB Worries and Responsibilities====
*Thread creation, termination, synchronization, partitioning, and management are handled by TBB. This means you need not worry about the heavy thread constructs that are present in the lower levels of programming, close to the hardware.
*Your own parallel algorithms: for a simple map, scan, pipeline, or reduce pattern, TBB has you covered.
*Dividing collections of data: as already mentioned, the blocked range coupled with TBB's algorithms makes it simpler to divide the data.
'''Drawback'''
The downside of TBB is that, since much of the close-to-hardware management is done behind the scenes, you as a developer have less control over fine-tuning your program, unlike what STL with the threading library allows you to do.
 
 
===Licensing===
TBB is dual-licensed as of September 2016
 
*COM (commercial) license as part of suite products. Offers one year of technical support and product updates
 
*Apache v2.0 license for Open source code. Allows the user of the software the freedom to use the software for any purpose, to distribute it, to modify it, and to distribute modified versions of the software, under the terms of the license, without concern for royalties.
 
 
===Companies and Products that uses TBB===
*DreamWorks (DreamWorks Fur Shader)
 
*Blue Sky Studios (animation and simulation software)
 
*Pacific Northwest National Laboratory (Ultrasound products)
 
*More: https://software.intel.com/en-us/intel-tbb/reviews