GPU621/Group 1

The output demonstrates that using two threads (479 ms) results in a longer program execution time than using a single thread (355 ms). This is because the two threads increment two different counter variables that sit next to each other in memory and therefore fall on the same cache line. The threads end up competing for ownership of that cache line, which causes repeated cache-line invalidations and slower performance. If the two counter variables were placed on separate cache lines, the program would likely run faster with multiple threads.
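The group's original benchmark is not reproduced here, but the following is a minimal sketch of the kind of program that produces this behaviour. The struct layout, member names, and iteration count are illustrative assumptions; std::atomic counters are used only so the increments are not optimized away, and the two adjacent members are very likely to land on the same cache line.

<syntaxhighlight lang="cpp">
// Illustrative sketch (not the original benchmark): two threads increment two
// different counters that are adjacent in memory and therefore almost certainly
// share a cache line, which triggers false sharing.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

constexpr std::int64_t N = 50'000'000;  // assumed iteration count

struct Counters {
    std::atomic<std::int64_t> a{0};  // adjacent members: very likely on the same cache line
    std::atomic<std::int64_t> b{0};
};

int main() {
    Counters c;

    auto start = std::chrono::steady_clock::now();
    std::thread t1([&c] { for (std::int64_t i = 0; i < N; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&c] { for (std::int64_t i = 0; i < N; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join();
    t2.join();
    auto end = std::chrono::steady_clock::now();

    std::cout << "two threads: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
              << " ms\n";
}
</syntaxhighlight>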
 
== Summary ==
 
False sharing is a performance problem that can occur in multi-threaded applications when different threads access different variables that reside on the same cache line. Performance suffers because of needless cache-line invalidations and cache-line transfers between cores. The first steps in a false sharing analysis are to find the cache lines that are shared by several threads and then to identify the variables that cause the false sharing. Profiling tools that track cache-line accesses and detect contended cache lines can be used for this.
 
Once false sharing has been identified, a number of tactics can be used to lessen its performance impact. These include padding the affected variables so that they land on separate cache lines, rearranging data structures to reduce the chance of false sharing, or using thread-local storage to eliminate the need for shared variables. Analyzing false sharing effectively requires a good understanding of the underlying hardware architecture and caching mechanisms, and multi-threaded applications must be designed and tested carefully to reduce the risk of false sharing and other performance problems.
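As one concrete illustration of the padding tactic, the sketch below forces each counter onto its own cache line with alignas. The 64-byte line size and the member names are assumptions; on C++17 compilers, std::hardware_destructive_interference_size can be used instead of a hard-coded 64.

<syntaxhighlight lang="cpp">
// Padding/alignment sketch: each counter is aligned to its own (assumed 64-byte)
// cache line, so two threads updating a and b no longer touch the same line.
#include <atomic>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kCacheLine = 64;  // assumption; query the hardware if possible

struct PaddedCounters {
    alignas(kCacheLine) std::atomic<std::int64_t> a{0};  // starts its own cache line
    alignas(kCacheLine) std::atomic<std::int64_t> b{0};  // forced onto a different line
};

// With 64-byte alignment on both members, the struct is at least 128 bytes,
// so a and b can never share a cache line.
static_assert(sizeof(PaddedCounters) >= 2 * kCacheLine, "counters padded apart");
</syntaxhighlight>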
 
In conclusion, watch out for false sharing because it kills scalability. The general situation to look for is two objects or fields that are frequently read or written by different threads, where at least one of the threads is writing and the objects are close enough in memory to fall on the same cache line. Use performance analysis tools and CPU monitors, as detecting false sharing isn't always simple. Finally, you can reduce false sharing by lowering the frequency of updates to the falsely shared variables; for instance, update local data rather than the shared variable. Additionally, by padding or aligning data so that no other data comes immediately before or after a key object on the same cache line, you can ensure that a variable is completely unshared.
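To make the "update local data rather than the shared variable" advice concrete, here is a small sketch; the thread count, loop bound, and variable names are assumptions rather than the group's original code. Each thread accumulates into a private local and touches the shared total only once, so the threads never contend for the same cache line during the hot loop.

<syntaxhighlight lang="cpp">
// Sketch of reducing updates to a shared variable: each thread works on a
// thread-private local and writes to the shared total exactly once at the end.
#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

constexpr int kThreads = 4;                      // assumed thread count
constexpr std::int64_t kPerThread = 25'000'000;  // assumed work per thread

int main() {
    std::atomic<std::int64_t> total{0};
    std::vector<std::thread> pool;

    for (int t = 0; t < kThreads; ++t) {
        pool.emplace_back([&total] {
            std::int64_t local = 0;              // private: no cache-line contention
            for (std::int64_t i = 0; i < kPerThread; ++i)
                ++local;
            total.fetch_add(local, std::memory_order_relaxed);  // one shared update
        });
    }
    for (auto& th : pool) th.join();

    std::cout << total << '\n';  // expected: kThreads * kPerThread
}
</syntaxhighlight>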