Parallelism with Rust
== Locks (Mutex) ==
The Rust language also supports shared-state concurrency, which allows two or more threads to read from and write to a common piece of data. Sharing data between multiple threads can get complicated because it introduces the hazard of race conditions: if one thread grabs some data and attempts to change it while another thread is starting to read the same value, there is no way to predict whether the latter retrieves the updated data or gets hold of the old value. Shared-state concurrency therefore has to deliver some mechanism for guarding and synchronizing access.

Access to the shared state (the critical section) is typically synchronized through a mutex. Mutual exclusion (mutex) prevents race conditions by allowing only one thread to access a given piece of data at any one time. If a thread wants to read or write that data, it must first acquire the mutex's lock. The lock is a data structure that keeps track of who currently has exclusive access to the data; you can think of it as a dedicated mechanism that guards the critical section. Because Rust enforces its ownership rules here as well, the compiler guarantees that only the thread holding the lock can touch the data.

The snippet below demonstrates how you can create and access a mutex in Rust:

<source lang="rust">
use std::sync::Mutex;

fn use_lock(mutex: &Mutex<Vec<i32>>) {
    let mut guard = mutex.lock().unwrap(); // acquire the lock, blocking until it is available
    guard.push(42);                        // mutate the shared data through the guard
} // the guard goes out of scope here and the lock is released automatically
</source>

The Mutex construct is generic and wraps the piece of shared data protected by the lock. It is important to remember that ownership of that data is transferred into the mutex structure at its creation. The lock() method attempts to acquire the lock, blocking the current thread until the mutex becomes available. The data is automatically unlocked once the guard returned by lock() is dropped (at the end of its scope), so there is no need to release the mutex lock manually.
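To make the scope-based unlocking concrete, here is a minimal, self-contained sketch (not part of the original snippet, and single-threaded purely for illustration) showing the lock being acquired in an inner scope and released automatically when the guard is dropped:

<source lang="rust">
use std::sync::Mutex;

fn main() {
    // the vector is moved into the mutex and can only be reached through lock()
    let shared = Mutex::new(vec![1, 2, 3]);

    {
        let mut guard = shared.lock().unwrap(); // acquire the lock
        guard.push(42);                         // mutate the protected data
    } // guard is dropped here, so the lock is released

    // the lock can be acquired again to read the result
    println!("{:?}", shared.lock().unwrap()); // prints [1, 2, 3, 42]
}
</source>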
== RC and Atomic ==
In Rust, some data types are defined as “thread safe” while others are not. For example, Rc<T>, Rust’s reference-counted smart pointer, is considered unsafe to share across threads. This type keeps track of the number of references and increments/decrements the count each time a new reference is created or an old one is destroyed, but it does not enforce any mechanism ensuring that changes to the reference counter cannot be interrupted by another thread. The snippet below demonstrates the issue and results in a compiler error:

<source lang="rust">
use std::rc::Rc;
use std::sync::Mutex;
use std::thread;

fn main() {
    let mutex = Rc::new(Mutex::new(2));
    let mut handles = vec![];

    for _ in 0..4 {
        let mutex = mutex.clone();
        let handle = thread::spawn(move || {
            let mut num = mutex.lock().unwrap();
            *num *= 2;
            println!("Intermediate Result : {}", *num);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final Result: {}", *mutex.lock().unwrap());
}
</source>

This will produce an error:

<source>
error[E0277]: the trait bound `std::rc::Rc<std::sync::Mutex<i32>>: std::marker::Send` is not satisfied in `[closure@src/main.rs:11:32: 16:6 mutex:std::rc::Rc<std::sync::Mutex<i32>>]`
  --> src/main.rs:11:18
   |
11 |         let handle = thread::spawn(move || {
   |                      ^^^^^^^^^^^^^ `std::rc::Rc<std::sync::Mutex<i32>>` cannot be sent between threads safely
   |
   = help: within `[closure@src/main.rs:11:32: 16:6 mutex:std::rc::Rc<std::sync::Mutex<i32>>]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::sync::Mutex<i32>>`
   = note: required because it appears within the type `[closure@src/main.rs:11:32: 16:6 mutex:std::rc::Rc<std::sync::Mutex<i32>>]`
   = note: required by `std::thread::spawn`
</source>

Luckily for us, Rust ships a thread-safe counterpart of this smart pointer, Arc<T>, which implements reference counting via atomic operations. It is important to note that in serial code it makes much more sense to use the standard Rc<T> over Arc<T>, since the atomic operations make the latter more expensive.

<source lang="rust">
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let mutex = Arc::new(Mutex::new(2));
    let mut handles = vec![];

    for _ in 0..4 {
        let mutex = mutex.clone();
        let handle = thread::spawn(move || {
            let mut num = mutex.lock().unwrap();
            *num *= 2;
            println!("Intermediate Result : {}", *num);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final Result: {}", *mutex.lock().unwrap());
}
</source>

This will produce the expected result:

<source>
Intermediate Result : 4
Intermediate Result : 8
Intermediate Result : 16
Intermediate Result : 32
Final Result: 32
</source>
This time the code compiles and produces the expected output. The example itself is simple, but the same Arc<Mutex<T>> pattern scales to much more complicated algorithms.
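As a brief illustration of that claim (a sketch of the same pattern, not code from the original article), the snippet below uses a hypothetical Arc<Mutex<Vec<i32>>> so that several worker threads can append their partial results to one shared vector:

<source lang="rust">
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // shared, lock-protected vector of results
    let results = Arc::new(Mutex::new(Vec::new()));
    let mut handles = vec![];

    for id in 0..4 {
        let results = Arc::clone(&results);
        handles.push(thread::spawn(move || {
            let partial = id * 10;                 // stand-in for a real per-thread computation
            results.lock().unwrap().push(partial); // hold the lock only long enough to store it
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Collected: {:?}", *results.lock().unwrap());
}
</source>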
== Group Members ==