Previously, we've seen different ways to create a thread instance and how to manage it using the join() and detach() methods.
In this article, I will discuss move semantics of thread ownership, mutexes, and other related topics.
Move Ownership of a Thread:
In C++, the <thread> header lets us move the ownership of an executable thread between different thread instances/objects using the std::move function.
As std::thread is a resource-owning type, a thread of execution can only be moved between instances, never copied, which guarantees that exactly one object owns a given thread of execution at any time.
Okay, enough talking! Let's have a look at the example code:
void my_work(); // callable function
void my_other_work();
std::thread my_worker1(my_work); // construction of a thread object specifying the task
std::thread my_worker2; // default-constructed thread object
my_worker2 = std::move(my_worker1); // moving ownership of the executable thread
my_worker1 = std::thread(my_other_work); // construction of a new thread associated with a temporary object
Code Example:
#include <iostream>
#include <thread>
int count = 0; // shared by both threads without synchronization (a data race, as we'll see later)
void my_work() {
while (count++ < 2) {
std::cout << "Doing my work!" << "\n";
}
std::cout << "My work is complete!" << "\n";
}
void my_other_work() {
while (count++ < 2) {
std::cout << "Doing my other work!" << "\n";
}
std::cout << "My other work is complete!" << "\n";
}
int main() {
std::thread my_worker1(my_work);
std::cout << "Thread Id of my_worker1 is " << my_worker1.get_id() << "\n";
std::thread my_worker2;
my_worker2 = std::move(my_worker1);
my_worker1 = std::thread(my_other_work);
std::cout << "Thread Id of my_worker1 is " << my_worker1.get_id() << "\n";
std::cout << "Thread Id of my_worker2 is " << my_worker2.get_id() << "\n";
my_worker1.join();
my_worker2.join();
return 0;
}
Code Output:
Thread Id of my_worker1 is 2
Thread Id of my_worker1 is 3
Thread Id of my_worker2 is 2
Doing my work!
My work is complete!
Doing my other work!
My other work is complete!
Explanation:
In the code above, we first created a new thread my_worker1 in the main function using the std::thread constructor, associating it with the my_work function.
Next, we created a default-constructed thread object my_worker2 (i.e. it is not associated with any thread of execution), into which we moved the my_worker1 thread object using the std::move function.
Doing this transfers ownership of the underlying thread from my_worker1 to my_worker2, leaving my_worker1 without an associated thread of execution.
After this, we created a new thread associated with a std::thread temporary object and assigned it to the my_worker1 thread object.
my_worker1 = std::thread(my_other_work);
When we call the std::thread constructor specifying only the task, it immediately launches a new thread of execution and hands its ownership to the temporary object being constructed.
I've then used the move assignment operator to transfer that ownership from the temporary returned by the std::thread constructor to the thread object my_worker1. As you can see, we don't need to call std::move for such assignment operations, because a temporary is an rvalue and gets moved automatically.
Here, I've used the get_id member function to print the underlying thread ID, to show that ownership is actually being transferred.
One thing to note: the thread IDs may differ on your system, as they're assigned by the OS and can vary from one run to the next. A thread ID is not guaranteed to be unique over time (i.e. thread IDs can be recycled) or to have any particular meaning. It is simply a numerical identifier used to distinguish one thread from another within the same program.
If for some reason you don't want to use the std::move function, you can also use std::thread::swap to swap the ownership of the underlying threads:
thread_name.swap(other_thread_name);
or,
std::swap(thread_name, other_thread_name);
Now, I guess we should move to our next topic.
Race Condition and Data Race:
Suppose you face a scenario where more than one thread tries to access and modify the same shared data concurrently, and the final outcome differs depending on the order in which they access it. This can cause reliability issues in the program; this is what we call a Race Condition.
A Data Race occurs when two or more threads access the same memory location simultaneously and at least one of them modifies it, resulting in undefined behaviour and potentially corrupt data at that location.
A race condition can be caused by a data race and vice versa, but the two are distinct problems.
Let's see this with an example.
Code Example:
#include <iostream>
#include <thread>
int shared_counter = 0;
void increment_counter() {
for (int i = 0; i < 100000; i++) {
//--critical region--//
shared_counter++;
}
}
int main() {
std::thread my_worker1(increment_counter);
std::thread my_worker2(increment_counter);
my_worker1.join();
my_worker2.join();
std::cout << "Final counter value: " << shared_counter << std::endl;
return 0;
}
Before I explain the code, it's important to know that when we construct a std::thread object specifying the task, the C++ threading API uses the lower-level API provided by the OS (e.g. WinAPI on Windows) to launch a thread of execution.
In the process, the OS allocates the associated kernel resources and stack space and then adds the new thread to the scheduler. After that, the thread scheduler executes those threads according to its scheduling algorithm (FCFS, time slicing, priority-based).
Different OSes may use different approaches to schedule threads. Windows follows a priority-driven, preemptive scheduling system, which means the thread scheduler can interrupt a running thread at any time, putting it into a wait state to allow another thread with higher priority to execute.
Now that we have a bit of an idea about the underlying details let's get back to the code example.
Explanation:
So here, we have two concurrent threads, my_worker1 and my_worker2, both working on the same shared_counter variable and performing the increment operation 100k times each. Cool!
It would be a fair guess to think the output will be 200k. But after running the code quite a few times, the outcome is nowhere near the expected result (at least for me). Why is this happening?
As I said, these two threads run concurrently, so the progress made by one thread can be overwritten by the other because of the unsynchronized interleaving of their instructions, i.e. the data race.
Here is the visualization of what might have happened.
Even though each thread has performed the increment operation once, the final value stored in shared_counter is only 1, which is not the desired output.
Disclaimer: this visualization shows only one of many possible scenarios; the order in which the threads access the shared_counter variable can change. Now run the for loop 200k times and it ends up in complete chaos...
Since a race condition depends on the relative ordering of instructions, much of the time it won't cause any issues (a benign race condition) as long as the invariants are not broken. But a problematic race condition results in a broken invariant (final value ≠ expected value).
So, it is evident that we need some kind of synchronization for our threads. This is where Mutual Exclusion, or a Mutex, comes into play!
Thread Mutexes:
The std::mutex class is a synchronization primitive that provides a locking mechanism, which can be used to protect shared data from being simultaneously accessed by multiple threads.
Code Example:
#include <iostream>
#include <thread>
#include <mutex>
int shared_counter = 0;
std::mutex key;
void increment_counter() {
for (int i = 0; i < 100000; i++) {
key.lock(); // lock critical region
shared_counter++;
key.unlock(); // unlock critical region
}
}
int main() {
std::thread my_worker1(increment_counter);
std::thread my_worker2(increment_counter);
my_worker1.join();
my_worker2.join();
std::cout << "Final counter value: " << shared_counter << std::endl;
return 0;
}
Explanation:
As you can see, we have created a std::mutex object called key. Using the lock() and unlock() member functions of the std::mutex class, a thread (whichever reaches the mutex first) locks and unlocks the critical region around the shared_counter variable.
While the critical region is locked, the other threads are blocked until it gets unlocked, so the data race between the two threads is avoided.
Blocking a thread may not always be the optimal approach. To avoid it, we can use std::mutex::try_lock() rather than std::mutex::lock():
void increment_counter() {
for (int i = 0; i < 100000; i++) {
if(key.try_lock()) { // try to lock critical region
shared_counter++;
key.unlock(); // unlock critical region
}
}
}
Instead of putting the thread into a blocked state, try_lock() returns the boolean value false immediately when the mutex is already held, allowing the thread to try again later. Note that in this version an iteration whose try_lock() fails skips its increment entirely, so the final counter value will generally fall short of 200k.
Here goes a visualization of what may happen if we introduce try_lock() in our code.
“With greater flexibility comes greater responsibility.”
Although try_lock() solves a key issue here, it may introduce unexpected behaviour. If not implemented properly, it can also lead to a livelock, where two or more threads constantly try and fail to acquire the lock, wasting CPU cycles.
That's why the C++ standard library provides a special kind of mutex, std::timed_mutex, which comes with the std::timed_mutex::try_lock_for() and std::timed_mutex::try_lock_until() member functions.
Deadlock and Livelock:
A deadlock occurs when two or more blocked threads each wait for the other to release a resource, such as a mutex, that never gets released; the threads stop making progress and wait indefinitely.
As bad as it sounds, a deadlock can easily hang our application for an indefinite amount of time unless some external intervention occurs.
Let's try to recreate a deadlock situation,
Code Example:
#include <iostream>
#include <thread>
#include <mutex>
#include <vector>
std::mutex cout_key;
void my_work() {
cout_key.lock();
std::cout << "Thread" << '[' << std::this_thread::get_id() << ']' << " acquired the lock" << '\n';
//cout_key.unlock();
}
int main() {
std::vector<std::thread> my_threads;
for(int i=0; i<2; i++) {
my_threads.emplace_back(my_work);
}
for(auto& i: my_threads) {
i.join();
}
return 0;
}
One of the simplest kinds of deadlock can occur when a lock is acquired by a thread but never released.
In this example, the thread that locks the mutex first will print the output without releasing the lock, causing the other thread to be blocked upon attempting to acquire the lock and unable to proceed further.
A livelock, on the other hand, is similar to a deadlock, but instead of the threads being blocked, they constantly run an active checking loop, consuming CPU resources without making any real progress.
Code Example:
#include <iostream>
#include <thread>
#include <mutex>
#include <vector>
std::mutex cout_key;
void my_work() {
while(true) {
if(cout_key.try_lock()) {
std::cout << "Thread" << '[' << std::this_thread::get_id() << ']' << " acquired the lock\n";
//cout_key.unlock();
return;
}
}
}
int main() {
std::vector<std::thread> my_threads;
for(int i=0; i<2; i++) {
my_threads.emplace_back(my_work);
}
for(auto& i: my_threads) {
i.join();
}
return 0;
}
Similar to the previous scenario, the thread that acquires the cout_key mutex first prints the output without releasing it. As a result, the second thread keeps attempting to acquire the mutex in an infinite loop.
Whether it is a deadlock or a livelock, it is indeed one of the worst problems to face while developing any multithreaded applications.
To avoid situations like this, we can follow some guidelines, such as keeping critical sections short, always acquiring multiple mutexes in the same order, and putting a time bound on the waiting.
We can also use some C++ facilities while working with locks, such as RAII (i.e. scope-bound resource management) via std::lock_guard, or std::scoped_lock to acquire more than one lock at once.
"Prevention is better than a cure"
In conclusion, threads can be a hassle to debug, which is why it's important to follow best practices from the very beginning, so that identifying and fixing bugs doesn't become unmanageable and time-consuming later on.