Part II: Synchronization
This section introduces the dangers of shared data access and the synchronization primitives that protect against them. You will learn about race conditions, mutexes, lock guards, and deadlocks.
Prerequisites
- Completed Part I: Foundations
- Understanding of threads and their lifecycle
The Danger: Race Conditions
When multiple threads read the same data, all is well. But when at least one thread writes while others read or write, you have a data race. The result is undefined behavior—crashes, corruption, or silent errors.
Consider this code:
#include <iostream>
#include <thread>

int counter = 0;

void increment_many_times()
{
    for (int i = 0; i < 100000; ++i)
        ++counter;
}

int main()
{
    std::thread t1(increment_many_times);
    std::thread t2(increment_many_times);
    t1.join();
    t2.join();
    std::cout << "Counter: " << counter << "\n";
    return 0;
}
Two threads, each incrementing 100,000 times. You would expect 200,000. But run this repeatedly and you will see different results—180,000, 195,327, maybe occasionally 200,000. Something is wrong.
The ++counter operation looks atomic—indivisible—but it is not. It actually consists of three steps:
- Read the current value
- Add one
- Write the result back
Between any of these steps, the other thread might execute its own steps. Imagine both threads read counter when it is 5. Both add one, getting 6. Both write 6 back. Two increments, but the counter only went up by one. This is a lost update, a classic race condition.
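The lost update is easier to see when the hidden steps are spelled out. The following sketch shows what a single ++counter conceptually expands to; the temp variable is illustrative, standing in for a CPU register:

// What ++counter conceptually expands to. A context switch between
// any two of these steps lets another thread's update be lost.
int temp = counter; // step 1: read the current value
temp = temp + 1;    // step 2: add one
counter = temp;     // step 3: write the result back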
The more threads, the more opportunity for races. The faster your processor, the more instructions execute between context switches, potentially hiding the bug—until one critical day in production.
Mutual Exclusion: Mutexes
The solution to data races is mutual exclusion: ensuring that only one thread accesses shared data at a time.
A mutex (mutual exclusion object) is a lockable resource. Before accessing shared data, a thread locks the mutex. If another thread already holds the lock, the requesting thread blocks until the lock is released. This serializes access to the protected data.
#include <iostream>
#include <thread>
#include <mutex>

int counter = 0;
std::mutex counter_mutex;

void increment_many_times()
{
    for (int i = 0; i < 100000; ++i)
    {
        counter_mutex.lock();
        ++counter;
        counter_mutex.unlock();
    }
}

int main()
{
    std::thread t1(increment_many_times);
    std::thread t2(increment_many_times);
    t1.join();
    t2.join();
    std::cout << "Counter: " << counter << "\n";
    return 0;
}
Now the output is always 200,000. The mutex ensures that only one thread at a time executes the code between lock() and unlock(). The increment is now effectively atomic.
But there is a problem with calling lock() and unlock() directly. If code between them throws an exception, unlock() never executes. The mutex stays locked forever, and any thread waiting for it blocks eternally—a deadlock.
Lock Guards: Safety Through RAII
C++ has a powerful idiom: RAII (Resource Acquisition Is Initialization). The idea: acquire resources in a constructor, release them in the destructor. Since destructors run even when exceptions are thrown, cleanup is guaranteed.
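To make the idiom concrete, here is a minimal RAII sketch around a C file handle; the FileHandle class is illustrative, not part of the standard library:

#include <cstdio>
#include <stdexcept>

// Illustrative RAII wrapper: the resource is acquired in the
// constructor and released in the destructor, so cleanup happens
// even if an exception propagates through the enclosing scope.
class FileHandle
{
public:
    explicit FileHandle(const char* path)
        : file_(std::fopen(path, "r"))
    {
        if (!file_)
            throw std::runtime_error("could not open file");
    }
    ~FileHandle() { std::fclose(file_); }

    // Non-copyable: the wrapper is the sole owner of the resource.
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;

    std::FILE* get() const { return file_; }

private:
    std::FILE* file_;
};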
Lock guards apply RAII to mutexes:
#include <iostream>
#include <thread>
#include <mutex>

int counter = 0;
std::mutex counter_mutex;

void increment_many_times()
{
    for (int i = 0; i < 100000; ++i)
    {
        std::lock_guard<std::mutex> lock(counter_mutex);
        ++counter;
        // lock is automatically released when it goes out of scope
    }
}
The std::lock_guard locks the mutex on construction and unlocks it on destruction. Even if an exception is thrown, the destructor runs and the mutex is released. This is the correct way to use mutexes.
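To see the exception safety in action, consider this sketch; the process function is hypothetical and stands in for any operation that might throw:

#include <mutex>
#include <vector>

std::mutex data_mutex;
std::vector<int> shared_data;

void process(int value); // hypothetical operation that may throw

void safe_update(int value)
{
    std::lock_guard<std::mutex> lock(data_mutex);
    shared_data.push_back(value);
    process(value); // if this throws, ~lock_guard still unlocks data_mutex
}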
std::scoped_lock (C++17)
Since C++17, std::scoped_lock is preferred. It works like lock_guard but can lock multiple mutexes simultaneously, avoiding a whole class of deadlocks:
std::scoped_lock lock(counter_mutex); // C++17
std::unique_lock
For more control, use std::unique_lock. It can be unlocked before destruction, moved to another scope, or created without immediately locking:
std::unique_lock<std::mutex> lock(some_mutex, std::defer_lock);
// mutex not yet locked
lock.lock(); // lock when ready
// ... do work ...
lock.unlock(); // unlock early if needed
// ... do other work ...
// the destructor unlocks the mutex only if it is still held
std::unique_lock is more flexible but slightly more expensive than std::lock_guard. Use the simplest tool that does the job.
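As a sketch of that flexibility, a std::unique_lock can be returned from the function that acquired it, transferring ownership of the lock; the names here are illustrative:

#include <mutex>

std::mutex state_mutex;

std::unique_lock<std::mutex> acquire_state_lock()
{
    std::unique_lock<std::mutex> lock(state_mutex);
    // ... initialize shared state while holding the lock ...
    return lock; // moved out, not copied; the mutex stays locked
}

void caller()
{
    std::unique_lock<std::mutex> lock = acquire_state_lock();
    // still holding state_mutex; released when lock is destroyed
}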
The Deadlock Dragon
Mutexes solve data races but introduce a new danger: deadlock.
Imagine two threads and two mutexes. Thread A locks mutex 1, then tries to lock mutex 2. Thread B locks mutex 2, then tries to lock mutex 1. Each thread holds one mutex and waits for the other. Neither can proceed. The program freezes.
std::mutex mutex1, mutex2;

void thread_a()
{
    std::lock_guard<std::mutex> lock1(mutex1);
    std::lock_guard<std::mutex> lock2(mutex2); // blocks, waiting for B
    // ...
}

void thread_b()
{
    std::lock_guard<std::mutex> lock2(mutex2);
    std::lock_guard<std::mutex> lock1(mutex1); // blocks, waiting for A
    // ...
}
If both threads run and each acquires its first mutex before the other acquires the second, deadlock occurs.
Preventing Deadlock
The simplest prevention: always lock mutexes in the same order. If every thread locks mutex1 before mutex2, no cycle can form.
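Applied to the earlier example, the fix is a one-line change of order: both threads agree to lock mutex1 before mutex2, so no circular wait can form:

void thread_a()
{
    std::lock_guard<std::mutex> lock1(mutex1); // same order in both threads
    std::lock_guard<std::mutex> lock2(mutex2);
    // ...
}

void thread_b()
{
    std::lock_guard<std::mutex> lock1(mutex1); // same order in both threads
    std::lock_guard<std::mutex> lock2(mutex2);
    // ...
}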
When you need to lock multiple mutexes and cannot guarantee order, use std::scoped_lock:
void safe_function()
{
    std::scoped_lock lock(mutex1, mutex2); // locks both without risk of deadlock
    // ...
}
std::scoped_lock uses a deadlock-avoidance algorithm internally, acquiring both mutexes without risk of circular waiting.
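Before C++17, the same effect could be achieved with std::lock, which applies the same kind of deadlock-avoidance algorithm, combined with std::adopt_lock so the guards take over ownership of already-locked mutexes (the function name below is illustrative):

void safe_function_pre17()
{
    std::lock(mutex1, mutex2); // acquires both, deadlock-free
    std::lock_guard<std::mutex> lock1(mutex1, std::adopt_lock);
    std::lock_guard<std::mutex> lock2(mutex2, std::adopt_lock);
    // ...
}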
Deadlock Prevention Rules
- Lock in consistent order — Define a global ordering for mutexes and always lock in that order
- Use std::scoped_lock for multiple mutexes — Let the library handle deadlock avoidance
- Hold locks for minimal time — Reduce the window for contention (see the sketch after this list)
- Avoid nested locks when possible — Simpler designs prevent deadlock by construction
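As a sketch of the third rule, expensive work that needs no shared state belongs outside the critical section; expensive_transform here is hypothetical:

#include <mutex>
#include <vector>

std::mutex results_mutex;
std::vector<int> results;

int expensive_transform(int raw); // hypothetical, touches no shared state

void record(int raw)
{
    int processed = expensive_transform(raw); // done without holding the lock
    std::lock_guard<std::mutex> lock(results_mutex);
    results.push_back(processed); // lock held only for the shared update
}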
You have now learned about race conditions, mutexes, lock guards, and deadlocks. In the next section, you will explore advanced synchronization primitives: atomics, condition variables, and shared locks.