Part III: Advanced Primitives
This section covers advanced synchronization primitives: atomics for lock-free operations, condition variables for efficient waiting, and shared locks for reader/writer patterns.
Prerequisites
- Completed Part II: Synchronization
- Understanding of mutexes, lock guards, and deadlocks
Atomics: Lock-Free Operations
For operations on individual values, mutexes might be overkill. Atomic types provide lock-free thread safety for single variables.
An atomic operation completes entirely before any other thread can observe its effects. There is no intermediate state.
#include <iostream>
#include <thread>
#include <atomic>

std::atomic<int> counter{0};

void increment_many_times()
{
    for (int i = 0; i < 100000; ++i)
        ++counter; // atomic increment
}

int main()
{
    std::thread t1(increment_many_times);
    std::thread t2(increment_many_times);
    t1.join();
    t2.join();
    std::cout << "Counter: " << counter << "\n";
    return 0;
}
No mutex, no lock guard, yet the result is always 200,000. The std::atomic<int> ensures that increments are indivisible.
When to Use Atomics
Atomics work best for single-variable operations: counters, flags, simple state. They are faster than mutexes when contention is low. But they cannot protect complex operations involving multiple variables—for that, you need mutexes.
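For example, two atomics do not make a compound update atomic. Here is a minimal sketch (the account names are illustrative): each store below is atomic on its own, but another thread can observe the state between the two stores, so the invariant that the balances sum to 100 can appear violated; a mutex covering both updates restores it.
#include <atomic>
#include <mutex>

std::atomic<int> checking{100};
std::atomic<int> savings{0};

void transfer_with_atomics(int amount)
{
    checking -= amount; // atomic on its own...
    savings += amount;  // ...but a reader can run between the two stores
}

std::mutex account_mutex;
int checking_balance = 100;
int savings_balance = 0;

void transfer_with_mutex(int amount)
{
    std::lock_guard<std::mutex> lock(account_mutex); // both updates appear as one step
    checking_balance -= amount;
    savings_balance += amount;
}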
Common atomic types include:
- std::atomic<bool> — Thread-safe boolean flag
- std::atomic<int> — Thread-safe integer counter
- std::atomic<T*> — Thread-safe pointer
- std::atomic<std::shared_ptr<T>> — Thread-safe shared pointer (C++20)
Any trivially copyable type can be made atomic.
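For instance, a small trivially copyable struct can be wrapped in std::atomic. Whether the result is lock-free depends on the platform, which is_lock_free() reports at runtime. A brief sketch with a hypothetical Point type:
#include <atomic>
#include <iostream>

struct Point { int x; int y; }; // trivially copyable, so std::atomic<Point> is allowed

int main()
{
    std::atomic<Point> p{Point{1, 2}};
    p.store(Point{3, 4});      // replaces the whole struct atomically
    Point snapshot = p.load(); // reads a consistent copy
    std::cout << std::boolalpha << p.is_lock_free() << "\n"; // may be false for larger types
    std::cout << snapshot.x << ", " << snapshot.y << "\n";
    return 0;
}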
Atomic Operations
std::atomic<int> value{0};
value.store(42); // atomic write
int x = value.load(); // atomic read
int old = value.exchange(10); // atomic read-modify-write
value.fetch_add(5); // atomic addition, returns old value
value.fetch_sub(3); // atomic subtraction, returns old value
// Compare-and-swap (CAS)
int expected = 10;
bool success = value.compare_exchange_strong(expected, 20);
// If value == expected, sets value = 20 and returns true
// Otherwise, sets expected = value and returns false
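Compare-and-swap is usually wrapped in a retry loop. The sketch below continues the snippet above and atomically doubles value, retrying if another thread changed it between the load and the exchange; compare_exchange_weak is the conventional choice inside a loop because it may fail spuriously but can be cheaper.
// CAS retry loop: atomically double `value`
int current = value.load();
while (!value.compare_exchange_weak(current, current * 2))
{
    // on failure, `current` now holds the latest value; compute again and retry
}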
Condition Variables: Efficient Waiting
Sometimes a thread must wait for a specific condition before proceeding. You could loop, repeatedly checking:
// Inefficient busy-wait
while (!ready)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
This works but wastes CPU cycles and introduces latency. Condition variables provide efficient waiting.
A condition variable allows one thread to signal others that something has changed. Waiting threads sleep until notified, consuming no CPU.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>

std::mutex mtx;
std::condition_variable cv;
bool ready = false;

void worker()
{
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, []{ return ready; }); // wait until ready is true
    std::cout << "Worker proceeding!\n";
}

void signal_ready()
{
    {
        std::lock_guard<std::mutex> lock(mtx);
        ready = true;
    }
    cv.notify_one(); // wake one waiting thread
}

int main()
{
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    signal_ready();
    t.join();
    return 0;
}
The worker thread calls cv.wait(), which atomically releases the mutex and suspends the thread. When signal_ready() calls notify_one(), the worker wakes up, reacquires the mutex, checks the condition, and proceeds.
The Predicate
The lambda []{ return ready; } is the predicate. wait() will not return until this evaluates to true. This guards against spurious wakeups—rare events where a thread wakes without notification. Always use a predicate.
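The predicate overload is shorthand for a loop that re-checks the condition after every wakeup, which is what makes spurious wakeups harmless. Roughly, using the names from the example above:
std::unique_lock<std::mutex> lock(mtx);
while (!ready)     // re-check the condition after every wakeup
    cv.wait(lock); // a spurious wakeup simply loops back to sleep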
Notification Methods
- notify_one() — Wake a single waiting thread
- notify_all() — Wake all waiting threads
Use notify_one() when only one thread needs to proceed (e.g., producer-consumer with single consumer). Use notify_all() when multiple threads might need to check the condition (e.g., broadcast events, shutdown signals).
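As a sketch of the broadcast case (the names here are illustrative, not taken from the example above): a shutdown flag is set once, and notify_all() wakes every worker so each one can observe it.
std::mutex shutdown_mtx;
std::condition_variable shutdown_cv;
bool shutting_down = false;

void wait_for_shutdown()
{
    std::unique_lock<std::mutex> lock(shutdown_mtx);
    shutdown_cv.wait(lock, []{ return shutting_down; });
    // perform per-thread cleanup here
}

void request_shutdown()
{
    {
        std::lock_guard<std::mutex> lock(shutdown_mtx);
        shutting_down = true;
    }
    shutdown_cv.notify_all(); // every waiting worker wakes and sees the flag
}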
Shared Locks: Readers and Writers
Consider a data structure that is read frequently but written rarely. A regular mutex serializes all access—but why block readers from each other? Multiple threads can safely read simultaneously; only writes require exclusive access.
Shared mutexes support this pattern:
#include <iostream>
#include <thread>
#include <shared_mutex>
#include <vector>

std::shared_mutex rw_mutex;
std::vector<int> data;

void reader(int id)
{
    std::shared_lock<std::shared_mutex> lock(rw_mutex); // shared access
    std::cout << "Reader " << id << " sees " << data.size() << " elements\n";
}

void writer(int value)
{
    std::unique_lock<std::shared_mutex> lock(rw_mutex); // exclusive access
    data.push_back(value);
    std::cout << "Writer added " << value << "\n";
}
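One way to drive these functions (a sketch reusing the includes above): each writer runs alone, while the readers may overlap with one another.
int main()
{
    std::vector<std::thread> threads;
    threads.emplace_back(writer, 1);
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(reader, i); // readers can hold the shared lock together
    threads.emplace_back(writer, 2);
    for (auto& t : threads)
        t.join();
    return 0;
}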
Lock Types
- std::shared_lock — Acquires a shared lock; multiple threads can hold shared locks simultaneously.
- std::unique_lock (on shared_mutex) — Acquires an exclusive lock; no other locks (shared or exclusive) can be held.
Behavior
- While any reader holds a shared lock, writers must wait
- While a writer holds an exclusive lock, everyone waits
- Multiple readers can proceed simultaneously
This pattern maximizes concurrency for read-heavy workloads. Use std::shared_mutex when reads vastly outnumber writes.
Example: Thread-Safe Cache
#include <shared_mutex>
#include <unordered_map>
#include <string>
#include <optional>

class ThreadSafeCache
{
    std::unordered_map<std::string, std::string> cache_;
    mutable std::shared_mutex mutex_;

public:
    std::optional<std::string> get(std::string const& key) const
    {
        std::shared_lock lock(mutex_); // readers can proceed in parallel
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;
        return std::nullopt;
    }

    void put(std::string const& key, std::string const& value)
    {
        std::unique_lock lock(mutex_); // exclusive access for writing
        cache_[key] = value;
    }
};
Multiple threads can call get() simultaneously without blocking each other. Only put() requires exclusive access.
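A minimal usage sketch (the key and value strings are made up): one thread populates the cache while another looks it up; depending on timing the lookup may hit or miss, but it is always safe.
#include <iostream>
#include <thread>

int main()
{
    ThreadSafeCache cache;
    std::thread producer([&]{ cache.put("greeting", "hello"); });
    std::thread consumer([&]{
        if (auto value = cache.get("greeting"))
            std::cout << "hit: " << *value << "\n";
        else
            std::cout << "miss\n";
    });
    producer.join();
    consumer.join();
    return 0;
}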
You have now learned about atomics, condition variables, and shared locks. In the next section, you will explore communication patterns: futures, promises, async, and practical concurrent patterns.