Executors and Execution Contexts
This section explains executors and execution contexts—the mechanisms that control where and how coroutines execute.
Prerequisites
- Completed Launching Coroutines
- Understanding of `run_async` and `run`
The Executor Concept
An executor is an object that can schedule work for execution. In Capy, an executor must provide dispatch and post methods, plus access to its execution context:
template<class E>
concept Executor = requires(E ex, coro h) {
{ ex.dispatch(h) } -> std::same_as<coro>;
{ ex.post(h) } -> std::same_as<void>;
{ ex.context() } -> std::convertible_to<execution_context&>;
};
dispatch() vs post()
Both methods schedule a coroutine for execution, but with different semantics:
- `dispatch(h)` - May execute `h` inline if the current thread is already associated with the executor. Returns a coroutine handle: either `h` if execution was deferred, or `std::noop_coroutine()` if `h` was executed immediately. This enables symmetric transfer optimization.
- `post(h)` - Always queues `h` for later execution. Never executes inline. Returns void. Use it when you need guaranteed asynchrony.
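The contract above can be illustrated with a toy, non-coroutine sketch in standard C++. This is not Capy's executor; `toy_executor` and its `running_` flag are illustrative stand-ins (a real executor checks thread identity rather than a flag, and `dispatch` returns a coroutine handle rather than void):

```cpp
#include <cassert>
#include <deque>
#include <functional>

// Toy executor illustrating the dispatch()/post() contract.
// "running_" stands in for "the current thread is already
// associated with this executor".
class toy_executor
{
    std::deque<std::function<void()>> queue_;
    bool running_ = false;

public:
    // dispatch: may execute inline when already "on" the executor.
    void dispatch(std::function<void()> f)
    {
        if (running_) { queue_.push_back(std::move(f)); return; }
        running_ = true;
        f();            // executed inline
        drain();        // then run anything that was queued meanwhile
        running_ = false;
    }

    // post: never inline; always queued for later execution.
    void post(std::function<void()> f)
    {
        queue_.push_back(std::move(f));
    }

    // run all queued work in FIFO order
    void drain()
    {
        while (!queue_.empty())
        {
            auto f = std::move(queue_.front());
            queue_.pop_front();
            f();
        }
    }
};
```

Posted work does not run until the executor drains its queue, while dispatched work can run immediately; that is the essential difference between the two operations.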
executor_ref: Type-Erased Executor
executor_ref wraps any executor in a type-erased container, allowing code to work with executors without knowing their concrete type:
void schedule_work(executor_ref ex, coro h)
{
ex.post(h); // Works with any executor
}
int main()
{
thread_pool pool;
executor_ref ex = pool.get_executor(); // Type erasure
schedule_work(ex, some_coroutine);
}
executor_ref stores a reference to the underlying executor—the original executor must outlive the executor_ref.
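To see why the lifetime requirement exists, here is a minimal sketch of reference-based type erasure in standard C++. The names (`any_executor_ref`, `manual_executor`) are illustrative, not Capy's implementation; the only assumption is that an executor has a `post()` member:

```cpp
#include <cassert>
#include <deque>
#include <functional>

using work = std::function<void()>;

// Minimal reference-based type erasure: a non-owning pointer
// plus a type-erased thunk that forwards to the real post().
class any_executor_ref
{
    void* ex_;                    // non-owning: the executor must outlive us
    void (*post_)(void*, work);   // thunk recovering the concrete type

public:
    template<class Executor>
    any_executor_ref(Executor& ex)
        : ex_(&ex)
        , post_([](void* p, work w)
            { static_cast<Executor*>(p)->post(std::move(w)); })
    {
    }

    void post(work w) { post_(ex_, std::move(w)); }
};

// Trivial executor that queues work for manual draining.
struct manual_executor
{
    std::deque<work> queue;
    void post(work w) { queue.push_back(std::move(w)); }
    void drain()
    {
        while (!queue.empty())
        {
            auto w = std::move(queue.front());
            queue.pop_front();
            w();
        }
    }
};
```

Because `any_executor_ref` stores only a pointer, destroying the underlying executor while the reference is still in use is undefined behavior; the same reasoning applies to `executor_ref`.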
thread_pool: Multi-Threaded Execution
thread_pool manages a pool of worker threads that execute coroutines concurrently:
#include <boost/capy/ex/thread_pool.hpp>
int main()
{
// Create pool with 4 threads
thread_pool pool(4);
// Get an executor for this pool
auto ex = pool.get_executor();
// Launch work on the pool
run_async(ex)(my_task());
// pool destructor waits for all work to complete
}
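The "destructor waits for all work" behavior can be sketched with a minimal pool in standard C++. This `mini_pool` is an illustration, not Capy's `thread_pool`: workers pull queued work, and the destructor drains the queue before joining:

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Minimal thread pool sketch: n workers share one FIFO queue.
class mini_pool
{
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> queue_;
    bool stop_ = false;
    std::vector<std::thread> workers_;

public:
    explicit mini_pool(unsigned n)
    {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this]
            {
                for (;;)
                {
                    std::function<void()> f;
                    {
                        std::unique_lock<std::mutex> lk(m_);
                        cv_.wait(lk, [&]{ return stop_ || !queue_.empty(); });
                        if (queue_.empty()) return; // stopped and drained
                        f = std::move(queue_.front());
                        queue_.pop_front();
                    }
                    f();
                }
            });
    }

    void post(std::function<void()> f)
    {
        { std::lock_guard<std::mutex> lk(m_); queue_.push_back(std::move(f)); }
        cv_.notify_one();
    }

    ~mini_pool()
    {
        // Workers only exit once the queue is empty, so all
        // posted work completes before the destructor returns.
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
};
```

Note that the workers' exit condition is "stop requested *and* queue empty", which is what makes the destructor a completion barrier rather than a hard shutdown.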
execution_context: Base Class
execution_context is the base class for execution contexts. It provides:
- Frame allocator access via `get_frame_allocator()`
- Service infrastructure for extensibility
Custom execution contexts inherit from execution_context:
class my_context : public execution_context
{
public:
// ... custom implementation
my_executor get_executor();
};
strand: Serialization Without Mutexes
A strand ensures that handlers are executed in order, with no two handlers executing concurrently. This eliminates the need for mutexes when all access to shared data goes through the strand.
#include <boost/capy/ex/strand.hpp>
class shared_resource
{
strand<thread_pool::executor_type> strand_;
int counter_ = 0;
public:
explicit shared_resource(thread_pool& pool)
: strand_(pool.get_executor())
{
}
task<int> increment()
{
// All increments are serialized through the strand
co_return co_await run(strand_)(do_increment());
}
private:
task<int> do_increment()
{
// No mutex needed—strand ensures exclusive access
++counter_;
co_return counter_;
}
};
How Strands Work
The strand maintains a queue of pending work. When work is dispatched:
- If no other work is executing on the strand, the new work runs immediately
- If other work is executing, the new work is queued
- When the current work completes, the next queued item runs
This provides logical single-threading without blocking physical threads.
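The queue-and-drain mechanism can be sketched in standard C++. This `mini_strand` is illustrative, not Capy's `strand`; it assumes only that the wrapped executor has a `post()` member, and shows how serialization falls out of the state machine above:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <mutex>

using work = std::function<void()>;

// Minimal strand sketch: work posted through the strand runs
// one item at a time, regardless of the underlying executor.
template<class Executor>
class mini_strand
{
    Executor& ex_;
    std::mutex m_;
    std::deque<work> queue_;
    bool running_ = false;

    void run_next()
    {
        ex_.post([this]
        {
            work w;
            {
                std::lock_guard<std::mutex> lk(m_);
                w = std::move(queue_.front());
                queue_.pop_front();
            }
            w();  // only one queued item executes at a time
            bool more;
            {
                std::lock_guard<std::mutex> lk(m_);
                more = !queue_.empty();
                if (!more) running_ = false;
            }
            if (more) run_next();  // chain to the next queued item
        });
    }

public:
    explicit mini_strand(Executor& ex) : ex_(ex) {}

    void post(work w)
    {
        bool start;
        {
            std::lock_guard<std::mutex> lk(m_);
            queue_.push_back(std::move(w));
            start = !running_;  // idle: this item starts the chain
            if (start) running_ = true;
        }
        if (start) run_next();
    }
};

// Trivial executor that runs work immediately, for demonstration.
struct inline_executor
{
    void post(work w) { w(); }
};
```

Work posted while another item is running is queued rather than executed, so even re-entrant posts observe strict ordering.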
Single-Threaded vs Multi-Threaded Patterns
Single-Threaded
For single-threaded applications, use a context with one thread:
thread_pool single_thread(1);
auto ex = single_thread.get_executor();
// All work runs on the single thread
Reference
| Header | Description |
|---|---|
|  | The `Executor` concept definition |
|  | Type-erased executor wrapper |
| `<boost/capy/ex/thread_pool.hpp>` | Multi-threaded execution context |
|  | Base class for execution contexts |
| `<boost/capy/ex/strand.hpp>` | Serialization primitive |
You have now learned about executors, execution contexts, thread pools, and strands. In the next section, you will learn about the IoAwaitable protocol that enables context propagation.