
Continuations

Continuations are the main primitive for assembling asynchronous task graphs. An instance of basic_future can have continuations, which allows us to create task chains.

Say we want to execute the following sequence of asynchronous tasks:

[Task graph: Main launches A; the asynchronous chain A --> B --> C follows; both C and Main converge at End]

Achieving that with the function then is as simple as:

cfuture<int> A = async([]() { return 65; });
cfuture<char> B = A.then([](int v) {
    return static_cast<char>(v);
});
cfuture<void> C = then(B, [](char c) { assert(c == 'A'); });
C.wait();

All instances of basic_future that support lazy continuations also have the member function basic_future::then. However, the free function then can attach a continuation to any future type, and the continuation it returns is itself another future value.

cfuture<int> f1 = async([]() -> int { return 42; });
cfuture<void> f2 = then(f1, [](int x) {
    // Another task in the executor
    assert(x == 42);
});
std::future<int> f3 = std::async([]() -> int { return 63; });
cfuture<void> f4 = then(f3, [](int x) {
    // Another task in the default executor
    assert(x == 63);
});

Continuation Function

Only futures with lazy continuations have a basic_future::then member function. For the general case, we should prefer the free function then.

The free function then allows us to create task graphs with future continuations regardless of underlying support for lazy continuations.

Better future adaptors

This library includes:

  • A large set of composition operations, such as when_all and when_any (a small composition sketch follows this list)
    • Easier composition of task graphs
    • Syntax closer to the existing future types users are used to
  • The future adaptors still work for existing future types
  • Adaptors are also provided to facilitate the creation of cyclic task graphs
  • Continuations are attached to old future types with a single polling future
  • Integrations with Asio are provided, such as completion tokens and async IO operations.
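
As a brief illustration of how these adaptors compose (a minimal sketch that only uses the when_all adaptor and the unwrapping continuations described later in this document):

auto fa = async([]() { return 1; });
auto fb = async([]() { return 2; });
auto fboth = when_all(fa, fb);
// continue with the combined results of both tasks
auto fsum = fboth >> [](int a, int b) { return a + b; };
assert(fsum.get() == 3);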

Lazy continuations

The function then changes its behavior according to the trait is_continuable defined for the previous future type. If the previous future type supports lazy continuations, the next task is attached to the previous task with basic_future::then.
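
For instance, the trait distinguishes this library's future types from std::future (a rough illustration; the exact spelling of the trait is assumed here to match the unqualified is_continuable named above):

// cfuture supports lazy continuations, while std::future does not
static_assert(is_continuable<cfuture<int>>::value);
static_assert(!is_continuable<std::future<int>>::value);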

If the next future type is deferred (the is_deferred trait), then no lazy continuations need to be involved. A new deferred task waits inline for the previous task to be ready. Only when the previous task is ready will the continuation task be launched to the executor.

auto f5 = schedule([]() -> int { return 63; });
auto f6 = then(f5, [](int x) { assert(x == 63); });

In both cases, there's no polling involved. Polling is only necessary for futures that (i) are eager, (ii) don't support lazy continuations, and (iii) are potentially not ready. The last criterion eliminates future types such as vfuture generated by make_ready_future.

In general, such future types should not be used when we require continuations. However, to enable generic algorithms, the function then also works for these future types and will automatically launch polling tasks to wait for their results.

std::future<int> f3 = std::async([]() -> int { return 63; });
cfuture<void> f4 = then(f3, [](int x) {
    // Another task in the default executor
    assert(x == 63);
});

Executors

It's important to note that continuations are never executed inline. Although common patterns used in JavaScript for callback functions are still possible, future continuations are always posted to the executor again. The default future objects returned by futures::async carry lightweight handles to their execution contexts, through which continuation tasks are launched by default. If a future object carries no executor, the default executor is used.

However, if the continuation should be launched with another executor, both the member function basic_future::then and the free function then support custom executors for the continuation task.

cfuture<int> f7 = async([] { return 2; });
futures::thread_pool pool(1);
auto ex = pool.get_executor();
cfuture<int> f8 = then(ex, f7, [](int v) { return v * 2; });

When an executor is provided, the continuation task is launched on that executor instead.
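
The member function accepts an executor in the same way (a brief sketch reusing the ex executor from the snippet above; the assumption here is that the executor is passed as the first argument to basic_future::then):

cfuture<int> fa = async([] { return 2; });
// launch the continuation on the thread pool executor
cfuture<int> fb = fa.then(ex, [](int v) { return v * 2; });
assert(fb.get() == 4);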

Operators

The operator >> is defined as a convenience for assembling large task graphs that include continuations.

cfuture<int> f9 = f8 >> [](int x) {
    return x * 2;
};
auto inline_executor = make_inline_executor();
auto f10 = f9 >> inline_executor % [](int x) {
    return x + 2;
};
assert(f10.get() == 10);

The types accepted by these operators are limited to those matching the future concept and callables that are valid as continuations to the future instance.
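
For instance, the left operand must satisfy the future concept (a minimal sketch; the unqualified trait name is_future used below is an assumption for illustration, not a documented API):

// only future types are accepted on the left-hand side of operator>>
static_assert(is_future<cfuture<int>>::value);
static_assert(!is_future<int>::value);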

Continuation unwrapping

By default, continuations attempt to receive the previous future object as their input. This allows the continuation to examine the state of the previous future object before deciding how to continue.

cfuture<void> f1 = async([]() { task_that_might_fail(); });
cfuture<void> f2 = then(f1, [](cfuture<void> f) {
    if (!f.get_exception_ptr()) {
        handle_success();
    } else {
        handle_error();
    }
});

However, continuations involve accessing the future object from the previous task. This means continuation chains where the previous task is derived from a number of container adaptors can easily become verbose and error-prone. For instance, consider a very simple continuation to a task that depends on 3 other future objects.

This is the task for which we need a continuation.

auto f1 = async([]() {
    return std::make_tuple(
        make_ready_future(1),
        make_ready_future(2.0),
        make_ready_future<std::string>("3"));
});

And this is how verbose the continuation looks without unwrapping:

cfuture<void> f2 = then(
    f1,
    [](cfuture<std::tuple<
           vfuture<int>,
           vfuture<double>,
           vfuture<std::string>>> f) {
    // retrieve futures
    auto t = f.get();
    vfuture<int> fa = std::move(std::get<0>(t));
    vfuture<double> fb = std::move(std::get<1>(t));
    vfuture<std::string> fc = std::move(std::get<2>(t));
    // get their values
    int a = fa.get();
    double b = fb.get();
    std::string c = fc.get();
    // use values
    assert(a == 1);
    assert(b == 2.0);
    assert(c == "3");
    });

Although this pattern could be slightly simplified with more recent C++ features, such as structured bindings, it quickly becomes unmaintainable. To simplify this process, the function then accepts continuations that expect the unwrapped result from the previous task.

For instance, consider the following continuation function:

cfuture<void> f1 = async([]() {
    // Task
    long_task();
});
cfuture<int> f2 = f1 >> []() { return 6; };
assert(f2.get() == 6);

The continuation function requires no parameters. This means it only needs the previous future to be ready before it is executed, but it does not need to access the previous future object, so a parameter of the previous future type would be of no use here. This also removes the need to mark an unused future parameter with attributes such as [[maybe_unused]].

Exceptions

If the previous task fails and its exception would be lost in the unwrapping process, the exception is automatically propagated to the following task.

cfuture<void> f1 = async([]() { task_that_might_fail(); });

cfuture<int> f2 = f1 >> []() {
    return 6;
};

if (!f2.get_exception_ptr()) {
    handle_success_vals(f2.get());
} else {
    handle_error();
}

With future adaptors, the exception information is still propagated to the continuation future even when the underlying future objects are unwrapped. Thus, continuations without unwrapping are only necessary when (i) the unwrapped version would lose the relevant exception information, and (ii) we need the continuation to behave differently. This typically happens when the continuation task contains some logic allowing us to recover from the error.
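
For instance, a continuation that receives the previous future directly can inspect it and recover with a fallback value (a minimal sketch; task_that_might_fail is the same placeholder used in the examples above):

cfuture<int> fa = async([]() -> int {
    task_that_might_fail(); // may throw
    return 1;
});
cfuture<int> fb = then(fa, [](cfuture<int> f) {
    if (!f.get_exception_ptr()) {
        return f.get(); // success: forward the value
    }
    return -1;          // failure: recover with a fallback value
});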

Value unwrapping

The simplest form of unwrapping is sending the internal future value directly to the continuation function.

auto f1 = async([]() { return 6; });
auto f2 = f1 >> [](int x) {
    return x * 2;
};
assert(f2.get() == 12);

This allows the continuation function to worry only about the internal value type int instead of the complete future type cfuture<int>. This also makes the algorithm easier to generalize for alternative future types.

If the previous future also contains a future, we can double unwrap the value to the next task:

auto f1 = async([]() { return make_ready_future(6); });
auto f2 = f1 >> [](int x) {
    return x * 2;
};
assert(f2.get() == 12);

Tuple unwrapping

Tuple unwrapping becomes useful as a simplified way for futures to return multiple values to their continuations.

auto f1 = async([]() { return make_ready_future(6); });
auto f2 = f1 >> [](int x) {
    return std::make_tuple(x * 1, x * 2, x * 3);
};
cfuture<int> f3 = f2 >> [](int a, int b, int c) {
    return a * b * c;
};
assert(f3.get() == 6 * 1 * 6 * 2 * 6 * 3);

The tuple components are also double unwrapped if necessary:

auto f1 = async([]() { return make_ready_future(6); });
auto f2 = f1 >> [](int x) {
    return std::make_tuple(
        make_ready_future(1 * x),
        make_ready_future(2 * x),
        make_ready_future(3 * x));
};
auto f3 = f2 >> [](int a, int b, int c) {
    return a + b + c;
};
assert(f3.get() == 1 * 6 + 2 * 6 + 3 * 6);

In this case, without unwrapping, the continuation would require a cfuture<std::tuple<vfuture<int>, vfuture<int>, vfuture<int>>> as its first parameter.

Unwrapping conjunctions

Double tuple unwrapping is one of the most useful types of future unwrapping for continuation functions of conjunctions. When we wait for a conjunction of futures, the return value is represented as a tuple of all future objects that got ready. Double tuple unwrapping allows us to handle the results in a pattern that is more manageable:

auto f1 = async([]() { return 1; });
auto f2 = async([]() { return 2; });
auto f3 = async([]() { return 3; });
auto f4 = async([]() { return 4; });
auto f5 = when_all(f1, f2, f3, f4);
auto f6 = f5 >> [](int a, int b, int c, int d) {
    return a + b + c + d;
};
assert(f6.get() == 1 + 2 + 3 + 4);

This allows the continuation function to worry only about the internal value types instead of the tuple of future objects produced by when_all. This also makes the algorithm easier to generalize for alternative future types.

Unwrapping disjunctions

Future disjunctions are represented with instances of when_any_result. Special unwrapping functions are defined for these objects. The simplest form of unwrapping for disjunctions receives the index of the ready future and the previous sequence of future objects.

cfuture<int> f1 = async([]() { return 1; });
cfuture<int> f2 = async([]() { return 2; });
when_any_future<std::tuple<cfuture<int>, cfuture<int>>>
    f3 = when_any(f1, f2);
auto f4 = f3 >>
          [](std::size_t idx,
             std::tuple<cfuture<int>, cfuture<int>> prev) {
    if (idx == 0) {
        return std::get<0>(prev).get();
    } else {
        return std::get<1>(prev).get();
    }
};
int r = f4.get();
assert(r == 1 || r == 2);

This can still be as verbose as wrapped tuples. However, we might still want access to each individual future, as only one of them is guaranteed to be ready when the continuation starts. So the second option is exploding the tuple of futures into the continuation parameters.

auto f1 = async([]() { return 1; });
auto f2 = async([]() { return 2; });
auto f3 = when_any(f1, f2);
auto f4 = f3 >>
          [](std::size_t idx, cfuture<int> f1, cfuture<int> f2) {
    if (idx == 0) {
        return f1.get();
    } else {
        return f2.get();
    }
};
int r = f4.get();
assert(r >= 1 && r <= 2);

This pattern allows us to do something with f2 when f1 is ready and vice versa. It implies we want to continue as soon as any result is available, but the results from unfinished tasks should not be discarded.

Very often, only one of the objects is really necessary and the meaning of what they store is homogeneous. For instance, this is the case when we attempt to connect to a number of servers and want to continue with whichever server replies first. In this case, unfinished futures can be discarded, and we only need the finished task to continue.

auto f1 = async([]() { return 1; });
auto f2 = async([]() { return 2; });
auto f3 = when_any(f1, f2);
auto f4 = f3 >> [](cfuture<int> f) {
    return f.get();
};
int r = f4.get();
assert(r >= 1 && r <= 2);

If the previous futures are stoppable, the adaptor will request the unfinished tasks to stop. If the tasks are homogeneous, we can also unwrap the underlying value of the finished task.

auto f1 = async([]() { return 1; });
auto f2 = async([]() { return 2; });
auto f3 = when_any(f1, f2);
auto f4 = f3 >> [](int v) {
    return v * 2;
};
int r = f4.get();
assert(r == 2 || r == 4);

Summary

The following table describes all unwrapping functions by their priority:

Future output | Continuation input | Inputs
future<R> | future<R> | 1
future<R> | (no parameters) | 0
future<R> | R | 1
future<tuple<future<T1>, future<T2>, ...>> | future<T1>, future<T2>, ... | N
future<tuple<future<T1>, future<T2>, ...>> | T1, T2, ... | N
future<vector<future<R>>> | vector<R> | 1
future<when_any_result<tuple<future<T1>, future<T2>, ...>>> | size_t, tuple<future<T1>, future<T2>, ...> | 2
future<when_any_result<tuple<future<T1>, future<T2>, ...>>> | size_t, future<T1>, future<T2>, ... | N + 1
future<when_any_result<tuple<future<R>, future<R>, ...>>> | future<R> | 1
future<when_any_result<vector<future<R>>>> | future<R> | 1
future<when_any_result<tuple<future<R>, future<R>, ...>>> | R | 1
future<when_any_result<vector<future<R>>>> | R | 1

Note that types are very important here. Whenever two unwrapping alternatives for the same future output accept the same number of arguments, a template function or a lambda using auto parameters would be ambiguous.

cfuture<int> f1 = async([]() { return 1; });
auto f2 = f1 >> [](auto f) -> decltype(f.get()) {
    // Is `f` a `cfuture<int>` or `int`?
    // `cfuture<int>` has highest priority
    return f.get();
};
assert(f2.get() == 1);

In this case, the continuation function will attempt to use the unwrapping with the highest priority, which would be cfuture<int>. However, this is not always possible when the unwrapping overloads remain ambiguous.

The continuation with the highest priority is always the safest and usually the most verbose one. This means a template continuation will usually unwrap to the future<R> variant rather than the R variant. On the other hand, this is also useful, since the most verbose continuation patterns are the ones that benefit the most from auto.
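
If a lower-priority unwrapping is intended, the parameter type can be spelled out explicitly instead of using auto (a short sketch based on the value-unwrapping rules above):

cfuture<int> fa = async([]() { return 1; });
// spelling out `int` selects the value unwrapping instead of cfuture<int>
auto fb = fa >> [](int v) { return v + 1; };
assert(fb.get() == 2);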

Return type unwrapping

Futures are allowed to hold other futures:

cfuture<cfuture<int>> f = async([]() {
    return async([]() { return 1; });
});
assert(f.get().get() == 1);

In this example, we can choose to wait for the value of the first future or the value of the future it encapsulates.

Unlike the function std::experimental::future::then in C++ Extensions for Concurrency, this library does not automatically unwrap a continuation return type from future<future<int>> to future<int>. There are two reasons for that: not unwrapping the return type (i) facilitates generic algorithms that operate on futures, and (ii) avoids potentially blocking the executor with two tasks to execute the unwrapping.

However, other algorithms based on the function then can still perform return type unwrapping.
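
If return type unwrapping is desired, it can also be expressed explicitly with a continuation (a minimal sketch using the unwrapping rules described earlier; waiting for the inner future happens inside the continuation task):

cfuture<cfuture<int>> f = async([]() {
    return async([]() { return 1; });
});
// the continuation receives the inner future and waits for it
cfuture<int> unwrapped = then(f, [](cfuture<int> inner) {
    return inner.get();
});
assert(unwrapped.get() == 1);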

Continuation stop

When a non-shared future has a continuation attached, its value is moved into the continuation. With stoppable futures, this means the stop_source is also moved into the continuation. If we still want to request the corresponding task to stop after the future has been moved, we can retrieve its stop_source before attaching the continuation.

auto f1 = async([](stop_token st) {
    while (!st.stop_requested()) {
        some_task();
    }
});
auto ss = f1.get_stop_source();
auto f2 = f1 >> []() {
    // f1 done
    handle_success();
};
// f1.request_stop() won't work anymore
ss.request_stop();
f2.get();