A standard executor represents a policy for how, when, and where a piece of code should be executed.
The standard library does not include executors for its parallel algorithms.
Boost.Asio provides a complete implementation of the proposed standard executors.
Creating an execution context, such as a thread pool
Execution context: place where we can execute functions
A thread pool is an execution context.
An execution context:
Is usually long-lived.
Is non-copyable.
May contain additional state, such as timers and threads.
Creating an executor from an execution context:
Executor: set of rules governing where, when and how to run a function object
A thread pool has executors that send tasks to it.
Its executor rule is: Run function objects in the pool and nowhere else.
An executor:
May be long- or short-lived.
Is lightweight and copyable.
May be customized on a fine-grained basis, such as exception behavior
and ordering guarantees.
// Executing directly in the thread pool
// Execution behaviour according to eagerness:
// - https://github.com/chriskohlhoff/executors
// - Dispatch: Run the function object immediately if possible.
//   Most eager operation.
//   Might run before dispatch returns.
//   If inside pool, run immediately.
//   If outside pool, add to queue.
asio::dispatch(ex, [&ex] {
    // This runs before finishing the function
    asio::dispatch(ex, [] { std::cout << "dispatch b" << '\n'; });
    std::cout << "dispatch a" << '\n';
});
// - Post: Submit the function for later execution.
//   Never immediately in the same thread.
//   Always adds to pool queue.
//   Never blocking.
asio::post(ex, [&ex] {
    // This will all run in parallel
    asio::post(ex, [] { std::cout << "post b" << '\n'; });
    asio::post(ex, [] { std::cout << "post c" << '\n'; });
    std::cout << "post a" << '\n';
});
// - Defer: Submit the function for later execution.
//   Least eager.
//   Implies a relationship between the calling thread and the function.
//   Used when the function is a continuation of the calling function.
//   The function is added to the queue after the current function
//   ends. If inside pool, adds to a thread-local queue. If outside
//   pool, add to queue. The posting thread might immediately run it.
//   Potentially blocking.
asio::defer(ex, [&ex] {
    // This will all run only when this function is over
    asio::defer(ex, [] { std::cout << "defer b" << '\n'; });
    std::cout << "defer a" << '\n';
});
// A strand is an executor and an executor adapter.
// Its rule is: Run function objects according to the underlying
// executor's rules, but also run them in FIFO order and not
// concurrently.
asio::strand<asio::thread_pool::executor_type> st(ex);
std::promise<int> p;
std::future<int> f = p.get_future();
auto fn = [&p]() {
    std::cout << "Task 2 executes asynchronously" << '\n';
    // "return" 2 by setting the promise value
    p.set_value(2);
};
asio::post(fn);
std::cout << "f.get(): " << f.get() << '\n';
auto task1 = asio::post(asio::use_future([] {
    std::cout << "Task 1 executes asynchronously" << '\n';
}));
auto task2 = asio::post(asio::use_future([]() {
    std::cout << "Task 2 executes in parallel with task 1" << '\n';
    return 42;
}));
// something like task3 = task2.then([](int task2_output){...});
auto task3 = asio::post(asio::use_future([&]() {
    // block until task2 produces its result
    int task2_output = task2.get();
    std::cout << "Task 3 executes after task 2, which returned "
              << task2_output << '\n';
    return task2_output * 3;
}));
// something like task4 = when_all(task1, task3);
auto task4 = asio::post(asio::use_future([&]() {
    task1.wait();
    auto task3_output = task3.get();
    return task3_output;
}));
// something like task5 = task4.then([](std::tuple<void, int>))
auto task5 = asio::post(asio::use_future([&]() {
    auto task4_output = task4.get();
    std::cout << "Task 5 executes after tasks 1 and 3. Task 3 returned "
              << task4_output << "." << '\n';
}));
task5.get();
std::cout << "Task 5 has completed" << '\n';
for (int i = 0; i < 20; ++i) {
    asio::post(ex, [i] {
        std::cout << "Thread " << i << " going to sleep" << '\n';
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "Thread " << i << " awake" << '\n';
    });
}
template <class FN, class Iterator>
std::future<typename Iterator::value_type>
parallel_reduce(auto ex, Iterator begin, Iterator end, FN fn) {
    auto second = std::next(begin);
    const bool is_single_element = second == end;
    const bool is_single_pair = !is_single_element && std::next(second) == end;
    if (is_single_element) {
        return make_ready_future(*begin);
    } else if (is_single_pair) {
        // capture fn by value: the posted task may outlive this stack frame
        return asio::post(ex, asio::use_future([begin, second, fn] {
            return fn(*begin, *second);
        }));
    } else {
        // we would probably add a heuristic here for small ranges
        size_t n = std::distance(begin, end);
        auto half = std::next(begin, n / 2);
        auto lhs = parallel_reduce(ex, begin, half, fn);
        auto rhs = parallel_reduce(ex, half, end, fn);
        return make_ready_future(lhs.get() + rhs.get());
    }
}