| Member Function | Description | Parameters | Code Sample |
|---|---|---|---|
| explicit ThreadPool(size_type thr_num = std::thread::hardware_concurrency(), Conditioner pre_conditioner = Conditioner { }, Conditioner post_conditioner = Conditioner { });<br/>ThreadPool(const ThreadPool &) = delete;<br/>ThreadPool &operator = (const ThreadPool &) = delete; | The Conditioner struct represents (is a wrapper around) a single executable function; see ThreadPool.h. It is used in ThreadPool to specify user-custom per-thread initializers and cleanups. ThreadPool has only one constructor, with all parameters defaulted. Conditioners are a handy interface when threads need to be initialized before doing anything, and/or need a cleanup before exiting -- for example, the Windows CoInitializeEx function in the COM library. See conditioner_test() in the thrpool_tester.cc file for a code sample | thr_num: Number of initial threads -- defaulted to the number of cores in the computer<br/>pre_conditioner: A function that will execute once at the start of each thread in the pool<br/>post_conditioner: A function that will execute once at the end of each thread in the pool | Code Sample (sketch below) |
| template<typename F, typename ... As> dispatch_res_t<F, As ...> dispatch(bool immediately, F &&routine, As && ... args); | It dispatches a task into the thread-pool queue. If a thread is available, the task runs; if no thread is available, the task is added to a queue until a thread becomes available. The dispatch interface is identical to std::async() with one exception: the leading immediately flag. dispatch() returns a std::future of your callable's return type | immediately: A boolean flag. If true and no thread is available, a thread will be added to the pool and the task runs immediately<br/>routine: A reference to a callable<br/>args ...: A variadic list of parameters matching your callable's parameter list | Code Sample (sketch below) |
| template<typename F, typename I, typename ... As> loop_res_t<F, I, As ...> parallel_loop(I begin, I end, F &&routine, As && ... args); | It parallelizes a big class of problems very conveniently. It divides the data elements between begin and end into n blocks, where n is the number of capacity threads, and dispatches the n tasks. parallel_loop() returns a std::vector of std::future corresponding to those n tasks | begin: An iterator marking the beginning of the sequence. It can be either a proper iterator or an integral-type index<br/>end: An iterator marking the end of the sequence. It can be either a proper iterator or an integral-type index<br/>routine: A reference to a callable<br/>args ...: A variadic list of parameters matching your callable's parameter list | Code Sample (sketch below) |
| template<std::random_access_iterator I, std::size_t TH = 5000L> void parallel_sort(I begin, I end); | It uses a parallel version of 3-way quick sort. This version calls the quick sort with std::less as the comparison functor | begin: An iterator marking the beginning of the sequence<br/>end: An iterator marking the end of the sequence<br/>TH (template param): A threshold value below which a serial sort will be used, defaulted to 5,000 | Code Sample (sketch below) |
| template<std::random_access_iterator I, typename P, std::size_t TH = 5000L> void parallel_sort(I begin, I end, P compare); | It uses a parallel version of 3-way quick sort. This version requires a comparison functor | begin: An iterator marking the beginning of the sequence<br/>end: An iterator marking the end of the sequence<br/>compare: Comparison functor<br/>TH (template param): A threshold value below which a serial sort will be used, defaulted to 5,000 | Code Sample (sketch below) |
| void attach(thread_type &&this_thr); | It attaches the current thread to the pool so that it may be used for executing submitted tasks. It blocks the calling thread until the pool is shut down or the thread times out. This is handy if you already have thread(s) and want to repurpose them | this_thr: An rvalue reference to the std::thread instance of the calling thread | Code Sample (sketch below) |
| bool run_task(); | If the pool is not shut down and there is a pending task in the queue, it runs it synchronously on the calling thread. It returns true if a task was executed, otherwise false. The return value of the task can be obtained from the original std::future object obtained when it was dispatched. NOTE: A false return from run_task() does not necessarily mean there were no tasks in the thread-pool queue. It might be that run_task() just encountered one of the thread pool's internal maintenance tasks, which it ignored, returning false | | Code Sample (sketch below) |
| bool add_thread(size_type thr_num); | At any point you can add threads to, or subtract threads from, the pool | thr_num: Number of threads to be added or subtracted. It can be either a positive or a negative number | Code Sample (sketch below) |
| size_type available_threads() const; | At any point you can query the ThreadPool for the number of currently available (idle) threads | | Code Sample (sketch below) |
| size_type capacity_threads() const; | At any point you can query the ThreadPool for the number of capacity threads, i.e. the total number of threads in the pool | | Code Sample (sketch below) |
| size_type pending_tasks() const; | At any point you can query the ThreadPool for the number of tasks currently waiting in the global queue | | Code Sample (sketch below) |
| bool shutdown(); | At any point you can call shutdown() to signal the ThreadPool to terminate all threads after they are done running tasks. After shutdown, you cannot dispatch tasks or add threads anymore -- an exception will be thrown | | Code Sample (sketch below) |
| bool is_shutdown() const; | It returns true if the pool is shut down, otherwise false | | Code Sample (sketch below) |
| ~ThreadPool(); | It calls shutdown() and waits until all threads are done running tasks | | |
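
A minimal sketch of constructing a pool with per-thread conditioners. The `<ThreadPool/ThreadPool.h>` include path, the `hmthrp` namespace, and the assumption that Conditioner is constructible from a plain void() callable are not stated in the table above and should be verified against ThreadPool.h; conditioner_test() in thrpool_tester.cc remains the authoritative example.

```cpp
#include <ThreadPool/ThreadPool.h>  // assumed install path

#include <cstdio>

using namespace hmthrp;  // assumed namespace; check ThreadPool.h

int main()  {

    // Runs once at the start of every pool thread
    // (e.g. a place to call CoInitializeEx on Windows)
    Conditioner pre { []() -> void { std::printf("thread starting\n"); } };
    // Runs once right before every pool thread exits
    Conditioner post { []() -> void { std::printf("thread exiting\n"); } };

    // 5 initial threads, each wrapped by the two conditioners
    ThreadPool  pool { 5, pre, post };

    return 0;  // ~ThreadPool() shuts the pool down and joins the threads
}
```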
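
A sketch of dispatch(), under the same header/namespace assumptions:

```cpp
#include <ThreadPool/ThreadPool.h>  // assumed install path

#include <iostream>

using namespace hmthrp;  // assumed namespace

int main()  {

    ThreadPool  pool { 4 };

    // The future's value type follows the callable's return type (int here)
    auto  fut = pool.dispatch(false,
                              [](int a, int b) -> int { return a + b; },
                              20, 22);

    // With immediately == true, a new thread would be added if none were free
    std::cout << fut.get() << std::endl;  // 42
    return 0;
}
```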
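
A sketch of parallel_loop() summing a vector. In addition to the header/namespace assumptions above, it assumes each dispatched task is handed its block's begin and end, which is not spelled out in the table:

```cpp
#include <ThreadPool/ThreadPool.h>  // assumed install path

#include <iostream>
#include <numeric>
#include <vector>

using namespace hmthrp;  // assumed namespace

int main()  {

    ThreadPool        pool { 4 };
    std::vector<int>  data (1'000'000, 1);

    // One task per capacity thread; each task is assumed to get its block's range
    auto  futures =
        pool.parallel_loop(data.begin(), data.end(),
                           [](auto begin, auto end) -> long  {
                               return std::accumulate(begin, end, 0L);
                           });

    long  total { 0 };
    for (auto &fut : futures)  // collect the per-block partial sums
        total += fut.get();
    std::cout << total << std::endl;  // 1000000
    return 0;
}
```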
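
A sketch covering both parallel_sort() overloads, under the same assumptions:

```cpp
#include <ThreadPool/ThreadPool.h>  // assumed install path

#include <functional>
#include <random>
#include <vector>

using namespace hmthrp;  // assumed namespace

int main()  {

    ThreadPool        pool { 4 };
    std::vector<int>  data (100'000);

    std::mt19937                        gen { 123 };
    std::uniform_int_distribution<int>  dist { 0, 1'000'000 };

    for (auto &val : data)  val = dist(gen);

    pool.parallel_sort(data.begin(), data.end());                         // ascending, std::less
    pool.parallel_sort(data.begin(), data.end(), std::greater<int> { });  // descending, explicit comparator
    return 0;
}
```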
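
A sketch of attach(). Handing a thread its own std::thread handle takes some care, so this example passes the handle through a std::promise; that pattern is only illustrative, and the header/namespace remain assumptions:

```cpp
#include <ThreadPool/ThreadPool.h>  // assumed install path

#include <future>
#include <thread>
#include <utility>

using namespace hmthrp;  // assumed namespace

int main()  {

    ThreadPool                 pool { 2 };
    std::promise<std::thread>  handle_promise;

    // The thread receives its own std::thread handle through the future and
    // lends itself to the pool. attach() blocks in there, serving pool tasks,
    // until the pool is shut down or the thread times out.
    std::thread  outside_thr ([&pool, fut = handle_promise.get_future()]() mutable -> void  {
        pool.attach(fut.get());
    });

    // Hand the handle over; after this, outside_thr is empty and the pool
    // owns (and eventually joins) the attached thread
    handle_promise.set_value(std::move(outside_thr));

    pool.dispatch(false, []() -> void { /* some work */ });
    return 0;  // ~ThreadPool() shuts down and waits for all threads
}
```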
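
A sketch of run_task(), letting the calling thread help drain the queue (same assumptions as above):

```cpp
#include <ThreadPool/ThreadPool.h>  // assumed install path

#include <future>
#include <iostream>
#include <vector>

using namespace hmthrp;  // assumed namespace

int main()  {

    ThreadPool                     pool { 1 };
    std::vector<std::future<int>>  futs;

    for (int i = 0; i < 8; ++i)
        futs.push_back(pool.dispatch(false, [](int x) -> int { return x * x; }, i));

    // A false return does not necessarily mean the queue is empty -- it may
    // just have skipped an internal maintenance task -- so pending_tasks()
    // drives the loop
    while (pool.pending_tasks() > 0)
        pool.run_task();

    for (auto &fut : futs)  // results still come from the original futures
        std::cout << fut.get() << ' ';
    std::cout << std::endl;
    return 0;
}
```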
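
A sketch of add_thread() together with the three query members, under the same assumptions:

```cpp
#include <ThreadPool/ThreadPool.h>  // assumed install path

#include <iostream>

using namespace hmthrp;  // assumed namespace

int main()  {

    ThreadPool  pool { 4 };

    pool.add_thread(4);    // grow the pool by 4 threads
    pool.add_thread(-2);   // a negative count shrinks it, per the table above

    std::cout << "capacity:  " << pool.capacity_threads()  << '\n'   // threads in the pool
              << "available: " << pool.available_threads() << '\n'   // threads currently idle
              << "pending:   " << pool.pending_tasks()     << '\n';  // tasks waiting in the global queue
    return 0;
}
```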
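
A sketch of shutdown() and is_shutdown(), under the same assumptions; the exact exception type thrown after shutdown is not documented above, so the catch is deliberately generic:

```cpp
#include <ThreadPool/ThreadPool.h>  // assumed install path

#include <iostream>

using namespace hmthrp;  // assumed namespace

int main()  {

    ThreadPool  pool { 4 };

    auto  fut = pool.dispatch(false, []() -> int { return 123; });

    std::cout << fut.get() << '\n';                              // 123
    pool.shutdown();                                             // threads exit once done with their tasks
    std::cout << std::boolalpha << pool.is_shutdown() << '\n';   // true

    // Per the table above, dispatching (or adding threads) after shutdown throws
    try  {
        pool.dispatch(false, []() -> void { });
    }
    catch (...)  {
        std::cout << "dispatch after shutdown threw, as documented\n";
    }
    return 0;
}
```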