parsl.concurrent.ParslPoolExecutor
- class parsl.concurrent.ParslPoolExecutor(config: Config | None = None, dfk: DataFlowKernel | None = None, executors: Literal['all'] | list[str] = 'all')[source]
An executor that uses a pool of workers managed by Parsl
Works just like a ProcessPoolExecutor, except that tasks are distributed across workers that can be on different machines.

Create a new executor using one of two methods:
- Supplying a Parsl Config that defines how to create new workers. The executor will start a new Parsl Data Flow Kernel (DFK) when it is entered as a context manager.
- Supplying an already-started Parsl DataFlowKernel (DFK). The executor assumes you will start and stop the Parsl DFK outside the Executor.
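As a sketch of the two construction paths (the single-executor Config below is a placeholder assumption; any valid Parsl configuration works):

```python
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.concurrent import ParslPoolExecutor

# A minimal local configuration (placeholder; tune for your machine or cluster).
my_config = Config(executors=[HighThroughputExecutor()])

# Method 1: hand the executor a Config. Entering the context manager
# starts a fresh DFK; exiting shuts it down.
with ParslPoolExecutor(config=my_config) as pool:
    future = pool.submit(sum, [1, 2, 3])
    print(future.result())

# Method 2: reuse a DFK you started yourself; you are then responsible
# for starting and stopping it outside the executor.
# import parsl
# dfk = parsl.load(my_config)
# pool = ParslPoolExecutor(dfk=dfk)
```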
The futures returned by submit() and map() are Parsl futures and will work with the same function-chaining mechanisms as when using Parsl with decorators.

```python
def f(x):
    return x + 1

@python_app
def parity(x):
    return 'odd' if x % 2 == 1 else 'even'

with ParslPoolExecutor(config=my_parsl_config) as executor:
    future_1 = executor.submit(f, 1)
    assert parity(future_1).result() == 'even'  # Function chaining, as expected
    future_2 = executor.submit(f, future_1)
    assert future_2.result() == 3  # Chaining works with `submit` too
```
Parsl does not support canceling tasks. The map() method does not cancel work when one member of the run fails or a timeout is reached, and shutdown() does not cancel work on completion.

- __init__(config: Config | None = None, dfk: DataFlowKernel | None = None, executors: Literal['all'] | list[str] = 'all')[source]
Create the executor
- Parameters:
config – Configuration for the Parsl Data Flow Kernel (DFK)
dfk – DataFlowKernel of an already-started Parsl instance
executors – List of executors to use for supplied functions
Methods
- __init__([config, dfk, executors]): Create the executor.
- get_app(fn): Create a PythonApp for a function.
- map(fn, *iterables[, timeout, chunksize]): Returns an iterator equivalent to map(fn, iter).
- shutdown([wait, cancel_futures]): Clean-up the resources associated with the Executor.
- submit(fn, *args, **kwargs): Submits a callable to be executed with the given arguments.
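Since ParslPoolExecutor mirrors the stdlib concurrent.futures.Executor interface, submit() behaves like its stdlib counterpart; a sketch with ThreadPoolExecutor standing in:

```python
from concurrent.futures import ThreadPoolExecutor

# submit() returns a Future immediately; result() blocks until the value
# is ready. With ParslPoolExecutor the returned object is a Parsl future,
# but the Future interface is the same.
with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(divmod, 7, 3)
    quotient, remainder = future.result()

print(quotient, remainder)  # 2 1
```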
Attributes
Number of functions currently registered with the executor
- get_app(fn: Callable) PythonApp[source]
Create a PythonApp for a function
- Parameters:
fn – Function to be turned into a Parsl app
- Returns:
PythonApp version of that function
- map(fn: Callable, *iterables: Iterable, timeout: float | None = None, chunksize: int = 1) Iterator[source]
Returns an iterator equivalent to map(fn, iter).
- Parameters:
fn – A callable that will take as many arguments as there are passed iterables.
timeout – The maximum number of seconds to wait. If None, then there is no limit on the wait time.
chunksize – If greater than one, the iterables will be chopped into chunks of size chunksize and submitted to the process pool. If set to one, the items in the list will be sent one at a time.
- Returns:
An iterator equivalent to map(func, *iterables) but the calls may be evaluated out-of-order.
- Raises:
TimeoutError – If the entire result iterator could not be generated before the given timeout.
Exception – If fn(*args) raises for any values.
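Because Parsl cannot cancel tasks, a TimeoutError from the result iterator leaves the remaining work running. The timeout semantics match the stdlib executors; a sketch using ThreadPoolExecutor as a stand-in:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

# The timeout applies to the whole result iterator, measured from the
# original call to map(). Hitting it raises TimeoutError but does not
# cancel the tasks that were already submitted.
with ThreadPoolExecutor(max_workers=1) as pool:
    results = pool.map(time.sleep, [0.5, 0.5], timeout=0.1)
    try:
        list(results)
        timed_out = False
    except FuturesTimeout:
        timed_out = True

print(timed_out)  # True
```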
- shutdown(wait: bool = True, *, cancel_futures: bool = False) None[source]
Clean-up the resources associated with the Executor.
It is safe to call this method several times. Otherwise, no other methods can be called after this one.
- Parameters:
wait – If True then shutdown will not return until all running futures have finished executing and the resources used by the executor have been reclaimed.
cancel_futures – If True then shutdown will cancel all pending futures. Futures that are completed or running will not be cancelled.
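A stdlib sketch of the shutdown contract, with ThreadPoolExecutor standing in (note that with ParslPoolExecutor, cancel_futures cannot actually stop Parsl tasks, per the class note above):

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)
future = pool.submit(pow, 2, 5)

pool.shutdown(wait=True)   # Blocks until running work finishes.
pool.shutdown()            # Safe to call several times.

value = future.result()
try:
    pool.submit(abs, -1)   # New submissions after shutdown are rejected.
    rejected = False
except RuntimeError:
    rejected = True

print(value, rejected)  # 32 True
```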