Welcome to mpilock's documentation!
mpilock offers a mpilock.WindowController class with a high-level API for parallel access to resources.

Read operations happen in parallel, while a write operation locks the resource: no new read or write operations may start, and the write waits for all ongoing read operations to finish. After the write operation completes, the lock is released and other operations can resume.
mpilock.WindowController does not contain any logic to control the resources themselves; it only locks and synchronizes the MPI processes. Once permission for an operation is obtained, it is up to the user to perform the actual reading or writing of the resources.
The sync() function is a factory for WindowController instances and simplifies their creation:
from mpilock import sync
from h5py import File

# Create a default WindowController on `COMM_WORLD` with the master on rank 0
ctrl = sync()

# Fencing is the preferred idiom to fence anyone that isn't writing out of
# the writer's code block, and afterwards share a resource
with ctrl.single_write() as fence:
    # Makes anyone without access long jump to the end of the with statement
    fence.guard()
    resource = File("hello.world", "w")
    # Put a resource to be collected by other processes
    fence.share(resource)

resource = fence.collect()

try:
    # Acquire a parallel read lock; guarantees no one writes while you're reading.
    with ctrl.read():
        data = resource["/my_data"][()]
    # Acquire a write lock; will block all reading and writing.
    with ctrl.write():
        resource.create_dataset(str(ctrl.rank), data=data)
finally:
    with ctrl.single_write() as fence:
        fence.guard()
        resource.close()

# The window controller itself needs to be closed as well (is done atexit)
ctrl.close()
Fence(master, access, comm)
Can be used to fence off pieces of code from processes that shouldn't access them. Additionally, a resource created within the fenced-off code block can be shared with all processes using share().
collect()
Collect the object that was put up for sharing within the fenced-off code block.
guard()
Kicks out all MPI processes that do not have access to the fenced-off code block. Works only within a with statement, or a try statement that catches the exception it raises.
share()
Put an object to share with all other MPI processes from within a fenced-off code block.
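The guard/share/collect control flow can be pictured with a toy single-process stand-in. The `_ToyFence` class below is made up for illustration and is not mpilock's implementation: a real Fence coordinates over MPI, and share()/collect() move the object between processes rather than storing it locally.

```python
class _ToyFence:
    """Single-process stand-in for the documented Fence semantics."""

    class _Jump(Exception):
        """Raised by guard() to long-jump fenced-out processes past the block."""

    def __init__(self, has_access):
        self._has_access = has_access
        self._shared = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Swallow the long jump so execution resumes after the with block.
        return exc_type is self._Jump

    def guard(self):
        # Processes without access jump to the end of the with statement.
        if not self._has_access:
            raise self._Jump()

    def share(self, obj):
        # Put an object up for sharing; mpilock would distribute it over MPI.
        self._shared = obj

    def collect(self):
        # Collect the object that was shared inside the fenced-off block.
        return self._shared


# The "writer" passes the guard, runs the block, and shares a resource:
writer = _ToyFence(has_access=True)
with writer:
    writer.guard()
    writer.share("resource")

# A fenced-out process skips the rest of the block entirely:
reader = _ToyFence(has_access=False)
ran = False
with reader:
    reader.guard()
    ran = True  # never reached without access
```

The `__exit__` swallowing of the private exception is what makes `fence.guard()` behave like a long jump to the end of the with statement.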
WindowController manages the state of the MPI windows underlying the lock functionality. Instances can be created using the sync() factory function.
The controller can create read and write locks, during which your MPI processes are aware of each other's operations. A write lock is never granted while other read or write operations are ongoing. Read locks may be granted while other read operations are ongoing, but not while a write lock is held or being requested.
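This policy behaves like a writer-preferring readers-writer lock. As a rough single-machine sketch of the same rules, here is a minimal version using Python threading; the `WriterPriorityRWLock` class is invented for illustration and has nothing to do with mpilock's MPI-window implementation:

```python
import threading


class WriterPriorityRWLock:
    """Minimal writer-preferring readers-writer lock (illustration only)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False
        self._writers_waiting = 0

    def acquire_read(self):
        with self._cond:
            # New readers wait while a writer holds or requests the lock.
            while self._writer or self._writers_waiting:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            self._writers_waiting += 1
            # A write lock starts only once all reads and writes have finished.
            while self._writer or self._readers:
                self._cond.wait()
            self._writers_waiting -= 1
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

The `_writers_waiting` counter is what gives writes priority: as soon as a write is requested, new readers block, mirroring the rule that a pending write prevents new read locks.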
closed
Is the WindowController in a closed state? If so, further locks cannot be requested.
master
Return the MPI rank of the master process.
rank
Return the MPI rank of this process.
read()
Acquire a read lock. Read locks can be granted while other read locks are held, but will not start as long as a write lock is held or being requested (write operations have priority over read operations).
The preferred idiom for read locks is as follows:
controller = sync()

with controller.read():
    # Perform reading operation
    pass
Returns: A read lock.
single_write()
Perform a collective operation where only one process writes to the resource while the other processes wait for the operation to complete.
Python does not support long jump patterns, so the preferred idiom for collective write locks is the fencing pattern:
controller = sync()

with controller.single_write() as fence:
    # Kick out any processes that don't have to write
    fence.guard()
    # Perform writing operation on just 1 process
    pass

# All kicked out processes resume code together outside of the with block.
Returns: A fenced write lock.
write()
Acquire a write lock. Will wait for all active read locks to be released and prevent any new read locks from being acquired.
The preferred idiom for write locks is as follows:
controller = sync()

with controller.write():
    # Perform writing operation
    pass
Keep in mind that if you run this code on multiple processes at the same time, they will write one by one, but they will all write eventually. If only one of the processes needs to perform the writing operation, see single_write().
Returns: An unfenced write lock.
sync(comm, master)
Create a WindowController that synchronizes read and write operations across all MPI processes in the communicator.
comm (mpi4py.MPI.Comm) – The MPI communicator to synchronize over.
master (int) – Rank of the master of the communicator. The master is picked whenever something needs to be organized or decided by a single process in the communicator.
- Return type: WindowController