NAME
dispatch_async, dispatch_sync -- schedule blocks for execution

SYNOPSIS
#include <dispatch/dispatch.h>

void
dispatch_async(dispatch_queue_t queue, void (^block)(void));

void
dispatch_async_f(dispatch_queue_t queue, void *context, dispatch_function_t function);

void
dispatch_sync(dispatch_queue_t queue, void (^block)(void));

void
dispatch_sync_f(dispatch_queue_t queue, void *context, dispatch_function_t function);
DESCRIPTION
The dispatch_async() and dispatch_sync() functions schedule blocks for execution within the dispatch(3) framework. Blocks are submitted to a queue which dictates the policy for their execution. See dispatch_queue_create(3) for more information about creating dispatch queues.
These functions support efficient temporal synchronization, background concurrency and data-level concurrency. These same functions can also be used for efficient notification of the completion of asynchronous blocks (a.k.a. callbacks).

TEMPORAL SYNCHRONIZATION
Synchronization is often required when multiple threads of execution access shared data concurrently. The simplest form of synchronization is mutual-exclusion (a lock), whereby different subsystems execute concurrently until a shared critical section is entered. In the pthread(3) family of procedures, temporal synchronization is accomplished like so:

	int r = pthread_mutex_lock(&my_lock);
	assert(r == 0);

	// critical section

	r = pthread_mutex_unlock(&my_lock);
	assert(r == 0);

The dispatch_sync() function may be used with a serial queue to accomplish the same style of synchronization. For example:
	dispatch_sync(my_queue, ^{
		// critical section
	});

In addition to providing a more concise expression of synchronization, this approach is less error prone, as the critical section cannot be accidentally left without restoring the queue to a reentrant state.

The dispatch_async() function may be used to implement deferred critical sections when the result of the block is not needed locally. Deferred critical sections have the same synchronization properties as the above code, but are non-blocking and therefore more efficient to perform. For example:
	dispatch_async(my_queue, ^{
		// critical section
	});

BACKGROUND CONCURRENCY
The dispatch_async() function may also be used to execute trivial background operations on a global concurrent queue. For example:
	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
		// background operation
	});

This approach is an efficient replacement for pthread_create(3).

COMPLETION CALLBACKS
Completion callbacks can be accomplished via nested calls to the dispatch_async() function. It is important to remember to retain the destination queue before the first call to dispatch_async(), and to release that queue at the end of the completion callback, to ensure the destination queue is not deallocated while the completion callback is pending. For example:
	void
	async_read(object_t obj,
		void *where, size_t bytes,
		dispatch_queue_t destination_queue,
		void (^reply_block)(ssize_t r, int err))
	{
		// There are better ways of doing async I/O.
		// This is just an example of nested blocks.

		dispatch_retain(destination_queue);

		dispatch_async(obj->queue, ^{
			ssize_t r = read(obj->fd, where, bytes);
			int err = errno;

			dispatch_async(destination_queue, ^{
				reply_block(r, err);
			});
			dispatch_release(destination_queue);
		});
	}

RECURSIVE LOCKS
While recursive locks can be approximated with the dispatch framework, their use is discouraged. As the dispatch framework was designed, we studied recursive locks. We found that the vast majority of recursive locks are deployed retroactively when ill-defined lock hierarchies are discovered. As a consequence, the adoption of recursive locks often mutates obvious bugs into obscure ones. This study also revealed an insight: if reentrancy is unavoidable, then reader/writer locks are preferable to recursive locks. Disciplined use of reader/writer locks enables reentrancy only when reentrancy is safe (the "read" side of the lock).

Nevertheless, if it is absolutely necessary, what follows is an imperfect way of implementing recursive locks using the dispatch framework:

	void
	sloppy_lock(object_t object, void (^block)(void))
	{
		if (object->owner == pthread_self()) {
			return block();
		}
		dispatch_sync(object->queue, ^{
			object->owner = pthread_self();
			block();
			object->owner = NULL;
		});
	}

The above example does not solve the case
where queue A runs on thread X which calls dispatch_sync() against queue B which runs on thread Y which recursively calls dispatch_sync() against queue A, which deadlocks both threads.
IMPLIED REFERENCES
Synchronous functions within the dispatch framework hold an implied reference on the target queue. In other words, the synchronous function borrows the reference of the calling function (this is valid because the calling function is blocked waiting for the result of the synchronous function, and therefore cannot modify the reference count of the target queue until after the synchronous function has returned). For example:

	queue = dispatch_queue_create("com.example.queue", NULL);
	assert(queue);
	dispatch_sync(queue, ^{
		do_something();
		//dispatch_release(queue); // NOT SAFE -- dispatch_sync() is still using 'queue'
	});
	dispatch_release(queue); // SAFELY balanced outside of the block provided to dispatch_sync()

This is in contrast to asynchronous functions, which must retain both the block and the target queue for the duration of the asynchronous operation (as the calling function may immediately release its interest in these objects).

FUNDAMENTALS
Conceptually, dispatch_sync() is a convenient wrapper around dispatch_async() with the addition of a semaphore to wait for completion of the block, and a wrapper around the block to signal its completion. See dispatch_semaphore_create(3) for more information about dispatch semaphores. The actual implementation of the dispatch_sync() function may be optimized and may differ from the above description.
The dispatch_async() function is a wrapper around dispatch_async_f(). The application-defined context parameter is passed to the function when it is invoked on the target queue.
The dispatch_sync() function is likewise a wrapper around dispatch_sync_f().
SEE ALSO
dispatch(3), dispatch_apply(3), dispatch_once(3), dispatch_queue_create(3), dispatch_semaphore_create(3)