Manual Reference Pages - DISPATCH_GET_CURRENT_QUEUE (3)
NAME
dispatch_queue_create, dispatch_queue_get_label, dispatch_get_current_queue, dispatch_get_global_queue, dispatch_get_main_queue, dispatch_main, dispatch_set_target_queue - where blocks are scheduled for execution

SYNOPSIS
#include <dispatch/dispatch.h>

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr);

const char *
dispatch_queue_get_label(dispatch_queue_t queue);

dispatch_queue_t
dispatch_get_current_queue(void);

dispatch_queue_t
dispatch_get_global_queue(long priority, unsigned long flags);

dispatch_queue_t
dispatch_get_main_queue(void);

void
dispatch_main(void);

void
dispatch_set_target_queue(dispatch_object_t object, dispatch_queue_t target);
DESCRIPTION
Queues are the fundamental mechanism for scheduling blocks for execution within the dispatch framework.

All blocks submitted to dispatch queues are dequeued in FIFO order. By default, queues created with dispatch_queue_create() wait for the previously dequeued block to complete before dequeuing the next block. This FIFO completion behavior is sometimes simply described as a "serial queue."
Queues are not bound to any specific thread of execution and blocks submitted
to independent queues may execute concurrently.
Queues, like all dispatch objects, are reference counted and newly created
queues have a reference count of one.
The label argument is used to describe the purpose of the queue and is useful during debugging and performance analysis. By convention, clients should pass a reverse DNS style label. If a label is provided, it is copied. If a label is not provided, then dispatch_queue_get_label() returns an empty C string. For example:
my_queue = dispatch_queue_create("com.example.subsystem.taskXYZ", NULL);
The attr argument is reserved for future use and must be NULL.
Queues may be temporarily suspended and resumed with the functions dispatch_suspend() and dispatch_resume() respectively. Suspension is checked prior to block execution; a block that is already executing is not interrupted.
MAIN QUEUE
The dispatch framework provides a default serial queue for the application to use. This queue is accessed via dispatch_get_main_queue(). Programs must call dispatch_main() at the end of main() in order to process blocks submitted to the main queue. (See the compatibility section for exceptions.)
GLOBAL CONCURRENT QUEUES
Unlike the main queue or queues allocated with dispatch_queue_create(), the global concurrent queues schedule blocks as soon as threads become available (non-FIFO completion order). The global concurrent queues represent three priority bands:

	DISPATCH_QUEUE_PRIORITY_HIGH
	DISPATCH_QUEUE_PRIORITY_DEFAULT
	DISPATCH_QUEUE_PRIORITY_LOW
Blocks submitted to the high priority global queue will be invoked before those
submitted to the default or low priority global queues. Blocks submitted to the
low priority global queue will only be invoked if no blocks are pending on the
default or high priority queues.
RETURN VALUES
The dispatch_queue_create() function returns NULL on failure.

The dispatch_queue_get_label() function always returns a valid C string. An empty C string is returned if the label was NULL at creation time.

The dispatch_get_main_queue() function returns the default main queue.

The dispatch_get_current_queue() function always returns a valid queue. When called from within a block submitted to a dispatch queue, that queue will be returned. If this function is called from the main thread before dispatch_main() is called, then the result of dispatch_get_main_queue() is returned. Otherwise, the result of dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) is returned.

The dispatch_main() function never returns.
The dispatch_set_target_queue() function updates the target queue of the given dispatch object. The target queue of an object is responsible for processing the object. Currently only dispatch queues and dispatch sources are supported by this function. The result of using any other dispatch object type with this function is undefined.
The new target queue is retained by the given object before the previous target queue is released. The new target queue will take effect between block executions, but not in the middle of any existing block executions.
The priority of a dispatch queue is inherited from its target queue. In order to change the priority of a queue created with dispatch_queue_create(), use the dispatch_get_global_queue() function to obtain a target queue of the desired priority. The flags argument to dispatch_get_global_queue() is reserved for future use and must be zero. Passing any value other than zero may result in a NULL return value.
The target queue of a dispatch source specifies where its event handler and cancellation handler blocks will be submitted. See dispatch_source_create(3) for more information about dispatch sources.
The result of passing the main queue or a global concurrent queue as the first argument of dispatch_set_target_queue() is undefined. Directly or indirectly setting the target queue of a dispatch queue to itself is undefined.
CAVEATS
Code cannot make any assumptions about the queue returned by dispatch_get_current_queue(). The returned queue may have arbitrary policies that may surprise code that tries to schedule work with the queue. The list of policies includes, but is not limited to, queue width (i.e. serial vs. concurrent), scheduling priority, security credential or filesystem configuration. Therefore, dispatch_get_current_queue() should only be used for identity tests or debugging.
COMPATIBILITY
Cocoa applications need not call dispatch_main(). Blocks submitted to the main queue will be executed as part of the "common modes" of the application's main NSRunLoop or CFRunLoop. However, blocks submitted to the main queue in applications using dispatch_main() are not guaranteed to execute on the main thread.
The dispatch framework is a pure C level API. As a result, it does not catch exceptions generated by higher level languages such as Objective-C or C++. Applications MUST catch all exceptions before returning from a block submitted to a dispatch queue; otherwise the internal data structures of the dispatch framework will be left in an inconsistent state.
The dispatch framework manages the relationship between dispatch queues and threads of execution. As a result, applications MUST NOT delete or mutate objects that they did not create. The following interfaces MUST NOT be called by blocks submitted to a dispatch queue:

	pthread_cancel()
	pthread_detach()
	pthread_join()
	pthread_kill()
	pthread_exit()

Applications MAY call the following interfaces from a block submitted to a dispatch queue if and only if they restore the thread to its original state before returning:

	pthread_setcancelstate()
	pthread_setcanceltype()
	pthread_setschedparam()
	pthread_sigmask()

Applications MUST NOT rely on the following interfaces returning predictable results between invocations of blocks submitted to a dispatch queue:

	pthread_self()
	pthread_getschedparam()
	pthread_get_stacksize_np()
	pthread_get_stackaddr_np()
	pthread_mach_thread_np()
While the result of pthread_self() may change between invocations of blocks, the value will not change during the execution of any single block. Because the underlying thread may change between block invocations on a single queue, using per-thread data as an out-of-band return value is error prone. In other words, the result of calling pthread_self() is well defined within a single block, but not across multiple blocks. Also, one cannot make any assumptions about when the destructor passed to pthread_key_create() is called. The destructor may be called between the invocation of blocks on the same queue, or during the idle state of a process.
The following example code correctly handles per-thread return values:
__block int r;
__block int e;

dispatch_sync(queue, ^{
	r = kill(1, 0);
	// Copy the per-thread return value to the callee thread
	e = errno;
});
printf("kill(1,0) returned %d and errno %d\n", r, e);
Note that in the above example errno is a per-thread variable and must be copied out explicitly as the block may be invoked on a different thread of execution than the caller. Another example of per-thread data that would need to be copied is the result of pthread_self().
As an optimization, dispatch_sync() invokes the block on the current thread when possible. In this case, per-thread data such as errno may persist from the block back to the caller. Great care should be taken not to accidentally rely on this side-effect.