Foundation offers a Thread class, internally based on pthread, that can be used to create new threads and execute closures.
// Detaches a new thread and uses the specified selector as the thread entry point.
Thread.detachNewThreadSelector(selector: Selector, toTarget: Any, with: Any?)
// Subclass
class MyThread: Thread {
    override func main() { ... }
}
// iOS 10+ closure-based API
let t = Thread {
    print("Started!")
}
t.start() // The thread is not started automatically; call start() to run it.
- Semaphore — allows up to N threads to access a given region of code at a time.
- Mutex — ensures that only one thread is active in a given region of code at a time. You can think of it as a semaphore with a maximum count of 1.
- Spinlock — causes a thread trying to acquire a lock to wait in a loop while checking if the lock is available. It is efficient if waiting is rare, but wasteful if waiting is common.
- Read-write lock — provides concurrent access for read-only operations, but exclusive access for write operations. Efficient when reading is common and writing is rare.
- Recursive lock — a mutex that can be acquired by the same thread many times.
NSLock is the basic mutex that Foundation offers. NSLock and the other Foundation locks are unfair, meaning that when a series of threads is waiting to acquire a lock, they will not acquire it in the same order in which they originally tried to lock it.
A lower-level C pthread_mutex_t is also available in Swift. It can be configured both as a regular mutex and as a recursive lock.
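A minimal sketch of configuring pthread_mutex_t as a recursive lock on Darwin, using the standard pthread attribute API:
import Darwin

var attr = pthread_mutexattr_t()
pthread_mutexattr_init(&attr)
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE) // Use PTHREAD_MUTEX_NORMAL for a plain mutex.

var mutex = pthread_mutex_t()
pthread_mutex_init(&mutex, &attr)

pthread_mutex_lock(&mutex)
pthread_mutex_lock(&mutex) // A second lock from the same thread does not deadlock.
pthread_mutex_unlock(&mutex)
pthread_mutex_unlock(&mutex)

pthread_mutex_destroy(&mutex)
pthread_mutexattr_destroy(&attr)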
let lock = NSLock()
lock.lock()
// Critical section.
lock.unlock()
A lock that may be acquired multiple times by the same thread without causing a deadlock.
let recursiveLock = NSRecursiveLock()
recursiveLock.lock()
recursiveLock.unlock()
A lock that can be associated with specific, user-defined conditions.
let NO_DATA = 1
let GOT_DATA = 2
let conditionLock = NSConditionLock(condition: NO_DATA)
conditionLock.lock(whenCondition: NO_DATA)
...
conditionLock.unlock(withCondition: GOT_DATA)
A condition variable whose semantics follow those used for POSIX-style conditions.
let condition = NSCondition()
var available = false
...
// Thread 1: produce a value and signal.
condition.lock()
// Perform work
available = true
condition.signal() // Signals the condition, waking up one thread waiting on it.
condition.unlock()
...
// Thread 2: wait until a value is available.
condition.lock()
while !available {
    condition.wait() // Blocks the current thread until the condition is signaled.
}
// Perform work
condition.unlock()
In Swift you can't create a @synchronized block out of the box as you would in Objective-C, since there is no equivalent keyword available. On Darwin, with a bit of code, you could roll out something similar to the original implementation of @synchronized using objc_sync_enter(OBJ) and objc_sync_exit(OBJ).
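A minimal sketch of such a helper (the synchronized function name is our own, not a standard API):
import Foundation

func synchronized<T>(_ object: AnyObject, _ body: () throws -> T) rethrows -> T {
    objc_sync_enter(object) // Acquires the recursive lock associated with the object.
    defer { objc_sync_exit(object) } // Released even if `body` throws.
    return try body()
}

// Usage:
// synchronized(self) { sharedCounter += 1 }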
Based on the benchmark results, DispatchQueue is likely your best choice for creating a critical section in your code. If for some reason the block-based locking nature of DispatchQueue is not what you need, I'd suggest going with NSLock.
GCD provides and manages FIFO queues to which your application can submit tasks in the form of block objects. Work submitted to dispatch queues is executed on a pool of threads fully managed by the system. No guarantee is made as to the thread on which a task executes.
The base class for many dispatch types, including DispatchQueue, DispatchGroup, and DispatchSource.
let object = DispatchObject() // For illustration only: the initializer is unavailable; you always work with subclasses.
object.activate() // Activates the dispatch object.
object.suspend() // Suspends the invocation of block objects on a dispatch object.
object.resume() // Resumes the invocation of block objects on a dispatch object.
Suspend and resume calls are asynchronous and take effect only between the execution of blocks. Suspending a queue does not cause an already executing block to stop.
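For example (the queue label below is made up), a queue can be suspended before work is submitted and resumed later:
let deferredQueue = DispatchQueue(label: "com.app.deferred")
deferredQueue.suspend()
deferredQueue.async { print("Runs only after resume() is called") }
deferredQueue.resume() // The pending block is now eligible to execute.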
A dispatch queue can be either serial, so that work items are executed one at a time, or concurrent, so that work items are dequeued in order but run all at once and can finish in any order. Both serial and concurrent queues process work items in first-in, first-out (FIFO) order.
let serialQueue = DispatchQueue(label: "com.app.serial")
let concurrentQueue =
    DispatchQueue(label: "com.app.concurrent",
                  qos: .background, // QoS of the queue.
                  attributes: [.concurrent, .initiallyInactive], // Serial and active are the defaults.
                  autoreleaseFrequency: .workItem, // Drain the pool after each item executed (.inherit and .never also available).
                  target: serialQueue) // A dispatch queue's priority is inherited from its target queue.
let mainQueue = DispatchQueue.main
let globalDefault = DispatchQueue.global()
let globalQueue = DispatchQueue.global(qos: .userInteractive)
- .userInteractive: Used for work directly involved in providing an interactive UI.
- .userInitiated: Used for performing work that has been explicitly requested by the user.
- .default: This QoS is not intended to be used by developers to classify work.
- .utility: Used for performing work which the user is unlikely to be immediately waiting for the results of.
- .background: Used for work that is not user-initiated or visible.
- .unspecified: Represents the absence of QoS information.
On iPhones, discretionary and background operations, including networking, are paused when Low Power Mode is enabled.
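As a sketch, you can check this state yourself before scheduling discretionary work:
import Foundation

if ProcessInfo.processInfo.isLowPowerModeEnabled {
    // Defer non-essential work until Low Power Mode is disabled.
}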
Each work item can be executed either synchronously or asynchronously. When a work item is executed synchronously with the sync method, the program waits until execution finishes before the method call returns. When a work item is executed asynchronously with the async method, the method call returns immediately.
globalQueue.sync { }
globalQueue.async(qos: .background) { } // Can specify: Group, QoS, Flags.
globalQueue.asyncAfter(deadline: .now() + .seconds(5)) { } // Async after 5 seconds.
DispatchQueue.concurrentPerform(iterations: 5) { _ in } // Executes the block 5 times in parallel; the call returns when all iterations finish.
concurrentQueue.activate() // A queue created as .initiallyInactive must be activated before it executes work.
When the barrier block reaches the front of a private concurrent queue, it is not executed immediately. Instead, the queue waits until its currently executing blocks finish executing. At that point, the barrier block executes by itself. Any blocks submitted after the barrier block are not executed until the barrier block completes.
If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the async function.
concurrentQueue.async(flags: .barrier) { } // Use a private concurrent queue; barriers on global queues behave like plain async.
In Swift 3 there is no equivalent of dispatch_once, a function most often used to build thread-safe singletons. Swift guarantees that global variables are initialized atomically, and since constants can't change their value after initialization, these two properties make global constants a great candidate for easily implementing singletons:
public static let sharedInstance: Singleton = Singleton()
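A minimal self-contained sketch of this pattern:
final class Singleton {
    // Initialized lazily, atomically, and exactly once by the Swift runtime.
    public static let sharedInstance = Singleton()
    private init() {} // Prevents clients from creating additional instances.
}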
Grouping blocks allows for aggregate synchronization. Your application can submit multiple blocks and track when they all complete, even though they might run on different queues. This behavior can be helpful when progress can’t be made until all of the specified tasks are complete.
let group = DispatchGroup()
globalQueue.async(group: group) { } // Add work item to group.
group.notify(queue: globalQueue) { } // Schedules a block to be submitted to a queue when a group of previously submitted block objects have completed.
group.wait() // Waits synchronously for the previously submitted work to complete.
group.wait(timeout: .now() + .seconds(5)) // Waits synchronously for the previously submitted work to complete, and returns if the work is not completed before the specified timeout period has elapsed.
group.enter() // Explicitly indicates that a block has entered the group.
group.leave() // Explicitly indicates that a block in the group has completed.
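The enter()/leave() pair lets a group track work that isn't submitted as a single block. A sketch, where fetchData is a hypothetical callback-based API:
// Hypothetical async API, defined here only to make the example self-contained.
func fetchData(completion: @escaping (Data?) -> Void) {
    DispatchQueue.global().async { completion(nil) }
}

group.enter()
fetchData { result in
    // Handle the result...
    group.leave() // Balance every enter() with exactly one leave().
}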
Encapsulates work that can be performed. A work item can be dispatched onto a DispatchQueue and within a DispatchGroup. A DispatchWorkItem can also be set as a DispatchSource event, registration, or cancel handler.
let workItem = DispatchWorkItem { } // Create work item with closure.
globalQueue.async(execute: workItem) // Execute work item on globalQueue.
workItem.perform() // Executes the work item's block synchronously on the current thread.
workItem.notify(queue: DispatchQueue.main) { } // Schedules a block to run on the given queue when the item completes.
workItem.wait() // Waits synchronously until the work item finishes; may elevate the priority of the queue it runs on.
workItem.cancel() // Cancels the work item if it has not started executing yet.
A dispatch semaphore is an efficient implementation of a traditional counting semaphore. Dispatch semaphores call down to the kernel only when the calling thread needs to be blocked; if no blocking is needed, no kernel call is made.
let semaphore = DispatchSemaphore(value: 5)
semaphore.wait() // Waits for (decrements) the semaphore.
semaphore.wait(timeout: .now() + .seconds(5)) // Returns .success if the wait succeeded before the timeout, .timedOut otherwise.
semaphore.signal() // Signals (increments) the semaphore, potentially waking a waiting thread.
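A common usage sketch: bounding the number of tasks that run at once (process(_:) is a hypothetical work function):
func process(_ item: Int) { /* hypothetical work */ }

for item in 0..<20 {
    globalQueue.async {
        semaphore.wait()             // Blocks once 5 tasks are in flight.
        defer { semaphore.signal() } // Releases the slot when the task finishes.
        process(item)
    }
}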
dispatchPrecondition(condition: .notOnQueue(mainQueue)) // Asserts that code is not running on the main queue.
dispatchPrecondition(condition: .onQueue(globalQueue)) // Asserts that code is running on the expected queue.
Provides an interface for monitoring low-level system objects such as Mach ports, Unix descriptors, Unix signals, and VFS nodes for activity and submitting event handlers to dispatch queues for asynchronous processing when such activity occurs.
- Timer dispatch sources: Used to generate events at a specific point in time or periodic events (DispatchSourceTimer).
- Signal dispatch sources: Used to handle UNIX signals (DispatchSourceSignal).
- Memory dispatch sources: Used to register for notifications related to the memory usage status (DispatchSourceMemoryPressure).
- Descriptor dispatch sources: Used to register for different events related to files and sockets (DispatchSourceFileSystemObject, DispatchSourceRead, DispatchSourceWrite).
- Process dispatch sources: Used to monitor external processes for events related to their execution state (DispatchSourceProcess).
- Mach-related dispatch sources: Used to handle events related to the IPC facilities of the Mach kernel (DispatchSourceMachReceive, DispatchSourceMachSend).
let timer = DispatchSource.makeTimerSource()
timer.setEventHandler { } // Sets the event handler work item for the dispatch source.
timer.schedule(deadline: .now() + .seconds(5)) // Fires once after 5 seconds; pass repeating: for a periodic timer.
timer.activate() // Activates the dispatch source.
timer.cancel() // Asynchronously cancels the dispatch source, preventing any further invocation of its event handler block.
private let queue = DispatchQueue(label: "ccom.app.serial")
private var underlyingFoo = 0
var foo: Int {
get {
return queue.sync { underlyingFoo }
}
set {
queue.sync { [weak self] in // Can be .async(flags: .barrier) for async write
self?.underlyingFoo = newValue
}
}
}
An API built on top of GCD that uses concurrent queues and models tasks as Operations.
- Operation - An abstract class that represents the code and data associated with a single task.
- BlockOperation - An operation that manages the concurrent execution of one or more blocks.
- OperationQueue - A queue that regulates the execution of operations.
It is safe to use a single OperationQueue object from multiple threads without creating additional locks to synchronize access to that object.
let queue = OperationQueue()
queue.qualityOfService = .userInitiated // The default service level to apply to operations executed using the queue.
queue.maxConcurrentOperationCount = 2 // The maximum number of queued operations that can execute at the same time.
queue.addOperation { } // Wraps the specified block in an operation and adds it to the receiver.
let operation = BlockOperation { }
operation.queuePriority = .high // Priority of the operation within the queue.
queue.addOperation(operation) // Adds the specified operation to the queue.
queue.isSuspended = true // Stops the queue from starting new operations; already executing ones continue.
- isReady - the operation is ready to execute.
- isExecuting - the operation is actively working on its assigned task.
- isFinished - the operation finished its task successfully or was cancelled and is exiting.
- isCancelled - lets clients know that cancellation of an operation was requested.
The Operation class provides the basic logic to track the execution state of your operation but otherwise must be subclassed to do any real work. When you subclass Operation, you must make sure that any overridden methods remain safe to call from multiple threads.
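A minimal sketch of a synchronous Operation subclass (the task name is hypothetical):
final class DecodeOperation: Operation {
    override func main() {
        // Check for cancellation before doing any real work.
        guard !isCancelled else { return }
        // ... perform the task ...
    }
}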
Dependencies are a convenient way to execute operations in a specific order. You can add and remove dependencies for an operation using the addDependency(_:) and removeDependency(_:) methods.
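A sketch: run a parsing step only after a download step completes (both blocks are placeholders):
let download = BlockOperation { /* fetch data */ }
let parse = BlockOperation { /* parse the downloaded data */ }
parse.addDependency(download) // parse becomes ready only when download finishes.
queue.addOperations([download, parse], waitUntilFinished: false)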
Operations within a queue are organized according to their readiness, priority level, and interoperation dependencies, and are executed accordingly. If all of the queued operations have the same queuePriority and are ready to execute when they are put in the queue (that is, their isReady property returns true), they're executed in the order in which they were submitted to the queue. Otherwise, the operation queue always executes the one with the highest priority relative to the other ready operations.
Canceling an operation does not immediately force it to stop what it is doing. Although respecting the value of the isCancelled property is expected of all operations, your code must explicitly check the value of this property and abort as needed. The default implementation of Operation includes checks for cancellation. For example, if you cancel an operation before its start() method is called, the start() method exits without starting the task.
operation.cancel() // Advises the operation object that it should stop executing its task.
queue.cancelAllOperations() // Cancels all queued and executing operations.
- Race conditions / Readers-Writers problem: Arise when multiple threads operate on the same data at the same time.
- Resource contention: Multiple threads trying to access the same resources increase the amount of time needed to obtain the required resources safely.
- Deadlocks: Multiple threads waiting forever for each other to release the resources/locks they need.
- Starvation: A thread may never be able to acquire the resources it needs.
- Priority inversion: A thread with lower priority could keep acquiring resources needed by a thread with higher priority.
- Non-determinism and fairness: We can't make assumptions about when, or in what order, a thread will be able to acquire a shared resource. However, concurrency primitives used to guard a critical section can also be built to be fair, or to support fairness, guaranteeing access to the critical section to all waiting threads while respecting the order of their requests.
Sources:
- http://www.vadimbulavin.com/atomic-properties/
- http://www.vadimbulavin.com/benchmarking-locking-apis/
- https://www.uraimo.com/2017/05/07/all-about-concurrency-in-swift-1-the-present/
- https://developer.apple.com/library/archive/documentation/Performance/Conceptual/EnergyGuide-iOS/PrioritizeWorkWithQoS.html
- https://developer.apple.com/library/archive/documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html
- https://developer.apple.com/documentation/dispatch