import Foundation

@propertyWrapper
struct Synchronized<T>: @unchecked Sendable {
    private var _wrappedValue: T
    private let lock = NSLock()

    var wrappedValue: T {
        get { lock.withLock { _wrappedValue } }
        set { lock.withLock { _wrappedValue = newValue } }
    }

    init(wrappedValue: T) {
        _wrappedValue = wrappedValue
    }
}

final class ComplexData: @unchecked Sendable {
    @Synchronized var firstName: String
    @Synchronized var lastName: String

    init(firstName: String, lastName: String) {
        self.firstName = firstName
        self.lastName = lastName
    }
}

actor Foo {
    func process1(lotsOfData: [ComplexData]) async {
        await withTaskGroup(of: Void.self) { group in
            for data in lotsOfData {
                group.addTask {
                    // Do complex things with complex data, and then give data to another process
                    await self.process2(data: data)
                    // I will never ever do anything more with data
                }
            }
        }
    }

    nonisolated func process2(data: ComplexData) async {
        // Do complex things with complex data, and then give data to another process
        await process3(data: data)
        // I will never ever do anything more with data
    }

    nonisolated func process3(data: ComplexData) async {
        // Do complex things with complex data, and I am done
    }
}
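Because every stored property of ComplexData is guarded by its own lock, an instance can be handed to concurrent tasks and read or written from any of them without data races. A minimal usage sketch of the types above (the demo function, names, and values are made up for illustration):

// Hypothetical usage of the gist's ComplexData; names and values are made up.
func demo() async {
    let data = ComplexData(firstName: "Ada", lastName: "Lovelace")

    await withTaskGroup(of: Void.self) { group in
        group.addTask {
            data.firstName = "Grace"   // the @Synchronized setter takes the lock before writing
        }
        group.addTask {
            print(data.lastName)       // the getter takes the same lock before reading
        }
    }
}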
Thank you again so much for your time and advice.
The algorithm we are using does not make any blocking calls. It loops through a vast number of elements (millions; those millions constitute one complexData, which also carries some metadata) and applies some maths to each element and to groups of them. There are three passes, each applying a different set of formulas to the original complexData plus the outcome of the previous passes.
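Roughly, the shape of the computation per complexData is something like the sketch below; the element type, the stand-in formulas, and the function names are all hypothetical, and only the data flow between the passes is meant to match the description above.

// Hypothetical three-pass sketch; the formulas are trivial stand-ins.
struct PassOutput { var values: [Double] }

func passOne(_ elements: [Double]) -> PassOutput {
    PassOutput(values: elements.map { $0 * $0 })                      // formula set 1 on the raw elements
}

func passTwo(_ elements: [Double], _ first: PassOutput) -> PassOutput {
    PassOutput(values: zip(elements, first.values).map { $0 + $1 })   // formula set 2 also uses the pass-1 outcome
}

func passThree(_ elements: [Double], _ first: PassOutput, _ second: PassOutput) -> Double {
    zip(first.values, second.values).map { $0 * $1 }.reduce(0, +)     // formula set 3 combines everything
}

func processOneComplexData(_ elements: [Double]) -> Double {
    let first = passOne(elements)
    let second = passTwo(elements, first)
    return passThree(elements, first, second)
}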
We can have a list of thousands of complexData to process per day.
But all in all, I am not really after cancellation. I admit it may be an interesting feature to add, but that's not my goal right now. I had to struggle to convince people to move from the old C version to Swift, and we got loads of benefit from that (including removing dormant bugs). I am now just trying to move it to Swift 6 to see if we can get further benefit, but I am not directly looking for improvements in performance or features, at least not right now. I was surprised that I didn't get many warnings or errors while switching to Swift 6.
But OK, I do get the point that structured concurrency will allow me to introduce some cancellation in the future.
I have been reading a lot about isolation contexts, and I now think I understand better what they are. Admittedly, complexData_N does not need to move from one isolation context to another. We just need to make sure that complexData_N and complexData_N+1 are processed in parallel (said differently, as soon as a core becomes free, we need it to start processing a complexData). I guess that's exactly what
func process1(lotsOfData: [ComplexData]) async {
    await withTaskGroup(of: Void.self) { group in
        for data in lotsOfData {
            group.addTask {
                await self.process2(data: data)
                await self.process3(data: data)
            }
        }
    }
}
does.
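One refinement of that pattern, sketched below purely for illustration: if holding thousands of pending child tasks at once ever becomes a memory concern, the group can be throttled to roughly one in-flight task per core, submitting the next complexData only as each one finishes, so a free core always has work immediately.

// Hypothetical throttled variant of process1, assumed to live in the same actor Foo.
// (Requires `import Foundation` for ProcessInfo.)
func process1Throttled(lotsOfData: [ComplexData]) async {
    await withTaskGroup(of: Void.self) { group in
        let width = ProcessInfo.processInfo.activeProcessorCount
        var iterator = lotsOfData.makeIterator()

        // Seed the group with roughly one child task per core.
        for _ in 0..<width {
            guard let data = iterator.next() else { break }
            group.addTask {
                await self.process2(data: data)
                await self.process3(data: data)
            }
        }

        // Each time a child task finishes, submit the next item.
        while await group.next() != nil {
            guard let data = iterator.next() else { continue }
            group.addTask {
                await self.process2(data: data)
                await self.process3(data: data)
            }
        }
    }
}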
I am still wondering how a non-Sendable value could be moved from one isolation context to another if we promise that the sending context will forever forget about that value. Maybe that is still old-way thinking... but I guess there might be use cases.
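For what it is worth, Swift 6 can express exactly that promise with the sending parameter modifier (SE-0430): a non-Sendable value may cross into another isolation context as long as the compiler can prove the sending side never touches it again. A minimal sketch, where the Report and Archive types are made-up names:

// Minimal sketch of a `sending` parameter (SE-0430); the types here are hypothetical.
final class Report {               // deliberately NOT Sendable
    var lines: [String] = []
}

actor Archive {
    private var stored: [Report] = []

    // `sending` means the caller gives the value up; the compiler enforces
    // that the caller never uses it again after this call.
    func store(_ report: sending Report) {
        stored.append(report)
    }
}

func produceAndHandOff(to archive: Archive) async {
    let report = Report()          // created in the caller's isolation region
    report.lines.append("done")
    await archive.store(report)    // OK: the whole region holding `report` is transferred
    // Touching `report` after this point would be a compile-time error under Swift 6.
}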
Once again, thank you very much.
@Mini-Stef
I don’t need to know precisely what you are doing, but I could provide better counsel if I understood what type of work it is. E.g., if your code consists of looping while doing a calculation, then an occasional await Task.yield() and try Task.checkCancellation() inside that loop (or every nth iteration) might be sufficient, along the lines of the sketch below.
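A minimal sketch of that idea, where the loop body and the every-1,000-iterations cadence are arbitrary stand-ins:

// Hypothetical compute loop with periodic cancellation checks and yields.
func crunch(_ values: [Double]) async throws -> Double {
    var total = 0.0
    for (index, value) in values.enumerated() {
        total += value * value             // stand-in for the real formula

        if index % 1_000 == 0 {
            try Task.checkCancellation()   // bail out promptly if the task was cancelled
            await Task.yield()             // give other tasks time on the cooperative pool
        }
    }
    return total
}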
But then again, if that impacts performance too much and/or you are calling some blocking API over which you have no control, then you might move it out of Swift concurrency and bridge it back with a continuation. I might advise checking out Visualize and optimize Swift concurrency. As they discuss in that video, the cooperative thread pool is limited to the number of processors on the device, and if you block its threads you can deadlock and/or cause other problems. Moving these blocking calls back to GCD, and bridging the results back with a continuation, avoids that potential problem; something like the sketch below.
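A minimal sketch of that bridging idea, where legacyBlockingCompute is a made-up stand-in for whatever blocking routine you might be wrapping:

import Foundation

// Hypothetical blocking routine; stands in for a long, synchronous calculation.
func legacyBlockingCompute(_ input: [Double]) -> Double {
    input.reduce(0, +)
}

// Run the blocking work on a GCD queue and expose it as async via a continuation,
// so the cooperative thread pool is never blocked.
func compute(_ input: [Double]) async -> Double {
    await withCheckedContinuation { continuation in
        DispatchQueue.global(qos: .userInitiated).async {
            continuation.resume(returning: legacyBlockingCompute(input))
        }
    }
}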