Daniel Spiewak (djspiewak)
Where does Cats Effect cancelation become practically unavoidable? I feel like I never really need to think about it.

Timeouts are the easiest practical example to think about. Consider the whole chain, from request handling in the server layer through the fiber which processes that request, making new requests to upstream services, waiting on those results, etc.: there are a lot of timeouts involved in that. Any time you have a timeout, you need to recursively cancel everything which rolls up under it, and you need that cancelation to have a few properties:

  • It must be irreversible (if you have a timeout, you can't un-timeout; this also implies a degree of determinism)
  • It must be respected promptly
  • It must backpressure control flow (ensuring that the continuation is not yielded until all finalizers have completed, implying constituent resources are released)
  • It must not create invalid states (use after free) or leaks (related to backpressure)
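To make the timeout case concrete, here is a minimal sketch assuming Cats Effect 3's `IO` (the names `TimeoutDemo` and `upstream` are hypothetical, not from the original text). It illustrates the properties above: cancelation is irreversible and prompt, and the continuation is not resumed until the finalizer has completed.

```scala
import cats.effect.{IO, IOApp}
import scala.concurrent.duration._

object TimeoutDemo extends IOApp.Simple {

  // A hypothetical slow upstream call that holds a resource while it runs;
  // `guarantee` attaches the finalizer that releases it on any outcome.
  val upstream: IO[String] =
    IO.sleep(10.seconds)
      .as("response")
      .guarantee(IO.println("finalizer ran"))

  val run: IO[Unit] =
    upstream
      .timeout(1.second) // irreversibly cancels the fiber running `upstream`
      .void
      .handleErrorWith(e => IO.println(s"timed out: ${e.getMessage}"))
  // By the time `handleErrorWith` runs, the finalizer has already completed:
  // `timeout` backpressures on finalization before yielding the continuation,
  // which is what rules out use-after-free and resource leaks.
}
```

Note that `timeout` surfaces a `TimeoutException` only after all finalizers under the canceled fiber have run, which is exactly the backpressure property described above.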

Some of these are actually in mutual conflict

org.scalajs.linker.runtime.UndefinedBehaviorError: java.lang.NullPointerException
at $throwNullPointerException(/Users/daniel/Development/Scala/cats-effect/series-3.6.x/tests/js/target/scala-2.13/cats-effect-tests-test-fastopt/main.js:67:9)
at $n(/Users/daniel/Development/Scala/cats-effect/series-3.6.x/tests/js/target/scala-2.13/cats-effect-tests-test-fastopt/main.js:71:5)
at cats.effect.std.MutexSpec.<init>(/Users/daniel/Development/Scala/cats-effect/series-3.6.x/tests/js/target/scala-2.13/cats-effect-tests-test-fastopt/file:/Users/daniel/Development/Scala/cats-effect/series-3.6.x/tests/shared/src/test/scala/cats/effect/std/MutexSpec.scala:30:37)
at {anonymous}()(/Users/daniel/Development/Scala/cats-effect/series-3.6.x/tests/js/target/scala-2.13/cats-effect-tests-test-fastopt/file:/Users/daniel/Development/Scala/cats-effect/series-3.6.x/tests/shared/src/test/scala/cats/effect/std/MutexSpec.scala:28:13)
at scala.scalajs.reflect.InvokableConstructor.newInstance(/Users/daniel/Development/Scala/cats-ef
java.lang.NullPointerException
java.lang.NullPointerException
java.lang.NullPointerException
at java.lang.Throwable.<init>(Throwables.scala:11)
at java.lang.Exception.<init>(Throwables.scala:383)
java.lang.NullPointerException
at java.lang.Throwable.<init>(Throwables.scala:11)
java.lang.NullPointerException
at java.lang.Throwable.<init>(Throwables.scala:11)
at java.lang.Throwable.<init>(Throwables.scala:11)
diff --git a/core/src/main/scala-3 b/core/src/main/scala-3
deleted file mode 120000
index 609602e..0000000
--- a/core/src/main/scala-3
+++ /dev/null
@@ -1 +0,0 @@
-scala-2.13
\ No newline at end of file
diff --git a/core/src/main/scala/cats/mtl/Handle.scala b/core/src/main/scala/cats/mtl/Handle.scala
index e8bb1f7..e304b58 100644
package me.katze
import cats.effect.*
import cats.effect.std.Dispatcher
import org.lwjgl.glfw.*
import org.lwjgl.glfw.GLFW.*
import org.lwjgl.opengl.GL11.*
import org.lwjgl.opengl.{GL, GLUtil}
import org.lwjgl.system.MemoryStack.stackPush
import org.lwjgl.system.MemoryUtil.NULL
* thread #13
* frame #0: 0x0000000105004c58 cats-effect-tests-test`Synchronizer_acquire at Synchronizer.c:300:9
frame #1: 0x00000001050033b4 cats-effect-tests-test`Heap_Collect(heap=0x00000001085a51a0, stack=0x00000001085a5238) at Heap.c:170:10
frame #2: 0x0000000105001148 cats-effect-tests-test`Allocator_allocSlow(allocator=0x000000014c604e70, heap=0x00000001085a51a0, size=48) at Allocator.c:225:9
frame #3: 0x000000010500129c cats-effect-tests-test`Allocator_Alloc(heap=0x00000001085a51a0, size=48) at Allocator.c:251:16
frame #4: 0x0000000105004548 cats-effect-tests-test`scalanative_GC_alloc_small(info=0x00000001085a17c0, size=48) at ImmixGC.c:56:31
frame #5: 0x0000000104fa0d48 cats-effect-tests-test`M33scala.collection.immutable.Range$D5applyiiL42scala.collection.immutable.Range$ExclusiveEO(this=<unavailable>, start=0, end=0) at Range.scala:568:54
frame #6: 0x000000010474caec cats-effect-tests-test`M22scala.runtime.RichInt$D15until$extensioniiL32scala.collection.immutable.RangeEO(th
* thread #1: tid = 0x26b505f, 0x000000018a83e358 libsystem_kernel.dylib`__recvfrom + 8, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
thread #2: tid = 0x26b5063, 0x000000018a83d5cc libsystem_kernel.dylib`__psynch_cvwait + 8
thread #3: tid = 0x26b5064, 0x000000018a83d5cc libsystem_kernel.dylib`__psynch_cvwait + 8
thread #4: tid = 0x26b50b2, 0x000000018a83d5cc libsystem_kernel.dylib`__psynch_cvwait + 8
thread #5: tid = 0x26b50b3, 0x000000018a83d5cc libsystem_kernel.dylib`__psynch_cvwait + 8
thread #6: tid = 0x26b50b4, 0x000000018a83d5cc libsystem_kernel.dylib`__psynch_cvwait + 8
thread #7: tid = 0x26b50b5, 0x000000018a83d5cc libsystem_kernel.dylib`__psynch_cvwait + 8
thread #8: tid = 0x26b50b6, 0x000000018a83d5cc libsystem_kernel.dylib`__psynch_cvwait + 8
thread #9: tid = 0x26b50b7, 0x000000018a845bdc libsystem_kernel.dylib`kevent64 + 8
thread #10: tid = 0x26b50b8, 0x000000018a83d5cc libsystem_kernel.dylib`__psynch_cvwait + 8
"core PC state machine" should {
import cats.effect.kernel.{GenConcurrent, Outcome}
import cats.effect.kernel.implicits._
import cats.syntax.all._
type F[A] = PureConc[Int, A]
val F = GenConcurrent[F]
"run finalizers when canceling never" in {
val t = for {
/*
* There are two fundamental modes here: sequential and parallel. There is very little overlap
* in semantics between the two apart from the submission side. The whole thing is split up into
* a submission queue with impure enqueue and cancel functions which is drained by the `Worker` and an
* internal execution protocol which also involves a queue. The `Worker` encapsulates all of the
* race conditions and negotiations with impure code, while the `Executor` manages running the
* tasks with appropriate semantics. In parallel mode, we shard the `Worker`s according to the
* number of CPUs and select a random queue (in impure code) as a target. This reduces contention
* at the cost of ordering, which is not guaranteed in parallel mode. With sequential mode, there
* is only a single worker.