Deadlock with toEffect
cats-effect 3.5.4, fs2 3.10.2, interop-cats 23.1.0.2, and Scala 2.13.
This will hang:
package com.goodcover

import cats.effect.IO
import cats.effect.unsafe.IORuntime
import com.typesafe.scalalogging.StrictLogging
import org.scalatest.flatspec.AnyFlatSpec
import zio.ZIO
import zio.interop._
import zio.interop.catz.implicits._

final class IOSpec extends AnyFlatSpec with StrictLogging {

  behavior of "ZIO to IO"

  it should "not deadlock" in {
    var count = 0
    while (!Thread.interrupted()) {
      count += 1
      logger.debug("Running test " + count)
      // Convert a trivial ZIO to a cats-effect IO and run it synchronously.
      // Eventually one of these iterations never returns.
      ZIO.succeed(()).toEffect[IO].unsafeRunSync()(IORuntime.global)
    }
  }
}
What's weird is that there are no live cats-effect fibers when it deadlocks.
This problem looks more fundamental: something seems wrong with fiber observers, independent of the interop layer.
This also hangs:
package com.goodcover

import com.typesafe.scalalogging.StrictLogging
import org.scalatest.flatspec.AnyFlatSpec
import zio.interop._
import zio.interop.catz.implicits._
import zio.{Runtime, Unsafe, ZIO}

import java.util.concurrent.CountDownLatch

final class ZSpec extends AnyFlatSpec with StrictLogging {

  // Runtime used to fork fibers (assumed Runtime.default; the original snippet left rts undefined).
  private val rts = Runtime.default

  behavior of "ZIO to IO"

  it should "not deadlock" in {
    var count = 0
    while (!Thread.interrupted() && count < 100_000) {
      count += 1
      logger.debug("Running test " + count)
      val done = new CountDownLatch(1)
      Unsafe.unsafe { implicit unsafe =>
        val fiber = rts.unsafe.fork(ZIO.succeed(()))
        // The fiber may complete before (or while) the observer is added.
        fiber.unsafe.addObserver { exit =>
          logger.debug(s"Done: $exit")
          done.countDown()
        }
      }
      done.await()
    }
  }
}
Ahh, I think the fiber is completing before the call to addObserver happens, and that case isn't handled.
Nope, that's not it; the already-completed case is handled. However, observers is not volatile, so there is no happens-before relationship between the fiber completing and the observer being added. I bet that is it.
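To see why that matters, here is a deliberately simplified sketch of the suspected pattern (hypothetical code, not ZIO's actual fiber internals): with a plain var holding the observer list, the completing thread and the registering thread have no visibility guarantee for each other's writes, so an observer can be registered and still never be invoked.

// Hypothetical, deliberately simplified model of the race (not ZIO's real code).
final class MiniFiber[A] {
  // Plain (non-volatile) fields: no happens-before edge between threads.
  private var observers: List[A => Unit] = Nil
  private var result: Option[A]          = None

  // Runs on the fiber's thread when it finishes.
  def complete(a: A): Unit = {
    result = Some(a)
    observers.foreach(_(a)) // may not see an observer added concurrently
    observers = Nil
  }

  // Runs on the observing thread.
  def addObserver(cb: A => Unit): Unit =
    result match {
      case Some(a) => cb(a) // the "already completed" case is handled...
      case None    => observers = cb :: observers // ...but this write may never become visible to complete
    }
}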
Caused by https://github.com/zio/zio/issues/9040
Actually, reopening this to make sure the fix is picked up in a release and/or worked around in the meantime.
I think a workaround is possible using onExit rather than a fiber observer.
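A minimal sketch of that workaround, assuming the same rts, logger, and latch setup as the test above: onExit is sequenced inside the forked fiber itself, so it does not touch the observer list at all.

val done = new CountDownLatch(1)
Unsafe.unsafe { implicit unsafe =>
  rts.unsafe.fork(
    // onExit runs as part of the fiber, so no external observer is needed.
    ZIO.succeed(()).onExit { exit =>
      ZIO.succeed {
        logger.debug(s"Done: $exit")
        done.countDown()
      }
    }
  )
}
done.await()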
Hey @steinybot, is this still happening now that the issue in core has been fixed?
We tried the snapshot build and it fixed the problem, so I assume it is fixed. I've just updated to 2.1.7, so we'll see if it stays gone.