AutoCloseable: Not closing on termination
```scala
//| mill-version: 1.0.3
//| mill-jvm-version: 21
//| mvnDeps:
//| - org.http4s::http4s-ember-server:0.23.30
//| - org.http4s::http4s-dsl:0.23.30

import mill._, scalalib._

import cats.effect._, org.http4s._, org.http4s.dsl.io._
import cats.syntax.all._
import com.comcast.ip4s._
import org.http4s.ember.server._
import org.http4s.implicits._
import org.http4s.server.Router
import scala.concurrent.duration._
import cats.effect.unsafe.IORuntime

implicit val runtime: IORuntime = cats.effect.unsafe.IORuntime.global
implicit val ec: scala.concurrent.ExecutionContext = scala.concurrent.ExecutionContext.global

object autoclose extends Module:
  def serve = Task.Worker:
    new AutoCloseW{}

class AutoCloseW extends AutoCloseable:
  val helloWorldService = HttpRoutes.of[IO] {
    case GET -> Root / "hello" / name =>
      Ok(s"Hello, $name.")
  }

  val httpApp = Router("/" -> helloWorldService)

  // Allocate the server once and keep its finalizer. (Re-running the
  // `allocated` IO inside close() would try to build a *second* server
  // rather than shut down this one.)
  val (server, shutdown) = EmberServerBuilder
    .default[IO]
    .withHost(ipv4"0.0.0.0")
    .withPort(port"8080")
    .withHttpApp(httpApp.orNotFound)
    .build
    .allocated
    .unsafeRunSync()

  override def close(): Unit =
    println("AutoCloseW is closing...")
    shutdown.unsafeRunSync()
```
The above is a tiny mill build which does the (slightly pathological) duty of starting an http4s server.
You should be able to run it with `mill -w autoclose.serve`, and for me, it pops up at localhost:8080.
When I press ctrl-c, mill terminates, but the http4s server does not; hence, I claim, it is not being auto-closed.
@Quafadas please minimize it further
Hmmm, I'm not sure how yet, as I guess I would need to eject from the heavy-duty cats stuff. That will take longer.
Take your time!
I made an AI write this:
```scala
//| mill-version: 1.0.3
//| mill-jvm-version: 21

import mill._, scalalib._
import java.util.concurrent.atomic.AtomicBoolean
import scala.util.Try

object autoclose extends Module:
  def serve = Task.Worker:
    new AutoCloseW{}

class AutoCloseW extends AutoCloseable:
  private val running = new AtomicBoolean(true)
  println("starting")

  // The socket lives on the worker rather than inside the thread so that
  // close() can release it directly: Thread.interrupt() alone does not
  // unblock a thread parked in ServerSocket.accept().
  private val serverSocket = new java.net.ServerSocket(8080)

  // Simple background thread that just keeps the worker alive
  private val thread = new Thread(new Runnable {
    def run(): Unit = {
      while (running.get()) {
        Try {
          val client = serverSocket.accept()
          val out = client.getOutputStream
          val response = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nHello World!\r\n"
          out.write(response.getBytes())
          out.flush()
          client.close()
        }
      }
    }
  })
  thread.start()

  override def close(): Unit = {
    running.set(false)
    serverSocket.close() // unblocks the accept() above
    Try(thread.join(1000))
  }
```
Which, I think, should do the right thing, but I'm not familiar with these APIs. If it is wrong, then I apologise.
For me, this command `mill -w autoclose.serve` starts the server,
and running `curl http://localhost:8080` in another terminal shows it's running.
The same curl will still succeed (for me) after destroying mill, which I believe to be a simpler reproduction of the same problem.
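One way to make the (non-)closing directly observable, a diagnostic sketch of my own rather than anything from the thread: have `close()` leave a marker file on disk, since `println` output from the daemon may not reach the client terminal. The file name here is made up for illustration.

```scala
// Diagnostic sketch (hypothetical): if Mill ever invokes close() on the
// worker, a timestamped marker file appears next to the build. After
// destroying mill, the file's absence shows close() never ran.
class AutoCloseW extends AutoCloseable:
  override def close(): Unit =
    java.nio.file.Files.writeString(
      java.nio.file.Path.of("worker-closed.txt"),
      java.time.Instant.now().toString
    )
```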
AutoCloseable workers should close before Mill needs to recreate them. I might be wrong, but I think we currently don't close all workers explicitly on Mill daemon stop. We typically keep them running as long as possible (in the daemon), but assume that termination of the JVM frees up all bound resources.
I'm not sure I follow 100%, tbh. Would that make this the expected behaviour? I had thought this would be the "right" way to try to clean up something like an http server, but I'm happy to try another way too.
If I'm not wrong, it is the expected behavior right now. We need to discuss whether we also want to guarantee that workers get closed when Mill shuts down. Of course, it would be a nice property, allowing use cases like yours, but it needs a bit more effort. And if closing a worker involves waiting (like in your example) and we have a lot of them, gracefully closing a Mill daemon and all its workers could take significantly more time.
That makes sense. So, if you were trying to have Mill host an http server, is this the route you would attempt? Or is it simply that this sort of integration isn't actually a great idea? (<-- I have long experience with not-great ideas, so no prejudice on the answer!)
We should at least try to close all workers at Mill shutdown and evaluate it. If that happened, hosting an http server from Mill in a worker might be a use case. Shutting down a Docker container might be another.
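Until something like that exists, one possible workaround for the bound-port case specifically, sketched here as an assumption rather than anything Mill guarantees: pair the worker's `close()` with a JVM shutdown hook, so the socket is released when the daemon JVM itself terminates even if `close()` is never called.

```scala
import java.net.ServerSocket

// Hypothetical workaround: register a JVM shutdown hook alongside close().
// If Mill never calls close(), the hook still frees the port when the
// daemon JVM exits.
class AutoCloseW extends AutoCloseable:
  private val serverSocket = new ServerSocket(8080)
  private val hook = new Thread(() => serverSocket.close())
  Runtime.getRuntime.addShutdownHook(hook)

  override def close(): Unit =
    // If close() does run, deregister the hook so it doesn't fire twice.
    // (removeShutdownHook throws if shutdown is already in progress,
    // hence the Try.)
    scala.util.Try(Runtime.getRuntime.removeShutdownHook(hook))
    serverSocket.close()
```

Note this only helps once the daemon JVM actually exits; while the daemon stays alive, the port remains bound either way.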
Okay, so there seems to be a case for leaving this open. I realised that my actual use case isn't transparently set out.
My "use case" was that I was trying to write a doc-site generator, and the http server would have a refresh triggered once mill has generated the site.
It looks like right now we don't close workers on termination, and I guess we probably should. Having the close logic possibly take arbitrarily long is certainly a problem, but we could maybe work around that with UX: if someone tries to run `./mill something` while the previous process is shutting down, we could block on it and print to stderr "waiting for previous mill process
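That UX could look something like the following client-side sketch; everything here (the marker-file path, the exact message, the polling loop) is invented for illustration and is not how Mill's client/daemon handshake actually works:

```scala
import java.nio.file.{Files, Path}

// Hypothetical: before launching a new daemon, block while the previous
// one's (made-up) shutdown marker file still exists, telling the user why.
def awaitPreviousShutdown(marker: Path, timeoutMs: Long = 30000): Unit =
  val deadline = System.currentTimeMillis() + timeoutMs
  if Files.exists(marker) then
    System.err.println("waiting for previous mill process to shut down")
  while Files.exists(marker) && System.currentTimeMillis() < deadline do
    Thread.sleep(100)
```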
Related
- #5810