machinery
SIGINT is ignored if sent before a worker connects to the broker
I'm using Redis as my broker and result backend. If I send SIGINT, I see a log message saying Signal received: interrupt. Quitting the worker, but the process never exits and keeps going as if the signal was never sent.
@michael-younkin Is this still an issue? Have you tried with the latest code? I have committed a fix for one interrupt-signal issue, so perhaps this is fixed already. If not, there is another bug.
I have a similar issue. When I start my service without Redis running, I can't stop it anymore.
Logs:
WARNING: 2017/08/04 17:06:23 worker.go:51 Start consuming error: dial tcp [::1]:6379: getsockopt: connection refused
WARNING: 2017/08/04 17:06:23 retry.go:20 Retrying in 1 seconds
WARNING: 2017/08/04 17:06:24 worker.go:51 Start consuming error: redigo: get on closed pool
WARNING: 2017/08/04 17:06:24 retry.go:20 Retrying in 1 seconds
^CWARNING: 2017/08/04 17:06:24 worker.go:61 Signal received: interrupt. Quitting the worker
WARNING: 2017/08/04 17:06:25 worker.go:51 Start consuming error: redigo: get on closed pool
WARNING: 2017/08/04 17:06:25 retry.go:20 Retrying in 2 seconds
^C^CWARNING: 2017/08/04 17:06:27 worker.go:51 Start consuming error: redigo: get on closed pool
WARNING: 2017/08/04 17:06:27 retry.go:20 Retrying in 3 seconds
Edit: going to use the latest code and check again...
Edit2: seems like I'm on the newest commit:
- name: github.com/RichardKnop/machinery
version: 46eeb95e6208e539826cc64e2f4c51e700c7cb8d
I see. Will take a look later.
Any thoughts about how it can be fixed?
In broker.go, I added the following piece of code to the stopConsuming() function, and I stopped seeing the issue:
// Stop the retry closure earlier
select {
case b.retryStopChan <- 1:
	log.WARNING.Print("Stopping retry closure.")
default:
}
I'm sure this isn't the correct fix, but for now the worker shuts down much more reliably.
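The workaround above is a non-blocking send into a stop channel: the default case keeps the sender from deadlocking if no retry loop is listening. The pattern can be sketched in plain Go (runWorker and signalStop are illustrative stand-ins, not machinery's actual internals):

```go
package main

import (
	"fmt"
	"time"
)

// runWorker simulates machinery's retry loop: it keeps "retrying"
// until it receives a value on stop, then reports on done.
func runWorker(stop <-chan int, done chan<- string) {
	for {
		select {
		case <-stop:
			done <- "Stopping retry closure."
			return
		case <-time.After(10 * time.Millisecond):
			// one simulated retry attempt
		}
	}
}

// signalStop performs the non-blocking send from the workaround:
// it never blocks, even if no retry loop is listening.
func signalStop(stop chan<- int) bool {
	select {
	case stop <- 1:
		return true
	default:
		return false
	}
}

func main() {
	stop := make(chan int, 1) // buffered, so the stop request cannot be lost
	done := make(chan string)
	go runWorker(stop, done)

	if signalStop(stop) {
		fmt.Println(<-done)
	}
	fmt.Println("worker stopped")
}
```

Buffering the stop channel with capacity 1 makes the handshake deterministic: the send succeeds immediately, and the loop picks it up on its next select.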
Is there any update? I still have this issue. I run the workers in separate goroutines in a Gin web application:
for i := 0; i < n; i++ {
	worker := server.NewWorker(fmt.Sprintf("worker_%v", i), 10)
	go func() {
		worker.Launch()
	}()
	// Just for having an instance of the workers in a slice
	workers = append(workers, *worker)
}
and after that I simply run my Gin server:
r.Run(":9080")
Server Log:
[GIN-debug] Listening and serving HTTP on :9080
INFO: 2019/02/18 00:46:48 worker.go:46 Launching a worker with the following settings:
INFO: 2019/02/18 00:46:48 worker.go:47 - Broker: amqp://guest:guest@localhost:5672/
INFO: 2019/02/18 00:46:48 worker.go:49 - DefaultQueue: machinery_tasks
INFO: 2019/02/18 00:46:48 worker.go:53 - ResultBackend: redis://localhost:6379
INFO: 2019/02/18 00:46:48 worker.go:55 - AMQP: machinery_exchange
INFO: 2019/02/18 00:46:48 worker.go:56 - Exchange: machinery_exchange
INFO: 2019/02/18 00:46:48 worker.go:57 - ExchangeType: direct
INFO: 2019/02/18 00:46:48 worker.go:58 - BindingKey: machinery_task
INFO: 2019/02/18 00:46:48 worker.go:59 - PrefetchCount: 0
INFO: 2019/02/18 00:46:48 amqp.go:94 [*] Waiting for messages. To exit press CTRL+C
After sending a SIGINT signal (Ctrl+C), I get this log:
^CWARNING: 2019/02/18 00:46:56 worker.go:89 Signal received: interrupt
WARNING: 2019/02/18 00:46:56 worker.go:94 Waiting for running tasks to finish before shutting down
WARNING: 2019/02/18 00:46:56 broker.go:101 Stop channel
^CWARNING: 2019/02/18 00:46:58 worker.go:89 Signal received: interrupt
^C^C^C^C^C^C^C^C^C^C^C^C^C
and it keeps running unless I send a kill signal like below:
lsof -n -i4TCP:9080 && kill -9 {{PID}}
Is there any update? I still have this issue too.
cnf := &config.Config{
	NoUnixSignals: true,
}
I'm not using machinery anymore, so if someone else wants to take over or close this issue please go ahead. I'm going to unsubscribe. Thanks!