Resque::NoQueueError: Jobs must be placed onto a queue.
Anyone get this occasionally?
For us, this started a week or so ago. It crops up every couple of days, and then just keeps happening. We fix it by restarting the scheduler process. We're running on Heroku using RedisToGo (Resque is basically our only use of Redis). The error is not very illuminating.
```
2012-07-14T04:09:00+00:00 app[scheduler.1]: 2012-07-14 04:09:00 queueing VeniceHeartbeat (heartbeat_to_venice)
2012-07-14T04:09:00+00:00 app[scheduler.1]: 2012-07-14 04:09:00 Resque::NoQueueError: Jobs must be placed onto a queue.
```
This job runs all the time, and works well over 99% of the time. Occasionally, though, it just stops working. We have not yet verified if other scheduled jobs also fail to be scheduled.
The config entry for this:
```yaml
heartbeat_to_venice:
  cron: "* * * * *"
  class: VeniceHeartbeat
  args:
  description: "Lets venice know we're alive"
```
Worth noting, the job absolutely specifies the @queue member, and this job works pretty much all the time.
Here's some stuff out of our Gemfile.lock:
```
GIT
  remote: git://github.com/bvandenbos/resque-scheduler
  revision: 9db9e9e2512636d31d9d9f8e0a761b74f3f43408
  specs:
    resque-scheduler (2.0.0.h)
      redis (>= 2.0.1)
      resque (>= 1.20.0)
      rufus-scheduler

resque (1.20.0)
```
Either the VeniceHeartbeat class itself or the YAML file must specify a queue. In the class it's `@queue = :venice`; in the YAML it's a line like `queue: venice`. Confirm you have one of those first.
Confirmed. Like I said, everything works great most of the time.
I am getting this exact same issue.
Ruby 1.9.3-p194 with resque 1.20.0
Works 99% of the time, then this particular job will throw Resque::NoQueueError when the queue is clearly defined.
Just experienced this, and after some digging around, it seems this is caused by the scheduler trying to constantize your class; if it can't, it falls back to using the class name as a string.
So a call to `Resque.queue_from_class(klass)` when `klass` is a String returns a falsy value, which raises the error.
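To illustrate, here is a minimal sketch of that lookup. It assumes the usual convention that Resque reads the `@queue` ivar (or a `queue` class method) off the job class; the simplified helper below is an illustration, not the actual Resque source:

```ruby
# Simplified stand-in for Resque.queue_from_class: read the class-level
# @queue ivar, falling back to a `queue` class method if one exists.
def queue_from_class(klass)
  klass.instance_variable_get(:@queue) ||
    (klass.respond_to?(:queue) and klass.queue)
end

class VeniceHeartbeat
  @queue = :venice
end

queue_from_class(VeniceHeartbeat)   # => :venice
queue_from_class("VeniceHeartbeat") # => false -- a String carries no @queue
                                    #    ivar, so Resque raises NoQueueError
```

This is why a class that failed to constantize (and so arrives as a String) blows up even though the real class defines `@queue` correctly.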
Solution is to load the environment (`rake environment resque:scheduler`), require your class in the setup step, or add the queue to the config file.
I'd recommend the latter option, so as to keep the scheduler process light.
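For the config-file option, here's a sketch of the schedule entry from the original report with an explicit `queue:` key added (the `venice` queue name is an assumption based on the `@queue = :venice` convention discussed in this thread):

```yaml
heartbeat_to_venice:
  cron: "* * * * *"
  class: VeniceHeartbeat
  queue: venice  # explicit queue; the scheduler no longer needs to resolve the class
  args:
  description: "Lets venice know we're alive"
```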
@ivanyv Not sure that accounts for the symptoms. Why would that start failing after a while and not right away?
Beats me, maybe something to do with load order or unloading of modules... just shooting in the dark here.
For me it was caused by what I said in my previous comment.
I had this problem, and it ended up that I was doing queue = :foo instead of @queue = :foo in my task class.
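A quick sketch of the difference (class names here are made up for illustration):

```ruby
# GoodJob sets a class-level instance variable, which Resque can read.
class GoodJob
  @queue = :foo
end

# BadJob only creates a local variable inside the class body; it is
# discarded as soon as the class definition finishes.
class BadJob
  queue = :foo
end

GoodJob.instance_variable_get(:@queue) # => :foo
BadJob.instance_variable_get(:@queue)  # => nil, so Resque raises NoQueueError
```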
Same issue here, I have to specify queue: background in the schedule.yml file.
I have @queue = :background in my job file.
This one got me also. Can anyone provide an example of their lib/tasks/resque.rake and config/schedule.yml file? For some reason I'm not seeing an example of what those should look like.
One theory on the underlying cause: we use two different sets of Resque workers that process different queues. Maybe it's a load-order issue where the queue names are dynamically loaded, and a retry happens on one of them before an instance of the second worker type has booted. Not sure if I'm expressing it correctly.
It seems like the `require 'jobs'` shown in the docs would solve this, but for some reason I'm unable to get it to work.
```
ϟ ~/code/project$ rake resque:work
rake aborted!
cannot load such file -- jobs
/Users/me/code/project/lib/tasks/resque.rake:9:in `block in <top (required)>'
Tasks: TOP => resque:work => resque:preload => resque:setup
(See full trace by running task with --trace)
ϟ ~/code/project$
```
@barmstrong I have an example of a YAML entry in my problem description. I load them by stage of my app (dev/test/prod). Here's my `config/initializers/resque.rb`:
```ruby
Resque.redis = REDIS
Resque.schedule = YAML.load_file("#{Rails.root}/config/resque_schedule.yml")
sched = YAML.load_file("#{Rails.root}/config/resque_env_specific_schedule.yml")[STAGE]
Resque.schedule.merge! sched unless sched.nil?
```
Here's a typical entry from resque_schedule.yml. The file is just a bunch of these entries:

```yaml
autoscale:
  cron: "*/5 * * * *"
  class: Scaler
  args:
  description: "Autoscale workers"
  queue: core
```
And from resque_env_specific_schedule:

```yaml
production:
  beanstalk_metrics_reporter:
    cron: "2-59/4 * * * *"
    class: BeanstalkMetricsReporter
    args:
    description: "Reports on beanstalk metrics"
    queue: core
```
Thanks for the help! I can see a queue entry there. In my case I don't have any jobs that use that cron style. I am just trying to avoid the Resque::NoQueueError error. So not sure if it makes sense for me to create a resque_schedule.yml file and fill it with my queues (and fake cron entries?) to avoid the error.
Or perhaps there is an easier way (such as loading the jobs folder or hardcoding the queue names in somewhere).
My Procfile looks like this:

```
critical_worker: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=10 NEWRELIC_ENABLE=false bundle exec rake environment resque:work QUEUE=critical
worker: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=10 NEWRELIC_ENABLE=false bundle exec rake environment resque:work QUEUE=high,low
```
I think the keyword you want there is QUEUES not QUEUE. Maybe they both work?
Here's an example of mine:

```
core_worker: env NEW_RELIC_DISPATCHER=resque TERM_CHILD=1 RESQUE_TERM_TIMEOUT=6 QUEUES=core bundle exec rake resque:work
```
So, I've only had the no queue problem with scheduler (we're in the resque-scheduler project). To my recollection, I've never had the problem outside of scheduler.
Assuming you're on Heroku, you should set the RESQUE_TERM_TIMEOUT to 6. By trial and error I've determined anything more than that and you consistently get killed by Heroku instead of exiting inside Resque. I have a ticket open with Heroku on it, and reported the issue here: https://github.com/resque/resque/issues/1010
Ok interesting, I will try it with RESQUE_TERM_TIMEOUT=6 and see if that does the trick.
I think the QUEUE=high/low is the right syntax:
https://github.com/resque/resque/blob/1-x-stable/README.markdown#workers
But maybe they both work? Or is QUEUES used by resque-scheduler?
QUEUE accepts a single queue or '*'. QUEUES accepts a comma separated list.
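Illustrative worker invocations (these command lines are sketches of the convention just described, not taken from the thread):

```
QUEUE=critical bundle exec rake environment resque:work   # single queue
QUEUE='*' bundle exec rake environment resque:work        # wildcard: all queues
QUEUES=high,low bundle exec rake environment resque:work  # comma-separated list
```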
Ok I switched to this which uses QUEUES and let it run a few days:

```
critical_worker: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=6 NEWRELIC_ENABLE=false QUEUES=critical bundle exec rake environment resque:work
worker: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=6 NEWRELIC_ENABLE=false QUEUES=high,low bundle exec rake environment resque:work
```
Unfortunately it looks like I am still getting Resque::NoQueueError periodically. I see some on jobs that are processed by both worker and critical_worker.
Still not sure the underlying cause exactly, but is there any other way to let each process know about all queues?
So, fixing this for me was including the queue in the schedule.yml. Since you're not using cron style and (I'm guessing) just the delayed job style, maybe you should try using enqueue_at_with_queue or other method that lets you include the queue name. Kind of messy and redundant with what's in your workers, but I think that's essentially what we do when we include the queue name in the schedule.yml.
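For reference, a sketch of the `enqueue_at_with_queue` call mentioned above; `MyJob` and its argument are made up for illustration, and this assumes a working resque-scheduler 2.x setup:

```ruby
require 'resque_scheduler' # resque-scheduler 2.x require path

# Name the queue explicitly so Resque never has to derive it from the class.
Resque.enqueue_at_with_queue(:critical, Time.now + 300, MyJob, 'some-arg')
```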
Pretty frustrating. I am debating switching to Sidekiq. Not sure if anyone has had success with it? It may have its own quirks.
@barmstrong sorry for the huge delay. This still plaguing you, or did the Sidekiq switch already happen?
We are having this issue as well, which means that some of our scheduled jobs fail to run at times.
We're using these resque-related gems:
```
resque (1.25.1)
  mono_logger (~> 1.0)
  multi_json (~> 1.0)
  redis-namespace (~> 1.2)
  sinatra (>= 0.9.2)
  vegas (~> 0.1.2)
resque-cleaner (0.2.11)
  resque (~> 1.0)
resque-lock-timeout (0.4.1)
  resque (>= 1.8.0)
resque-scheduler (2.3.1)
  redis (>= 3.0.0)
  resque (~> 1.25)
  rufus-scheduler (~> 2.0)
resque-status (0.4.1)
  resque (~> 1.19)
resque_mailer (2.2.6)
  actionmailer (>= 3.0)
```
Here is an example from our schedule:
```yaml
# Run at 3 am once per day
complete_orders:
  cron: 0 3 * * *
  class: OrderCompleter
  args:
  description: Sets invoice numbers, voids credit card authorizations of voidable orders, and sends invoices.
```
And here is the related worker:
```ruby
class OrderCompleter
  extend MetricableJob
  extend Resque::Plugins::LockTimeout

  @lock_timeout = 1.hour
  @queue = :order_high

  def self.perform
    total_completable = 0
    successfully_completed = 0
    Order.authorized.find_each do |order|
      if order.completable?
        total_completable += 1
        successfully_completed += 1 if order.complete!
      end
    end
  end
end
```
@dipth You're awesome! Is it safe to assume you've already tried the workarounds listed above? Does Resque::NoQueueError eventually happen if you specify the queue with a class method instead of an ivar?
On a related note, I've been wanting to crank up the verbosity of debug logging to help with bug reports. Yeah... someone should do that... :smiley_cat:
@meatballhat we're already depending on our environment in the resque task:
```ruby
require 'resque/tasks'
require 'resque_scheduler/tasks'

task "resque:setup" => :environment
```
@dipth I don't follow. :confused:
I cleared the milestone. If activity picks back up and we can find a way to reproduce (and test for) the problem, then I'm happy to target a release.
Just started to get this bug, too:
First, I had the queue name set only in the Class file (and it was working for a long time):
```ruby
class MyThing
  @queue = :my_thing
end
```
Now, I had to add the queue name inside my YML schedule file as well, to make it work again:
```yaml
MyThing:
  every: 5m
  queue: my_thing # <= added this to make it work again
  description: 'This job does my thing'
```
Using:
- resque (1.25.2)
- resque-scheduler (2.5.5)
- resque-pool (0.3.0)
@john-999 thanks for the report! The common element seems to be the use of the @queue ivar. Any chance you'd be willing to switch to the class method version to see if we can isolate the issue? For reference:
```ruby
class MyThing
  def self.queue
    :my_thing
  end
end
```
I just tried this:

- deleted `queue: my_thing` from the YML file
- deleted `@queue = :my_thing` from the Class file
- added to the Class file:

```ruby
def self.queue
  :my_thing
end
```
Unfortunately, with this, the error reappeared (but switching back to the previous setup made it work again).
Ugh. Sorry @john-999, this is a slippery one.
@john-999 can you try to just specify the queue in the YML file and let us know if that works or not please?
@meatballhat this looks like the scheduler is not able to find the job class definition when queueing it. No idea at the moment.
@bugant that would make sense. My guess now is that the scheduler process is missing the autoload magic, although I personally believe this is better in most cases so that your scheduler process isn't eating as much memory as your server or worker processes. Maybe the "fix" is to document it better (???)
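One possible documented workaround, sketched below: eagerly require the job classes in the scheduler's setup task so constantize succeeds without booting the full app. The `app/jobs` path is an assumption about a typical layout, and re-opening `resque:scheduler_setup` (the task resque-scheduler runs before starting) appends an action rather than replacing it:

```ruby
# lib/tasks/resque.rake (sketch; adjust paths for your app)
require 'resque/tasks'
require 'resque_scheduler/tasks'

# Load job class files in the scheduler process so constantize finds them,
# without loading the whole Rails environment.
task 'resque:scheduler_setup' do
  Dir[File.expand_path('app/jobs/*.rb', Dir.pwd)].sort.each { |f| require f }
end
```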