
-OOM command not allowed when used memory > 'maxmemory'.

Open sschizas opened this issue 8 years ago • 8 comments

After upgrading to bull version 2.0 I started getting this error:

1|wover | ReplyError: ERR Error running script (call to f_bed0f540aaab40cb5b63c076c00c75504e27424b): @user_script:1: @user_script: 1: -OOM command not allowed when used memory > 'maxmemory'.
1|wover |     at JavascriptReplyParser.returnError (/home/apiUser/wover-backend/node_modules/ioredis/lib/redis/parser.js:25:25)
1|wover |     at JavascriptReplyParser.run (/home/apiUser/wover-backend/node_modules/ioredis/node_modules/redis-parser/lib/javascript.js:135:18)
1|wover |     at JavascriptReplyParser.execute (/home/apiUser/wover-backend/node_modules/ioredis/node_modules/redis-parser/lib/javascript.js:112:10)
1|wover |     at Socket.<anonymous> (/home/apiUser/wover-backend/node_modules/ioredis/lib/redis/event_handler.js:107:22)
1|wover |     at emitOne (events.js:96:13)
1|wover |     at Socket.emit (events.js:188:7)
1|wover |     at readableAddChunk (_stream_readable.js:176:18)
1|wover |     at Socket.Readable.push (_stream_readable.js:134:10)
1|wover |     at TCP.onread (net.js:548:20)

sschizas avatar Dec 19 '16 14:12 sschizas

I have no idea what this could be, but it seems like a limitation of Redis. We should then document that maxmemory cannot be used with bull.

manast avatar Dec 22 '16 08:12 manast

You may need to check out the removeOnComplete option. See the documentation for more info.

If you implement your own .on('failed', ...) handler, then removeOnFail should also be set to 1.

Otherwise Redis keeps filling up with all the old jobs in the completed state.
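For reference, here is a minimal sketch of these options (assuming bull's standard Queue API; the queue name, Redis URL and job data are placeholders, not from this thread):

const Queue = require('bull');

const queue = new Queue('video-transcode', 'redis://127.0.0.1:6379');

// Cap how many finished jobs are kept so Redis does not fill up with old job data.
queue.add(
  { videoId: 42 },
  {
    removeOnComplete: true, // delete the job's data as soon as it completes
    removeOnFail: 1000      // keep at most the last 1000 failed jobs for inspection
  }
);

// If you attach your own 'failed' handler, keep removeOnFail set as well,
// otherwise failed jobs keep accumulating in Redis.
queue.on('failed', (job, err) => {
  console.error(`job ${job.id} failed:`, err.message);
});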

jsarenik avatar May 18 '17 14:05 jsarenik

@manast The default maxmemory-policy is noeviction which just returns an error on write operations. That may be what @n3trino sees.

@n3trino could you please try to run redis with maxmemory-policy allkeys-lru? Of course it is not the solution, I just want to check if that error is caused by errors from redis because the memory is full and noeviction is set. Thank you.
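To check and (for this experiment only) change the policy from Node, a small sketch using ioredis, the client bull relies on, could look like the following. The host and port are placeholders, and note that allkeys-lru may evict bull's own keys, so this is a diagnostic rather than a fix.

const Redis = require('ioredis');

async function checkEvictionPolicy() {
  const redis = new Redis('redis://127.0.0.1:6379');

  // CONFIG GET returns a flat [name, value] array.
  const [, policy] = await redis.config('GET', 'maxmemory-policy');
  console.log('maxmemory-policy is', policy); // 'noeviction' by default

  // For the experiment only: let Redis evict keys instead of returning -OOM errors.
  await redis.config('SET', 'maxmemory-policy', 'allkeys-lru');

  await redis.quit();
}

checkEvictionPolicy().catch(console.error);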

jsarenik avatar May 18 '17 14:05 jsarenik

Our production ElastiCache has run out of memory due to running with Bull and Redis defaults. It sounds like we either need to remove completed and failed jobs or change our configuration to use a non-default maxmemory-policy (probably allkeys-lru).

The docs should really be improved regarding all of this - seems like an important bit of prerequisite/setup information for Bull.

rinogo avatar Apr 01 '22 01:04 rinogo

I am also facing a similar issue. It seems the bull configuration below does not have any effect, and keys are still growing in AWS Redis:

bull: {
  defaultQueueOptions: {
    defaultJobOptions: {
      removeOnComplete: 1000,
      removeOnFail: 1000
    }
  }
}

Can anyone please let me know how to resolve this issue?

lakshay2711 avatar Mar 22 '23 13:03 lakshay2711

@lakshay2711 the size will grow until you reach 1000 completed and 1000 failed jobs. After that it should not increase other than by the jobs you have in wait and delayed statuses.
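For reference, a minimal sketch of how these limits can be set as queue-level defaults directly on a bull Queue (the queue name, Redis URL and job data are placeholders; the nested bull/defaultQueueOptions shape shown above looks like application-level configuration rather than bull's own API):

const Queue = require('bull');

const queue = new Queue('emails', 'redis://127.0.0.1:6379', {
  defaultJobOptions: {
    removeOnComplete: 1000, // retain at most the last 1000 completed jobs
    removeOnFail: 1000      // retain at most the last 1000 failed jobs
  }
});

// Jobs added without explicit options inherit the defaults above.
queue.add({ to: 'user@example.com' });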

manast avatar Mar 22 '23 13:03 manast

@manast Sorry, I am new to this. Can you please let me know when exactly a new key is created in AWS Redis? Is it when we add a new job to the queue? If yes, would each and every request add a key to Redis? And is the cleanup of keys performed only via the completed and failed job configurations?

lakshay2711 avatar Mar 29 '23 11:03 lakshay2711

A job consumes space, and normally you should have sensible settings for removeOnComplete and removeOnFail.

manast avatar Mar 29 '23 12:03 manast