Fix "too many values to unpack" while raising delayed jobs to waiting state #123
mattbornski wants to merge 1 commit into bee-queue:master
Conversation
skeggse
left a comment
Hey, thanks! This change looks good to me; can you clarify per my comment?
| local raising = redis.call("zrangebyscore", KEYS[1], 0, ARGV[1])
| local numRaising = #raising
| local offset = 0
| -- There is a default 1024 element restriction
Is this a Redis lpush limitation or a Lua unpack limitation? Can you add a comment here to clarify?
I believed this limit was derived from the Lua config parameter DEFAULT_STACK_SIZE (http://www.lua.org/source/4.0.1/llimits.h.html#DEFAULT_STACK_SIZE); however, upon deeper inspection, Redis uses Lua 5.1, and I can't find an equivalent parameter there. It is possible this 1024 assertion is spurious. The closest equivalent I can find in Lua 5.1 is LUAI_MAXCSTACK, and it's set to 8000 in Redis (http://download.redis.io/redis-stable/deps/lua/src/luaconf.h).
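For context, the error occurs because `unpack(raising)` turns every element of the result set into a separate argument on Lua's C stack, so once the zset grows past the stack limit the whole script fails. The usual workaround, whatever the exact limit, is to push in fixed-size batches. Here is a minimal sketch of the batching arithmetic in plain JavaScript; the `chunk` helper and the 1000-element batch size are illustrative, not bee-queue's actual code:

```javascript
// Split a large result set into fixed-size chunks so that each
// server-side unpack()/rpush call stays well under Lua's C-stack
// limit (LUAI_MAXCSTACK, 8000 in the Lua 5.1 bundled with Redis).
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// e.g. 2500 delayed job ids with a conservative batch size of 1000
const raising = Array.from({ length: 2500 }, (_, i) => `job:${i}`);
const batches = chunk(raising, 1000);
console.log(batches.length);    // 3 batches: 1000, 1000, 500
console.log(batches[2].length); // 500
```

Each batch would then be unpacked and pushed separately inside the Lua script (the `offset` variable in the diff above serves the same purpose).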
skeggse
left a comment
This seems like it may or may not be desired behavior, depending on the individual use-case. If there are many, many jobs that have expired and are ready to be reactivated, this could cause a substantial pause in the redis server. Should this be configurable? If we just remove the loop, we'd get a reasonable middle-ground where a subset of the jobs could get activated, and the remainder would get activated on the next call to raiseDelayedJobs. That risks slow growth/bloat of the zset, though. What do you think, if you're still interested in this PR?
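To make the proposed middle-ground concrete, here is a hypothetical in-memory simulation (not bee-queue's actual implementation; all names are illustrative) of removing the loop: each call moves at most `limit` due jobs from the sorted set into the waiting list, and any remainder is picked up on the next call to raiseDelayedJobs:

```javascript
// Simulate a capped raise: `delayed` stands in for the zset (sorted by
// score ascending), `waiting` for the waiting list. At most `limit`
// jobs whose score is <= `now` are moved per call; the rest stay in
// `delayed` until the next invocation.
function raiseDelayedJobs(delayed, waiting, now, limit) {
  let moved = 0;
  while (moved < limit && delayed.length > 0 && delayed[0].score <= now) {
    waiting.push(delayed.shift().id);
    moved += 1;
  }
  return moved;
}

// 2500 jobs all past their delay; a cap of 1000 drains them in 3 calls.
const delayed = Array.from({ length: 2500 }, (_, i) => ({ id: i, score: 0 }));
const waiting = [];
const perCall = [
  raiseDelayedJobs(delayed, waiting, 1, 1000), // 1000
  raiseDelayedJobs(delayed, waiting, 1, 1000), // 1000
  raiseDelayedJobs(delayed, waiting, 1, 1000), // 500
];
console.log(perCall, waiting.length); // [ 1000, 1000, 500 ] 2500
```

The trade-off is exactly as described: no single call can stall the Redis server, but a sustained backlog drains only one cap's worth of jobs per timer tick, so the zset can grow if jobs expire faster than the cap allows.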
@skeggse I haven't been using bee-queue for quite a while; I would point out that for people who encounter the error I did, either the PR as-is or your proposed change would be a dramatic improvement, because no jobs were being processed due to the error once a critical size was reached.

Closing in favor of #192.
{ ReplyError: ERR Error running script (call to f_7c74c5918ff75c88a0db8507b3c0a4ddf34845b3): @user_script:17: user_script:17: too many results to unpack
at parseError (/var/task/node_modules/redis-parser/lib/parser.js:193:12)
at parseType (/var/task/node_modules/redis-parser/lib/parser.js:303:14)
command: 'EVALSHA',
args:
[ '7c74c5918ff75c88a0db8507b3c0a4ddf34845b3',
2,
'bq:queue_name:delayed',
'bq:queue_name:waiting',
1528136012927,
1000 ],
code: 'ERR' }