
Fix "too many values to unpack" while raising delayed jobs to waiting state#123

Closed
mattbornski wants to merge 1 commit into bee-queue:master from mattbornski:master

Conversation

@mattbornski

{ ReplyError: ERR Error running script (call to f_7c74c5918ff75c88a0db8507b3c0a4ddf34845b3): @user_script:17: user_script:17: too many results to unpack
at parseError (/var/task/node_modules/redis-parser/lib/parser.js:193:12)
at parseType (/var/task/node_modules/redis-parser/lib/parser.js:303:14)
command: 'EVALSHA',
args:
[ '7c74c5918ff75c88a0db8507b3c0a4ddf34845b3',
2,
'bq:queue_name:delayed',
'bq:queue_name:waiting',
1528136012927,
1000 ],
code: 'ERR' }

@coveralls

coveralls commented Jun 4, 2018

Coverage Status

Coverage remained the same at 100.0% when pulling d9be228 on mattbornski:master into c92207e on bee-queue:master.


Member

@skeggse skeggse left a comment


Hey, thanks! This change looks good to me; can you clarify per my comment?

local raising = redis.call("zrangebyscore", KEYS[1], 0, ARGV[1])
local numRaising = #raising
local offset = 0
-- There is a default 1024 element restriction
Member


is this a redis lpush limitation or a lua unpack limitation? can you comment here to clarify?

Member


nit: s/There/there

Author

@mattbornski mattbornski Jul 10, 2018


I believed this limit was derived from the Lua config parameter DEFAULT_STACK_SIZE (http://www.lua.org/source/4.0.1/llimits.h.html#DEFAULT_STACK_SIZE). However, upon deeper inspection, Redis uses Lua 5.1 and I can't find an equivalent parameter there, so it is possible this 1024 assertion is spurious; the closest equivalent I can find in Lua 5.1 is LUAI_MAXCSTACK, which is set to 8000 in Redis (http://download.redis.io/redis-stable/deps/lua/src/luaconf.h).
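For context, the batching approach under discussion could be sketched roughly as follows. This is a sketch, not the exact PR diff: the batch size of 1000 matches the last ARGV shown in the error log above, and the trailing zremrangebyscore call is an assumption about how the script clears the moved jobs from the delayed zset.

```lua
-- KEYS[1] = delayed zset (bq:queue_name:delayed)
-- KEYS[2] = waiting list (bq:queue_name:waiting)
-- ARGV[1] = current time in ms, ARGV[2] = batch size (e.g. 1000)
local raising = redis.call("zrangebyscore", KEYS[1], 0, ARGV[1])
local numRaising = #raising
local offset = 0
-- unpack can only push a bounded number of values onto the Lua stack
-- (LUAI_MAXCSTACK, 8000 in Redis's bundled Lua 5.1), so move the jobs
-- in fixed-size batches rather than unpacking the whole result at once
while offset < numRaising do
  local batch = {}
  for i = offset + 1, math.min(offset + tonumber(ARGV[2]), numRaising) do
    batch[#batch + 1] = raising[i]
  end
  redis.call("lpush", KEYS[2], unpack(batch))
  offset = offset + #batch
end
-- assumed cleanup step: drop the raised jobs from the delayed zset
if numRaising > 0 then
  redis.call("zremrangebyscore", KEYS[1], 0, ARGV[1])
end
```

Running this inside Redis (via EVAL/EVALSHA) avoids the "too many results to unpack" error because no single unpack ever sees more than the batch size of elements.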

Member

@skeggse skeggse left a comment


This seems like it may or may not be desired behavior, depending on the individual use-case. If there are many, many jobs that have expired and are ready to be reactivated, this could cause a substantial pause in the redis server. Should this be configurable? If we just remove the loop, we'd get a reasonable middle-ground where a subset of the jobs could get activated, and the remainder would get activated on the next call to raiseDelayedJobs. That risks slow growth/bloat of the zset, though. What do you think, if you're still interested in this PR?
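The loop-free middle ground described here might look something like the sketch below (again an assumption, not code from the PR): cap each invocation at one batch via ZRANGEBYSCORE's LIMIT clause and leave the remainder in the zset for the next raiseDelayedJobs call.

```lua
-- KEYS[1] = delayed zset, KEYS[2] = waiting list
-- ARGV[1] = current time in ms, ARGV[2] = max jobs to move per call
-- move at most ARGV[2] jobs; the remainder stays in the zset and is
-- picked up by the next invocation of this script
local batch = redis.call("zrangebyscore", KEYS[1], 0, ARGV[1],
                         "limit", 0, tonumber(ARGV[2]))
if #batch > 0 then
  redis.call("lpush", KEYS[2], unpack(batch))
  redis.call("zrem", KEYS[1], unpack(batch))
end
```

This bounds the work (and the blocking time on the Redis server) per call, at the cost of the zset draining one batch per tick when a large backlog has accumulated.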

@mattbornski
Author

@skeggse I haven't been using bee-queue for quite a while, but I would point out that for people who encounter the error I did, either the PR as-is or your proposed change would be a dramatic improvement: once the backlog reached a critical size, no jobs were being processed at all due to the error.

@skeggse
Member

skeggse commented Apr 8, 2020

Closing in favor of #192.
