Implement env["puma.mark_as_io_bound"] #3816
Conversation
nateberkopec
left a comment
I feel like I'm still chewing on the interface.
`env["puma.deferred_request"].call`: right interface or not?
I'm mostly concerned about what happens when someone screws this up and puts CPU-bound work into the deferred pool, or if that work unexpectedly becomes CPU-bound. In what ways are they worse off than they were before? How can we recover from this condition, if at all?
This feature trades a risk of increased latency for more throughput. Is that a good tradeoff? Should we just say "f off, put these workloads on different machines and don't try to mix them?" Is the project fundamentally doomed?
Looks good to me. I'm just not 100% certain about the terminology?
Is this really a concern? I think we're all consenting adults here. People already routinely crank up
I can say I have done this while looking for a feature like this, because we have a huge amount of requests that are waiting due to an API design that was optimized for the user at the cost of server resources.
If I understand this correctly, in a Rails controller if you want to defer the request you'd call
Well
That's usually how Rack interfaces work. I do plan to integrate it better with Rails though.
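As a concrete illustration of the Rack-level interface being discussed, here is a minimal sketch. Assumptions (not confirmed in this thread): the env key name comes from this PR's title, and it holds a per-request callable that Puma injects.

```ruby
# Minimal Rack app sketch. The `&.` guard keeps the app working on servers
# (or Puma versions) that don't set the key.
app = lambda do |env|
  env["puma.mark_as_io_bound"]&.call # tell Puma this request is IO/wait bound
  # ...long-running, IO-heavy work (streaming, slow upstream calls) goes here...
  [200, { "content-type" => "text/plain" }, ["ok"]]
end
```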
I figured as much was likely the case, that it would be on frameworks to make it work nicely.
This looks pretty neat/simple to me.
I think that's exactly what's being asked for? #3777 (comment), #3777 (comment)
I think providing this could be seen as being along the lines of this earlier reply of yours @nateberkopec #3034 (comment)
I don't think so, if people are getting value from it. I can see how this can make it harder to reason about Puma, the number of threads, and capacity. "Give me some extra threads, but please don't count them" :). I think I see this as something frameworks can use instead of
nateberkopec
left a comment
I'll remove my blocks on whether or not we should do this at all, but I do think the class and the env key need better names.
If Rails were to take advantage of this, how would Rails detect that it's supported by the running web server?
Like other optional Rack features. Just check if the key is present in the env.
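That detection pattern might look like this (a sketch: only the env key comes from this PR; the helper name is invented for illustration):

```ruby
# Framework-side detection, in the style of other optional Rack features:
# check for the key in the Rack env before relying on it.
def mark_request_as_io_bound(env)
  marker = env["puma.mark_as_io_bound"]
  return false unless marker # server doesn't support it; degrade gracefully

  marker.call
  true
end
```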
Force-pushed cdaf88b to 0174cc9
I started some renaming and added a couple of unit tests. I'm traveling today, so not sure how much more I'll be able to do, but it should already look a bit better.
Force-pushed 0174cc9 to df18bf4
env["puma.mark_as_io_bound"]
joshuay03
left a comment
Very keen for this!
Just some feedback on naming based on previous reviews.
We really should rename @workers in ThreadPool to @processors in a follow-up, it's confusing even in the current state.
Agreed. But I wanted to minimize diff size for review purposes. If you are interested, I can do a PR just for that, and then rebase this one on top.
Force-pushed df18bf4 to e2253b9
@joshuay03 I addressed your feedback.
Force-pushed e2253b9 to 55011b6
Yeah fair, up to you whether you want to PR and rebase or wait and follow up. I'll leave the final thumbs up on this to @nateberkopec
Any news?
So if I'm reading this right, this would basically fix ActionController::Live? I love the API, but it's basically unusable in its current state.
Force-pushed 55011b6 to 0376659
Force-pushed 0376659 to bbea749
@dentarg @nateberkopec anything I can do here 🙏 ?
@byroot update the stats?
I just randomly came here because I wanted to connect this with the datastar-ruby gem that I happened to be looking at, and noticed this code/comment https://github.com/starfederation/datastar-ruby/blob/226deb42fda9d0ac0d2cf99e9d5438718420b786/lib/datastar/dispatcher.rb#L327-L332 Sounds like that gem could potentially make use of this.
Right sorry, I guess I was asking if you had opinions on how to display it or if I should do what I think is best? |
Yes, that code is very similar to what AS::Live does.
Force-pushed bbea749 to 76f98d3
Done!
👋
Force-pushed 76f98d3 to edababf
I rebased the PR again. How can I get this unblocked?
Can you have a look at the latest failures? I'll aim to do a final review and merge this weekend.
Fix: puma#3777

This is only meant as a rough prototype to better illustrate the idea. There are no extra tests to speak of, but the crux of the logic is here.

Workers can now be marked as "IO bound". This state lasts until the end of the current request. At the end of the request, the pool decides whether the "IO bound" thread rejoins the normal population, or is abandoned.

Generally speaking, an "IO bound" thread no longer counts towards the `max_threads` configuration. However, there is a `max_io_threads` configuration, and any "IO bound" thread above that limit automatically counts as a "normal" thread.

For example, imagine a config of `max_threads: 5, max_io_threads: 10`: if there are 3 normal threads and 12 IO threads, then Puma won't spawn any new thread, but the next deferred thread that completes will become "normal" again, so that we then have 4 normal threads and 11 deferred threads. Now if there are 5 normal threads and 2 IO threads, then when the next IO thread completes, it will exit.
Force-pushed edababf to 4734d01
My bad, I didn't fix the merge conflict correctly. Expecting it to go green now.
Can someone comment on what happens when this gets merged? Does this magically make ActionController::Live non-blocking so it can be used with Puma without performance issues?
No. I'll have to make changes on the Rails side first. |
Requested changes have been addressed
joshuay03
left a comment
I have some nits, but I'll follow up separately.
Thank you!
😍
```ruby
# Any IO thread over the limit is counted as a regular thread, hence the above configuration also
# allows for 3 regular threads and 7 IO threads for example.
def max_io_threads(max)
  max = Integer(max)
```
Now wondering if we should allow this to be unbounded. Maybe something to come back to once it's in use out in the wild, i.e. Rails.
I don't think unbounded is ever a good idea. Even something that uses very little CPU still uses some, and it's always better to not accept quite as many concurrent requests as you could have (backpressure) than to accept too many.
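For context, the `max_io_threads` DSL method under review might be used like this in a `config/puma.rb` (a hypothetical sketch; the numbers are taken from the PR description's example, not from a documented default):

```ruby
# Hypothetical config/puma.rb sketch: up to 5 regular threads, plus up to
# 10 threads that may be marked IO-bound without counting against the 5.
# IO-bound threads beyond 10 are counted as regular threads again.
threads 0, 5
max_io_threads 10
```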
@byroot this is AMAZING. Is there an issue open for the Rails changes? Would this be a Rails 8.2 sort of thing? About to start building some chat interfaces, and most everyone outside of Ruby land is standardizing on SSE. Would love to not need falcon.
Kinda, this change was driven by rails/rails#55132 (comment). I still need to find the time to figure out the Rails side of it. Ideally you'd just use
Yes.
I mean, you don't have to wait for Rails support. If you got this feature deployed you can use it directly with
Fix: #3777

Workers can now be marked as "IO bound". This state lasts until the end of the current request.

At the end of the request, the pool decides whether the "IO bound" thread rejoins the normal population, or is abandoned.

Generally speaking, an "IO bound" thread no longer counts towards the `max_threads` configuration. However, there is a `max_io_threads` configuration, and any "IO bound" thread above that limit automatically counts as a "normal" thread.

For example, imagine a config of `max_threads: 5, max_io_threads: 10`: if there are 3 normal threads and 12 IO threads, then Puma won't spawn any new thread, but the next deferred thread that completes will become "normal" again, so that we then have 4 normal threads and 11 deferred threads.

Now if there are 5 normal threads and 2 IO threads, then when the next IO thread completes, it will exit.
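The accounting in that example can be restated as a tiny standalone function (an illustration of the rule as described above, not Puma's actual implementation):

```ruby
# IO-bound threads don't count towards max_threads, but any IO-bound thread
# beyond max_io_threads is counted as a regular thread again.
def effective_counts(normal:, io:, max_io_threads:)
  overflow = [io - max_io_threads, 0].max
  { normal: normal + overflow, io: io - overflow }
end
```

With `max_threads: 5, max_io_threads: 10` and 3 normal / 12 IO threads, the 2 overflow IO threads count as regular, so the pool is effectively at its 5-thread cap and won't spawn more.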