Netty HttpProtocol.finishUpdate should offer channel back to pool after markAsDone #844
Comments
Thanks!
@maxpower-artvandelay I was too hasty in merging this. It causes other issues and makes the build fail. Mainly, it causes too many connections to be opened when keep-alive connections are not offered back to the pool fast enough. Could you elaborate on what you observed, please?
Hey, take a look at the example here: I make a couple of non-blocking calls to a URL and then make a subsequent blocking call using get() when onCompleted is called. There is almost always one request that comes back with a timeout. Wireshark shows that there is a response to the request. The number of requests needed to reproduce this seems to vary by machine; you may need to increase the 'numRequests' count if it doesn't show up.
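For illustration, a minimal sketch of the pattern described above, assuming the AsyncHttpClient 1.x API (com.ning.http.client); the URL, request count, and class name are placeholders, not taken from the original example:

```java
import com.ning.http.client.AsyncCompletionHandler;
import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.Response;

public class ReadTimeoutRepro {
    public static void main(String[] args) throws Exception {
        final AsyncHttpClient client = new AsyncHttpClient();
        final String url = "http://localhost:8080/ping"; // placeholder target URL
        final int numRequests = 10;                       // may need to be raised on some machines

        for (int i = 0; i < numRequests; i++) {
            // Non-blocking request; onCompleted runs on a Netty IO thread.
            client.prepareGet(url).execute(new AsyncCompletionHandler<Response>() {
                @Override
                public Response onCompleted(Response response) throws Exception {
                    // Blocking follow-up call issued from the callback via get().
                    Response followUp = client.prepareGet(url).execute().get();
                    System.out.println("follow-up status: " + followUp.getStatusCode());
                    return response;
                }
            });
        }

        Thread.sleep(10_000); // crude wait so the requests can finish before exit
        client.close();
    }
}
```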
Your test is broken. You're performing blocking calls in the IO thread. As you're using the channel pool and Netty definitely binds a channel to a thread, there's a chance you pick a channel from the pool that's bound to the very same thread you're running on and blocking, hence a deadlock that will only be lifted by a timeout. Either do only non-blocking calls, or hand over request sending to another dedicated thread pool/executor service.
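A hedged sketch of the second option (handing the follow-up request to a dedicated executor so the blocking get() never runs on a Netty IO thread), again assuming the AsyncHttpClient 1.x API; the pool size, URL, and class name are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.ning.http.client.AsyncCompletionHandler;
import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.Response;

public class HandOffExample {
    public static void main(String[] args) throws Exception {
        final AsyncHttpClient client = new AsyncHttpClient();
        // Dedicated pool: blocking calls never tie up a Netty IO thread.
        final ExecutorService blockingPool = Executors.newFixedThreadPool(4);
        final String url = "http://localhost:8080/ping"; // placeholder target URL

        client.prepareGet(url).execute(new AsyncCompletionHandler<Response>() {
            @Override
            public Response onCompleted(Response response) {
                // Hand the follow-up request to the dedicated pool instead of
                // blocking with get() on the IO thread that invoked this callback.
                blockingPool.submit(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            Response followUp = client.prepareGet(url).execute().get();
                            System.out.println("follow-up status: " + followUp.getStatusCode());
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                return response;
            }
        });
    }
}
```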
Aah yes, you are correct. Thanks for the quick reply! |
HttpProtocol.finishUpdate attempts to put the channel back into the pool and then calls markAsDone. This permits a case where the same channel is selected in the callback, which can lead to a "read timeout".
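A hypothetical, heavily simplified sketch of the ordering being discussed; finishUpdate and markAsDone are named after the methods in the report, while offerChannelToPool and the parameter types are illustrative stubs, not the library's actual code:

```java
// Illustrative stubs only; not the async-http-client source.
final class FinishUpdateOrderingSketch {

    void finishUpdateReported(Object future, Object channel) {
        // Reported order: the channel is made selectable again first, so a
        // callback fired by markAsDone can pick up the very same channel
        // while this response is still being wound down -> read timeout.
        offerChannelToPool(channel);
        markAsDone(future, channel);
    }

    void finishUpdateProposed(Object future, Object channel) {
        // Order suggested by the issue title: complete the future first,
        // then offer the channel back to the pool.
        markAsDone(future, channel);
        offerChannelToPool(channel);
    }

    private void offerChannelToPool(Object channel) { /* stub */ }

    private void markAsDone(Object future, Object channel) { /* stub */ }
}
```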