Keep-alive connection not kept on 301 w/ host change #1268


Closed
MikeN123 opened this issue Oct 8, 2016 · 3 comments


MikeN123 commented Oct 8, 2016

Redirect30xInterceptor tries to reuse a channel on a redirect if keep-alive is enabled, and, if reuse is not possible due to a host change, tries to offer the channel back to the pool, see here:

channelManager.drainChannelAndOffer(channel, future, initialConnectionKeepAlive, initialPartitionKey);

Unfortunately, that does not work, as NettyRequestSender.sendNextRequest() immediately clears the Channel attribute, which was a fix for another issue (#1059).

Not sure about the correct fix, still trying to figure that out.
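For context, the reuse-vs-offer decision hinges on whether the redirect target shares the same base (scheme, host, port) as the original request. A minimal sketch of that check, with illustrative names rather than the actual Redirect30xInterceptor code:

```java
import java.net.URI;

public class RedirectBaseCheck {

    // Effective port, falling back to the scheme default when none is given.
    static int port(URI u) {
        if (u.getPort() != -1) return u.getPort();
        return "https".equalsIgnoreCase(u.getScheme()) ? 443 : 80;
    }

    // True when the redirect stays on the same scheme/host/port, i.e. the
    // existing channel could be reused for the next request.
    static boolean sameBase(URI from, URI to) {
        return from.getScheme().equalsIgnoreCase(to.getScheme())
            && from.getHost().equalsIgnoreCase(to.getHost())
            && port(from) == port(to);
    }

    public static void main(String[] args) {
        URI original = URI.create("http://origin.example/path");
        URI redirect = URI.create("http://other.example/path");
        // Host changed, so the channel cannot be reused for the redirect;
        // instead it should be drained and offered back to the pool.
        System.out.println(sameBase(original, redirect)); // prints "false"
    }
}
```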

MikeN123 pushed a commit to MikeN123/async-http-client that referenced this issue Oct 8, 2016
@slandelle slandelle added Defect and removed Defect labels Oct 12, 2016
@slandelle (Contributor) commented:

Actually, your understanding is not correct. What's nulled out is the channel's attribute; the future's channel is still there.
Please provide a reproducer if you experience an issue.

@MikeN123 (Author) commented:

Uhm, I don't think you get the issue. The problem is not the future's channel, the problem is the channel's attribute being nulled out.

I don't have a full reproduction available, but it is easily seen from the source. Working from Redirect30xInterceptor:

  • The connection is redirected to a different host, hence sameBase = false
  • channelManager.drainChannelAndOffer() is called, to drain the channel and send it to the pool (as expected, as the channel can still be used and is kept alive)
  • drainChannelAndOffer() creates a new callback, using newDrainCallback(). This callback's task is simply to put the channel back in the pool when the remaining data has been received (i.e., when LastContent is received).
  • This callback is set as the channel's attribute:
    Channels.setAttribute(channel, newDrainCallback(future, channel, keepAlive, partitionKey));
  • The Redirect30xInterceptor then continues, calls NettyRequestSender.sendNextRequest(), which immediately clears the attribute. Hence the drain callback is gone.
  • Later, when the LastContent is received, we end up in AsyncHttpClientHandler.channelRead(), which reads the attribute, finds null, logs a message about an orphan channel, and closes the channel:
    logger.debug("Orphan channel {} with attribute {} received message {}, closing", channel, attribute, msg);

Hope that helps.
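The sequence above can be modeled in a few lines of plain Java. This is a hedged sketch with illustrative names, not the real AsyncHttpClient/Netty API: a single slot stands in for the channel attribute, and sendNextRequest() clobbers the drain callback before LastContent arrives.

```java
import java.util.HashMap;
import java.util.Map;

public class DrainCallbackRace {

    // Stands in for the Netty channel attribute (illustrative, not the real API).
    static final Map<String, Object> channelAttribute = new HashMap<>();

    // Redirect30xInterceptor side: install a callback that will return the
    // channel to the pool once the remaining body has been drained.
    static void drainChannelAndOffer() {
        channelAttribute.put("attr",
            (Runnable) () -> System.out.println("channel offered back to pool"));
    }

    // NettyRequestSender.sendNextRequest() side (the #1059 fix): the attribute
    // is reset unconditionally, losing the drain callback installed above.
    static void sendNextRequest() {
        channelAttribute.put("attr", null);
    }

    // AsyncHttpClientHandler.channelRead() side: on LastContent, run the drain
    // callback if present; otherwise treat the channel as orphaned and close it.
    static String onLastContent() {
        Object attr = channelAttribute.get("attr");
        if (attr instanceof Runnable) {
            ((Runnable) attr).run();
            return "offered";
        }
        return "orphan-closed";
    }

    public static void main(String[] args) {
        drainChannelAndOffer();
        sendNextRequest();
        System.out.println(onLastContent()); // prints "orphan-closed": the callback was lost
    }
}
```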

@slandelle slandelle added this to the 2.0.19 milestone Oct 17, 2016
@slandelle (Contributor) commented:

Sorry, I didn't get your original "that does not work" :)
I think the buggy code is no longer necessary, so I removed it.

Thanks for reporting!

@slandelle slandelle self-assigned this Oct 17, 2016
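According to the comment above, the actual fix simply deleted the no-longer-necessary attribute reset in sendNextRequest(). A defensive variant of the same idea (a hedged sketch with illustrative names, not the committed code) would be to clear the slot only when it does not hold a pending drain callback:

```java
import java.util.concurrent.atomic.AtomicReference;

public class GuardedReset {

    // Marker type for the drain callback (illustrative; a functional
    // interface so a lambda can be used).
    interface DrainCallback extends Runnable {}

    // Stands in for the channel attribute slot.
    static final AtomicReference<Object> attribute = new AtomicReference<>();

    // Only clear stale state; never clobber a pending drain callback.
    static void sendNextRequest() {
        if (!(attribute.get() instanceof DrainCallback)) {
            attribute.set(null);
        }
    }

    // On LastContent: run the drain callback if it survived, else orphan-close.
    static String onLastContent() {
        Object attr = attribute.get();
        if (attr instanceof DrainCallback) {
            ((DrainCallback) attr).run();
            return "offered";
        }
        return "orphan-closed";
    }

    public static void main(String[] args) {
        attribute.set((DrainCallback) () -> System.out.println("channel returned to pool"));
        sendNextRequest();                   // the drain callback survives the reset
        System.out.println(onLastContent()); // prints "offered"
    }
}
```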