Description
First, we don't want the malformed-character replacement that happens with `new String(byte[], Charset)` / `StringCoding`, because it's a great way to fail silently and never realize there are encoding issues.
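For reference, a minimal sketch (plain JDK, no Netty; class name is made up) contrasting the silent replacement done by `new String(byte[], Charset)` with a `CharsetDecoder` configured to report malformed input instead:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictDecodingExample {
    public static void main(String[] args) throws CharacterCodingException {
        byte[] malformed = {(byte) 0xC3}; // truncated UTF-8 sequence

        // new String(...) silently replaces the bad input with U+FFFD
        System.out.println(new String(malformed, StandardCharsets.UTF_8));

        // A CharsetDecoder configured to REPORT throws instead of hiding the problem
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        decoder.decode(ByteBuffer.wrap(malformed)); // throws MalformedInputException
    }
}
```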
Then, given the above, Netty's `ByteBuf.toString(Charset)` generates lots of garbage (see the sketch after this list):
- It can only decode a single `ByteBuf`, so if we have several of them (chunks), we have to wrap them into a `CompositeByteBuf`.
- Because it wants to enable replacement, when dealing with multiple underlying NIO `ByteBuffer`s, it has to merge/copy them into a single `ByteBuffer` (which is/can be pooled).
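A minimal sketch of that pattern, assuming Netty 4's `Unpooled` helpers (the chunk contents are made up):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.charset.StandardCharsets;

public class CompositeToStringExample {
    public static void main(String[] args) {
        // Hypothetical chunks of a larger UTF-8 payload arriving separately
        ByteBuf chunk1 = Unpooled.copiedBuffer("héllo ", StandardCharsets.UTF_8);
        ByteBuf chunk2 = Unpooled.copiedBuffer("wörld", StandardCharsets.UTF_8);

        // toString(Charset) only works on a single ByteBuf, so the chunks first
        // have to be wrapped into a composite buffer...
        ByteBuf composite = Unpooled.wrappedBuffer(chunk1, chunk2);

        // ...and, as described above, the decode path ends up merging the multiple
        // underlying NIO ByteBuffers into a single buffer before decoding.
        String decoded = composite.toString(StandardCharsets.UTF_8);
        System.out.println(decoded);

        composite.release(); // releases the wrapped chunks as well
    }
}
```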
In the case of US-ASCII and UTF-8, we can avoid this copy: use the `CoderResult` status to detect a char split across several buffers, then analyze the first rejected byte to determine how many bytes we need to complete the char.
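A minimal sketch of that idea with the plain JDK `CharsetDecoder` (the class and helper names are hypothetical, not Netty API): when `decode` returns `UNDERFLOW` while bytes remain, the chunk ends mid-character, and the leading byte tells us how many bytes to borrow from the next chunk.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.MalformedInputException;
import java.nio.charset.StandardCharsets;

public final class Utf8SplitDecoder {

    /** How many bytes a UTF-8 sequence occupies, judging from its leading byte. */
    static int utf8SequenceLength(byte leading) {
        if ((leading & 0x80) == 0x00) return 1; // 0xxxxxxx
        if ((leading & 0xE0) == 0xC0) return 2; // 110xxxxx
        if ((leading & 0xF0) == 0xE0) return 3; // 1110xxxx
        if ((leading & 0xF8) == 0xF0) return 4; // 11110xxx
        throw new IllegalArgumentException("not a leading UTF-8 byte: " + leading);
    }

    public static String decode(ByteBuffer... chunks) throws CharacterCodingException {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);

        int totalBytes = 0;
        for (ByteBuffer chunk : chunks) {
            totalBytes += chunk.remaining();
        }
        // UTF-8 never yields more chars than it consumes bytes, so one pass is enough.
        CharBuffer chars = CharBuffer.allocate(totalBytes);
        ByteBuffer pending = ByteBuffer.allocate(4); // at most one split character

        for (int i = 0; i < chunks.length; i++) {
            ByteBuffer chunk = chunks[i];
            boolean last = (i == chunks.length - 1);

            // If the previous chunk ended mid-character, borrow only the missing bytes.
            if (pending.position() > 0) {
                int needed = utf8SequenceLength(pending.get(0)) - pending.position();
                while (needed > 0 && chunk.hasRemaining()) {
                    pending.put(chunk.get());
                    needed--;
                }
                if (needed == 0) { // the split character is now complete
                    pending.flip();
                    check(decoder.decode(pending, chars, false));
                    pending.clear();
                }
            }

            CoderResult result = decoder.decode(chunk, chars, last);
            check(result);
            // UNDERFLOW with bytes left over means the chunk ends in the middle of a
            // character: stash those few bytes instead of copying whole buffers.
            if (result.isUnderflow() && chunk.hasRemaining()) {
                pending.put(chunk);
            }
        }
        if (pending.position() > 0) {
            throw new MalformedInputException(pending.position()); // truncated input
        }
        check(decoder.flush(chars));
        chars.flip();
        return chars.toString();
    }

    private static void check(CoderResult result) throws CharacterCodingException {
        if (result.isError()) {
            result.throwException();
        }
    }
}
```

With this approach only the few bytes of a split character ever cross a chunk boundary, so the chunks themselves never need to be consolidated into one buffer.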