Thanks a lot, this was quite enlightening — I finally get what "buffering in the interface" really means. A follow-up post would be nice!
I'm also curious what preserve in rebase is used for in practice. Is it to prevent drain from writing buffered bytes too eagerly? But if drain can then write more data in a single syscall, what benefit does that give you? If the end of the buffer could still change, you wouldn't want to write it yet — but then drain would need a similar parameter so it doesn't touch the bytes at the end (and therefore writes none of the data). Yet the bytes in the buffer would still move toward the front, so this makes no sense to me.
It's to keep buffered bytes buffered until the producer is ready for them to be dropped. This is relevant for example for decompression streams which keep a "window" of decompressed bytes around to refer to when matching. Of course, the implementation could use an internal ring buffer, but it's more efficient to use the output buffer directly instead.
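To make that concrete, here's a toy sketch in Python of the idea (the names and structure are hypothetical, not the actual interface): rebase flushes everything before the preserved tail, then slides that tail to the front of the buffer, so a decompressor's match window stays addressable in the output buffer itself.

```python
class BufferedWriter:
    """Toy model of a writer whose rebase(preserve) keeps the last
    `preserve` buffered bytes buffered while draining the rest."""

    def __init__(self, sink, capacity):
        self.sink = sink              # callable that receives flushed bytes
        self.buf = bytearray(capacity)
        self.end = 0                  # count of valid bytes in buf

    def write(self, data):
        # Sketch only: assumes data fits in the remaining capacity.
        self.buf[self.end:self.end + len(data)] = data
        self.end += len(data)

    def rebase(self, preserve):
        # Drain everything except the last `preserve` bytes...
        flush_len = self.end - preserve
        if flush_len > 0:
            self.sink(bytes(self.buf[:flush_len]))
            # ...then move the preserved window to the front, freeing
            # space while keeping it available for back-references.
            self.buf[:preserve] = self.buf[flush_len:self.end]
            self.end = preserve
```

A decompressor with, say, a 32 KiB match window would call rebase(32768) when it needs room: already-final bytes get written out, but the window survives in place, with no internal ring buffer needed.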