As tested on Linux:
- An SCM_RIGHTS ancillary message is "attached" to the range of data bytes sent in the same sendmsg() call.
- However, as always, recvmsg() calls on the receiving end don't necessarily map 1:1 to sendmsg() calls. Messages can be coalesced or split.
- The recvmsg() call that receives the first byte of the ancillary message's byte range also receives the ancillary message itself.
- To prevent multiple ancillary messages being delivered at once, the recvmsg() call that receives the ancillary data is artificially limited to read no further than the last byte in the range, even if more data is available in the buffer after that byte, and even if that later data is not actually associated with any ancillary message.
- However, if the recvmsg() that received the first byte does not provide enough buffer space to read the whole message, the next recvmsg() is allowed to read past the end of the message's range and even into a new ancillary message's range, returning the ancillary data for the later message.
- Regular read()s show the same pattern of potentially ending early, even though they cannot receive ancillary messages at all. This can mess things up when using edge-triggered I/O if you assume that a short read() indicates no more data is available.
- A single SCM_RIGHTS message may contain up to SCM_MAX_FD (253) file descriptors.
- If the recvmsg() does not provide enough ancillary buffer space to fit the whole descriptor array, it will be truncated to fit, with the remaining descriptors being discarded and closed. You cannot split the list over multiple calls.
Agreed. But I still want to test it. After all, bugs happen. For instance:
I really really doubt we have this bug anyway.
Yes.
It's not really a kernel-managed thread. It's more like a kernel-managed thread-pool.
I do believe it's better to hang a kernel thread than to hang a userspace thread. It means the user program can be made single-threaded. How many threads do we need? If you're spawning threads purely to exploit I/O concurrency, your application has no better knowledge than the kernel of how many threads it should be spawning.
Also, kernel AIO only worked in very specific cases, for certain combinations of kernel drivers and filesystems. A state machine would be another valid approach to implementing the I/O operation within the kernel. The fact that it currently uses threads is just an implementation detail.
A thread blocking on a single I/O operation is not the same as a system under full load that can't accept new I/O requests. The correct error condition should be propagated, and io_uring has that (submission queue full).