Date: Tue, 09 Jan 2001 18:46:50 +0100
From: Manfred Spraul <>
Subject: Re: [PLEASE-TESTME] Zerocopy networking patch, 2.4.0-1
sct wrote:
> We've already got measurements showing how insane this is. Raw IO
> requests, plus internal pagebuf contiguous requests from XFS, have to
> get broken down into page-sized chunks by the current ll_rw_block()
> API, only to get reassembled by the make_request code. It's
> *enormous* overhead, and the kiobuf-based disk IO code demonstrates
> this clearly.
Stephen, I see one big difference between ll_rw_block() and the proposed tcp_sendpage(): you must allocate and initialize a complete buffer head for each page you want to read, and then you pass the whole array of buffer heads to ll_rw_block() in a single function call. I'm certain the overhead is the allocation/initialization/freeing of the buffer heads, not the function call itself.
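To make that concrete, here is a rough sketch of what a 2.4 caller of ll_rw_block() has to do per page before the single submission call. This is not the actual kiobuf or raw-io code; alloc_one_bh() and raw_read_end_io() are made-up helpers, and error unwinding is left out.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/string.h>

#define MAX_SKETCH_PAGES 64

/* hypothetical helper: one buffer head per page, from kmalloc */
static struct buffer_head *alloc_one_bh(void)
{
	return kmalloc(sizeof(struct buffer_head), GFP_KERNEL);
}

/* hypothetical completion handler */
static void raw_read_end_io(struct buffer_head *bh, int uptodate)
{
	if (uptodate)
		set_bit(BH_Uptodate, &bh->b_state);
	clear_bit(BH_Lock, &bh->b_state);
	wake_up(&bh->b_wait);
}

static int submit_raw_read(kdev_t dev, unsigned long first_block,
			   struct page **pages, int nr_pages, int blocksize)
{
	struct buffer_head *bhs[MAX_SKETCH_PAGES];
	int i;

	if (nr_pages > MAX_SKETCH_PAGES)
		return -EINVAL;

	for (i = 0; i < nr_pages; i++) {
		struct buffer_head *bh = alloc_one_bh();
		if (!bh)
			return -ENOMEM;	/* cleanup of earlier bhs omitted */

		/* this per-page setup (and the matching free later)
		 * is where the overhead goes */
		memset(bh, 0, sizeof(*bh));
		bh->b_dev     = dev;
		bh->b_blocknr = first_block + i;
		bh->b_size    = blocksize;
		bh->b_page    = pages[i];
		bh->b_data    = page_address(pages[i]);
		bh->b_end_io  = raw_read_end_io;
		init_waitqueue_head(&bh->b_wait);
		set_bit(BH_Lock, &bh->b_state);
		set_bit(BH_Mapped, &bh->b_state);

		bhs[i] = bh;
	}

	/* one function call submits the whole array */
	ll_rw_block(READ, nr_pages, bhs);
	return 0;
}

Every page costs an allocation, a dozen field writes and a free afterwards, no matter how the submission call itself is batched.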
AFAICS the proposed tcp_sendpage() interface is the other way around: you need one function call per page, but no memory allocation or setup on the caller's side. The memory is allocated internally by the tcp_sendpage() implementation, and it merges requests where possible, so for a 9000-byte jumbo packet you'd need three calls to tcp_sendpage() with MSG_MORE set, but only one skb is allocated and set up.
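For comparison, the caller side of the proposed interface would look roughly like this. The prototype tcp_sendpage(sock, page, offset, size, flags) and the MSG_MORE semantics are my reading of the zerocopy patch and may not match the final code exactly; partial sends are not handled in this sketch.

#include <linux/socket.h>
#include <linux/net.h>
#include <linux/mm.h>

/* prototype as I read it from the patch -- may not be exact */
extern ssize_t tcp_sendpage(struct socket *sock, struct page *page,
			    int offset, size_t size, int flags);

static int send_jumbo_payload(struct socket *sock, struct page **pages,
			      size_t total)	/* e.g. total == 9000 */
{
	size_t done = 0;
	int i = 0;

	while (done < total) {
		size_t chunk = total - done;
		int flags;
		ssize_t ret;

		if (chunk > PAGE_SIZE)
			chunk = PAGE_SIZE;

		/* MSG_MORE on every chunk but the last tells TCP that
		 * more data follows, so the implementation can keep
		 * appending the pages to a single skb */
		flags = (done + chunk < total) ? MSG_MORE : 0;

		ret = tcp_sendpage(sock, pages[i], 0, chunk, flags);
		if (ret < 0)
			return ret;

		done += ret;
		i++;
	}
	return 0;
}

Per page there is a function call but no allocation or setup in the caller; the skb handling stays inside tcp_sendpage(), which can keep filling one skb as long as MSG_MORE is set.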
Ingo, is that correct?
--
    Manfred