From: Andreas Dilger <>
Date: Wed, 17 Oct 2001 09:47:03 -0600
Subject: Re: [Bench] New benchmark showing fileserver problem in 2.4.12
On Oct 17, 2001 23:06 +1000, Robert Cohen wrote:
> Factor 1: the performance problems only occur when you are rewriting an
> existing file in place. That is, writing to an existing file which is
> opened without O_TRUNC. Equivalently, if you have written a file and
> then seek'ed back to the beginning and started writing again.
>
> Evidence: in the report I posted yesterday, the test I was using
> involved 5 clients rewriting 30 Meg files on a 128 Meg machine. The
> symptom was that after about 10 seconds, the throughput as shown by
> vmstat "bo" drops sharply and we start getting reads occurring as shown
> by the "bi" figure.
Just a guess - if you are getting reads that are about the same size as the
writes, it would indicate that the code is doing "read-modify-write" on the
existing file data rather than just "write". That happens when the writes to
the files are not full-sized, block-aligned writes.
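Purely as an illustration of that guess (none of this is from your benchmark),
the sketch below rewrites an already-written file in place, either in full 4k
blocks or in slightly-short blocks; the file name "testfile", the 4k block
size and the 30 Meg size are assumptions. With a cold cache, only the
partial-block run should show "bi" traffic in vmstat:

#define _XOPEN_SOURCE 600	/* for pwrite() */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ	4096				/* assumed fs block size */
#define NBLOCKS	(30 * 1024 * 1024 / BLKSZ)	/* ~30 Meg, as in the test */

static char buf[BLKSZ];

int main(int argc, char *argv[])
{
	/* run with any argument to do partial (short) rewrites instead */
	size_t len = argc > 1 ? BLKSZ - 100 : BLKSZ;
	int fd, i;

	memset(buf, 'x', sizeof(buf));

	/* open an existing, already-written file without O_TRUNC */
	fd = open("testfile", O_WRONLY);
	if (fd < 0)
		return 1;

	/* Full, aligned blocks can be written without reading anything.
	 * Partial blocks may force the kernel to read the rest of each
	 * block from disk first - that shows up as "bi" in vmstat. */
	for (i = 0; i < NBLOCKS; i++)
		pwrite(fd, buf, len, (off_t)i * BLKSZ);

	close(fd);
	return 0;
}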
As to why this is happening only over the network - it may be that you are
unable to send an even multiple of the blocksize over the network (MTU
limits), and this is causing fragmented writes. Try using a smaller block
size, like 4k or so, to see if it makes a difference.
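If the server buffers data off the socket and flushes it to the file in
chunks, one way to keep the rewrites aligned is to round the flush size down
to a multiple of the file's block size. This is only a sketch of that idea -
aligned_chunk() and "testfile" are invented names, and st_blksize is just a
convenient way to ask the filesystem for its preferred block size:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* round a proposed write size down to a multiple of the file's block size */
static size_t aligned_chunk(int fd, size_t want)
{
	struct stat st;
	size_t blk;

	if (fstat(fd, &st) < 0 || st.st_blksize <= 0)
		return want;		/* fall back to the caller's size */

	blk = (size_t)st.st_blksize;	/* e.g. 4096 */
	if (want < blk)
		return blk;
	return want - (want % blk);	/* largest aligned size <= want */
}

int main(void)
{
	int fd = open("testfile", O_WRONLY);	/* placeholder data file */

	if (fd < 0)
		return 1;
	printf("flush in %zu-byte chunks\n", aligned_chunk(fd, 8192));
	close(fd);
	return 0;
}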
Another possibility is that with 8k chunks you need order-1 (two contiguous
pages) allocations to receive the data, and this is causing a lot of
searching for buffers to free.
Cheers, Andreas
--
Andreas Dilger  \ "If a man ate a pound of pasta and a pound of antipasto,
                 \  would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/               -- Dogbert