Subject: Re: [PATCH] io stalls


Chris Mason wrote:

>On Wed, 2003-06-11 at 23:20, Nick Piggin wrote:
>
>
>>I think the cpu utilization gain of waking a number of tasks
>>at once would be outweighed by the advantage of waking 1 task
>>and not putting it to sleep again for a number of requests.
>>You obviously are not claiming concurrency improvements, as
>>your method would also increase contention on the io lock
>>(or the queue lock in 2.5).
>>
>
>I've been trying variations on this for a few days; none have been
>thrilling, but the end result is better dbench and iozone throughput
>overall. For the 20 writer iozone test, rc7 got an average throughput
>of 3MB/s, and yesterday's latency patch got 500k/s or so. Ouch.
>
>This gets us up to 1.2MB/s. I'm keeping yesterday's
>get_request_wait_wake, which wakes up a waiter instead of unplugging.
>
>The basic idea here is that after a process is woken up and grabs a
>request, it becomes the batch owner. Batch owners get to ignore the
>q->full flag for either 1/5 second or 32 requests, whichever comes
>first. The timer part is an attempt at preventing memory-pressure
>writers (who go 1 req at a time) from holding onto batch ownership
>for too long. Latency stats after dbench 50:
>

Yeah, I get ~50% more throughput and up to 20% better CPU
efficiency on tiobench 256 for sequential and random
writers by doing something similar.
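
For anyone following along, here is a rough userspace model of the
batch-owner idea Chris describes above. It is only a sketch, not the
patch itself: the names (become_batch_owner, may_get_request,
BATCH_REQUESTS, BATCH_SECS) are made up for illustration, and the real
code would of course run under the queue lock.

#include <stdio.h>
#include <stdbool.h>
#include <time.h>

#define BATCH_REQUESTS 32    /* requests a batch owner may take */
#define BATCH_SECS     0.2   /* ~1/5 second ownership window */

struct request_queue {
	bool full;           /* q->full: queue is over its limit */
	void *batch_owner;   /* task currently owning the batch */
	int batch_left;      /* requests left in this batch */
	double batch_start;  /* when ownership was granted */
};

static double now_secs(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Called when a woken task grabs its first request. */
static void become_batch_owner(struct request_queue *q, void *task)
{
	q->batch_owner = task;
	q->batch_left = BATCH_REQUESTS;
	q->batch_start = now_secs();
}

/*
 * May this task allocate a request right now?  The batch owner
 * ignores q->full until its 32 requests or 1/5 second run out,
 * whichever comes first; everyone else must respect q->full.
 */
static bool may_get_request(struct request_queue *q, void *task)
{
	if (q->batch_owner == task) {
		if (q->batch_left > 0 &&
		    now_secs() - q->batch_start < BATCH_SECS) {
			q->batch_left--;
			return true;
		}
		q->batch_owner = NULL;  /* batch used up: back in line */
	}
	return !q->full;
}

int main(void)
{
	struct request_queue q = { .full = true };
	int me;  /* stand-in for a task_struct */

	become_batch_owner(&q, &me);
	for (int i = 0; i < 40; i++)
		printf("request %2d: %s\n", i,
		       may_get_request(&q, &me) ? "granted" : "must wait");
	return 0;
}

The point of the time bound shows up in the model too: a writer that
trickles in one request at a time would otherwise sit on ownership
indefinitely while everyone else keeps respecting q->full.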


