Subject: Re: [2.4.17/18pre] VM and swap - it's really unusable
Hi,

yodaiken@fsmlabs.com wrote:

> > don't you? As for the measured benefit, there have been a steady stream of
> > positive reports on lkml.
>
> I have not seen a single well structured benchmark that shows a significant
> difference. I've seen lots of benchmarks with odd mixes of different patches
> showing something unknown. How about a simple clear dbench?

So let's explore the benchmark argument a bit. As usual, some things
need to be clarified first.
It would be useful to know what we actually want to measure in the
benchmark. The main goal is to reduce scheduling latencies, so it would
make sense to test this. Results can be found at
http://www.gardena.net/benno/linux/audio/ and
http://kpreempt.sourceforge.net/. I haven't found a direct comparison
of the preempt+lockbreak and low-latency (ll) patches, but I wouldn't
expect major differences, and it isn't really important anyway. What
matters is that both approaches improve the scheduling latency
considerably.
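To make this concrete, here is a minimal sketch of how such a latency
test works (my own illustration, not Benno's actual latencytest code):
a SCHED_FIFO task sleeps for a fixed period and records how much later
than requested it wakes up. Run it next to a kernel compile or a big
grep and compare the worst case on plain, preempt and ll kernels.

	#include <stdio.h>
	#include <sched.h>
	#include <time.h>
	#include <sys/time.h>

	static long long usecs(void)
	{
		struct timeval tv;

		gettimeofday(&tv, NULL);
		return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
	}

	int main(void)
	{
		struct sched_param sp = { .sched_priority = 90 };
		struct timespec period = { 0, 2000000 };  /* 2 ms period */
		long long t0, delay, worst = 0;
		int i;

		/* real-time priority, so any wakeup delay comes from time
		   spent in the kernel, not from other runnable processes */
		if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
			perror("sched_setscheduler (needs root)");

		for (i = 0; i < 5000; i++) {
			t0 = usecs();
			nanosleep(&period, NULL);
			delay = usecs() - t0 - 2000;  /* overshoot in usec */
			if (delay > worst)
				worst = delay;
		}
		printf("worst-case wakeup delay: %lld usec\n", worst);
		return 0;
	}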
What this thread mostly cites, however, are i/o benchmarks. Improving
i/o performance isn't really the main goal of the patches, so the only
requirement is that i/o performance isn't harmed. So far I haven't seen
any results indicating that it is, so case closed.
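(Checking that is easy enough, by the way: run the same dbench load,
e.g. "dbench 48" with an arbitrary but identical client count, on an
unpatched and on a patched kernel and compare the throughput it
reports.)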
Victor, could you please explain what you are trying to prove with the
benchmark argument? I see you arguing a lot, but I don't really see
what for.
Finally, my theory of why preempt performs better. If we assume that
too much time is spent in kernel space without rescheduling, we only
need to look at where the ll patch inserts its preemption points to
find the likely latency problems. My two favourite candidates are
copy_(to|from)_user and the page table scan; both are hot paths under
loads like grep or a kernel compile. I can see two possible effects
here. First, lots of small i/o can be issued faster instead of copying
data around. Second, we spend too much time scanning the page tables
looking for freeable pages while enough i/o is already scheduled; with
a preemption point there, something useful is done instead and we just
wait for the i/o to finish. Anyway, anyone who really wants to know
could modify the profiling code and test where in the kernel we
schedule so often; sketches of both ideas follow below.
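For the first point, this is roughly the shape of an ll-style
preemption point as it would sit inside a long kernel loop such as the
copy in copy_(to|from)_user (a sketch from memory for 2.4, not a
verbatim quote of the patch):

	/* yield at a known-safe point if a higher-priority
	   task is waiting for the CPU (2.4-style check) */
	static inline void conditional_schedule(void)
	{
		if (current->need_resched)
			schedule();
	}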
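And for the profiling idea, the modification I have in mind would look
something like this (hypothetical sketch, all names made up): hash the
caller of schedule() into a table, then match the recorded addresses
against System.map to see which kernel paths reschedule most often.

	#define SCHED_PROF_SLOTS 1024

	static struct {
		void *caller;
		unsigned long count;
	} sched_prof[SCHED_PROF_SLOTS];

	/* call as sched_prof_hit(__builtin_return_address(0))
	   at the top of schedule() */
	static void sched_prof_hit(void *caller)
	{
		unsigned int i = ((unsigned long)caller >> 4)
					% SCHED_PROF_SLOTS;

		/* crude open addressing; a completely full table
		   would spin, which this sketch doesn't handle */
		while (sched_prof[i].caller && sched_prof[i].caller != caller)
			i = (i + 1) % SCHED_PROF_SLOTS;
		sched_prof[i].caller = caller;
		sched_prof[i].count++;
	}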

bye, Roman
