Subject: Re: cache limit
Mike Fedyk wrote:
>
> That was because they wanted the non-streaming files to be left in the cache.
>
>> I will try to produce some benchmarks tomorrow with different
>> 'mem=%dMB'. I'm afraid they will confirm that it makes a difference.
>> But in advance: maintenance of page tables for 1GB and for 128MB of
>> RAM is going to make a difference.
>
> I'm sorry to say, but you *will* get lower performance if you lower the mem=
> value below your working set. This will also lower the total amount of
> memory available for your applications and force them to swap and to
> balance cache against application memory.
>
> That's not what you are looking to benchmark.
>

Okay. I'm completely puzzled.
I will quote here only one test, and I really do not understand what
is going on.

Three boots with the same parameters, changing only mem=nMB, n =
{512,256,128} (I have 512MB of RAM).

hdparm tests:
[root@hera ifilipau]# hdparm -t /dev/hda
/dev/hda:
Timing buffered disk reads: 64 MB in 1.56 seconds = 41.03 MB/sec
[root@hera ifilipau]# hdparm -T /dev/hda
/dev/hda:
Timing buffer-cache reads: 128 MB in 0.44 seconds =290.91 MB/sec
[root@hera ifilipau]#

Before the tests I ran 'swapoff -a; sync'.
The kernel is RedHat's 2.4.20-20.9.
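
Each run looked roughly like this (a sketch, not a verbatim
transcript; the boot-prompt syntax depends on the boot loader):

  # at the LILO/GRUB prompt, append the limit to the kernel command
  # line, e.g.:  linux mem=128M
  # then, after logging in and before any measurement:
  swapoff -a    # take swap out of the picture
  sync          # flush anything still dirty from booting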

Here is what has really puzzled me.
The operation: "cat *.bz2 >big_file", where *.bz2 matches just two
bzipped kernel tarballs. Total size: 29MB + 32MB (2.4.22 + 2.6.0-test1).

To be absolutely fair in this unfair benchmark I have run the test only
once. Times are in seconds, as shown by bash's time (a rough sketch of
the commands follows the numbers).

         cat (s)   sync (s)
512MB:   1.565     0.007
256MB:   1.649     0.008
128MB:   2.184     0.007
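
Each row came from something like this (reconstructed from memory;
the directory name is made up):

  cd ~/kernels                 # hypothetical directory holding the two tarballs
  time cat *.bz2 > big_file    # the 'cat' column
  time sync                    # the 'sync' column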

Kill me, shoot me, but how can that be?
The resulting file fits in RAM.
It is not hard to guess that the source files, which nobody cares
about any more, are still hanging around in RAM...

That's not right: as long as the resulting file fits in memory, and it
fits in memory in all cases (512MB, 256MB, 128MB), this operation should
take the _same_ time. (The output is only 29MB + 32MB = 61MB, and even
before the 128MB test vmstat was saying that I had 70+MB of free,
untouched memory.)

So the summary is quite simple: the kernel loses a *terrible* amount of
time balancing read()s against write()s. Way _too_ _much_ time.

I will try to download RedHat's AS kernel and play with its page-cache
limit. After all, if RH has included that feature in their kernels, it
must really make sense ;-)))
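
If I remember correctly (an assumption I still have to check against
the actual AS kernel), the limit is exposed as a sysctl with three
percentages of RAM, something like:

  # assumption, not verified: min / borrow / max percent of RAM
  # that the page cache is allowed to use
  echo "1 15 30" > /proc/sys/vm/pagecache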

--
Ihar 'Philips' Filipau / with best regards from Saarbruecken.
- - - - - - - - - - - - - - - - - - - -
* Please avoid sending me Word/PowerPoint/Excel attachments.
* See http://www.fsf.org/philosophy/no-word-attachments.html
- - - - - - - - - - - - - - - - - - - -
There should be some SCO's source code in Linux -
my servers sometimes are crashing. -- People

