Subject: Re: [resent PATCH] Re: very slow parallel read performance


On Fri, 24 Aug 2001, Marc wrote:

> On Fri, Aug 24, 2001 at 09:35:08AM +0200, Roger Larsson <roger.larsson@skelleftea.mail.telia.com> wrote:
> > And I found out that read ahead was too short for modern disks.
>
> That could well be. The problem in my case is that, with up to 1000
> clients, I fear there might not be enough memory for effective
> read-ahead (and I think read-ahead would even be counter-productive).

That depends on the amount of memory.
If asynchronous read-ahead is working for 1000 sequential IO streams on
1000 different files with MAX_READAHEAD=31, each stream keeps a current
window plus an ahead window of 32 pages each, so your system needs about:

(32 + 32) pages * 4K per page * 1000 streams = 256 MB

just for read-ahead data.
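
For concreteness, a tiny userspace sketch of that arithmetic (not kernel
code; the 4K page size is an assumption for i386, and the two-window
model is only an approximation of what the 2.4 read-ahead keeps around):

#include <stdio.h>

int main(void)
{
    const long page_size = 4096;  /* assumption: 4K pages (i386) */
    const long max_ra    = 31;    /* MAX_READAHEAD */
    const long streams   = 1000;  /* concurrent sequential readers */

    /* current window + ahead window, each up to MAX_READAHEAD+1 pages */
    long per_stream = 2 * (max_ra + 1) * page_size;  /* 256 KB */
    long total_kb   = per_stream / 1024 * streams;   /* 256000 KB */

    printf("per stream: %ld KB, total: %ld KB (~256 MB)\n",
           per_stream / 1024, total_kb);
    return 0;
}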

> > line is the
> > -#define MAX_READAHEAD 31
> > +#define MAX_READAHEAD 511
>
> I plan to try this; however, read-ahead should IMHO be zero anyway,
> since there simply is not enough memory, and the kernel should not do
> much read-ahead when many other requests are outstanding.

I do not recommend trying this value, even though the read-ahead code
may be smart enough to detect thrashing and fall back to a more
reasonable average.

> The real problem, however, is that read performance sinks so much when
> many readers run in parallel.
>
> What I need is many parallel reads, because this helps the elevator
> scan the disk once instead of jumping around wildly.
>
> (I have 512 MB memory, around 64k socket send buffer, and currently
> use an additional 96k buffer for reading from disk, so effectively I
> do my own read-ahead anyway. I just need to optimize the head
> movements.)

The asynchronous read-ahead code tries to eliminate IO latency by
starting the next IO in advance. This is probably not useful for the
situation you describe. Optimizing the head movements is indeed the
smartest thing to try given your IO pattern.
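
If I understand your approach correctly, it amounts to something like
the following userspace sketch (untested; struct client, service_round
and the fixed 96K chunk are just illustrative names, not your actual
code): one big sequential read per client per round, issued back to
back, so the elevator gets a batch of large requests to sort by disk
position instead of many interleaved small seeks.

#include <sys/types.h>
#include <unistd.h>

#define CHUNK (96 * 1024)     /* per-client file buffer, as above */

struct client {
    int   file_fd;            /* file being streamed to this client */
    int   sock_fd;            /* client socket */
    off_t offset;             /* next file offset to read */
};

/*
 * One service round: refill each client's buffer with a single large
 * sequential read, then push it to the socket.
 */
static void service_round(struct client *c, int nclients, char *buf)
{
    int i;

    for (i = 0; i < nclients; i++) {
        ssize_t n = pread(c[i].file_fd, buf, CHUNK, c[i].offset);
        if (n <= 0)
            continue;                 /* EOF or error: skip this round */
        c[i].offset += n;
        write(c[i].sock_fd, buf, n);  /* blocking send, for brevity */
    }
}

With blocking writes this of course serializes on slow clients; a real
server would use non-blocking sockets, but that is independent of the
disk scheduling question.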

In my opinion, your system is probably thrashing a lot:

Buffers: (256K + 64K + 96K) * 1000 = 416 MB. Code and various data
(notably kernel socket data): probably far more than 100 MB.
Total: greater than 512 MB.

Maybe you should either:

- Use a smaller number of clients.

or

- Increase memory size up to 1 GB, for example.

or

- Use smaller buffers, for example (rough total worked out below):
  MAX_READAHEAD=15
  32K file buffer
  32K socket buffer
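
Back-of-the-envelope, with the same 4K page assumption as above:

(16 + 16) pages * 4K + 32K + 32K = 192K per client
192K * 1000 clients = 192 MB

which should leave room for code and socket data within 512 MB.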

--

Regards,
Gérard.


