    Date: Fri, 18 May 2007
    From: Nick Piggin <npiggin@suse.de>
    Subject: Re: [rfc] increase struct page size?!
    On Fri, May 18, 2007 at 12:43:04AM -0700, Andrew Morton wrote:
    > On Fri, 18 May 2007 09:32:23 +0200 Nick Piggin <npiggin@suse.de> wrote:
    >
    > > On Fri, May 18, 2007 at 12:19:05AM -0700, Andrew Morton wrote:
    > > > On Fri, 18 May 2007 06:08:54 +0200 Nick Piggin <npiggin@suse.de> wrote:
    > > >
    > > > > Many batch operations on struct page are completely random,
    > > >
    > > > But they shouldn't be: we should aim to place physically contiguous pages
    > > > into logically contiguous pagecache slots, for all the reasons we
    > > > discussed.
    > >
    > > For big IO batch operations, pagecache would be more likely to be
    > > physically contiguous, as would LRU, I suppose.
    >
    > read(), write(), truncate(), writeback, pagefault. Pretty common stuff.

    Of course, but if you're doing them on random-ish ranges, or multiple
    files, or continually on the same file while parts of it get reclaimed
    and reinstantiated, then that contiguity largely breaks down anyway.


    > > I'm more thinking of operations where things get reclaimed over time,
    > > touched or dirtied in slightly different orderings, interleaved with
    > > other allocations, etc.
    >
    > Yes, that can happen. But in such cases we by definition aren't touching
    > the pageframes very often. I'd assert that when the kernel is really
    > hitting those pageframes hard, it is commonly doing this in ascending
    > pagecache order.

    I'm not sure that I would always agree. Sure, if there is random *IO*
    involved, then it is going to be slow and we won't be hitting page frames
    so hard. But if there is random pagecache access, it will hit the page
    frames almost as hard as ascending-order access would.
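
    To make that concrete, here is a small user-space sketch (not kernel
    code) that counts the cache lines touched when an array of
    struct-page-sized entries is walked in ascending order versus touched
    cold, with no reuse between accesses, which is roughly what random
    pagecache access does to the page frames. The 56-byte entry size and
    64-byte cache line are assumptions about the x86-64 layout under
    discussion, not figures taken from this thread.

    /*
     * Sketch: cache lines touched per struct-page-sized entry for an
     * ascending walk vs. isolated (cold) accesses.
     * Assumes 56-byte entries and 64-byte cache lines -- illustrative only.
     */
    #include <stdio.h>

    #define ENTRY_SIZE  56UL            /* assumed sizeof(struct page) */
    #define LINE_SIZE   64UL            /* assumed cache line size */
    #define NR_ENTRIES  (1UL << 20)

    /* Number of cache lines spanned by entry i of the packed array. */
    static unsigned long lines_for_entry(unsigned long i)
    {
            unsigned long start = i * ENTRY_SIZE;
            unsigned long end = start + ENTRY_SIZE - 1;

            return end / LINE_SIZE - start / LINE_SIZE + 1;
    }

    int main(void)
    {
            unsigned long i, seq_lines, cold_lines = 0;

            /* Ascending walk: consecutive entries share lines, so the walk
             * touches total-bytes / line-size distinct lines in all. */
            seq_lines = (NR_ENTRIES * ENTRY_SIZE + LINE_SIZE - 1) / LINE_SIZE;

            /* Cold, isolated accesses: every line an entry spans is a miss. */
            for (i = 0; i < NR_ENTRIES; i++)
                    cold_lines += lines_for_entry(i);

            printf("ascending walk: %.3f lines/entry\n",
                   (double)seq_lines / NR_ENTRIES);
            printf("cold accesses:  %.3f lines/entry\n",
                   (double)cold_lines / NR_ENTRIES);
            return 0;
    }

    Under those assumptions the ascending walk costs about 0.875 lines per
    page frame while isolated accesses cost 1.75, which is the gap the two
    sides are weighing here.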


    > > > If/when that happens, there will be a *lot* of locality of reference
    > > > against the pageframes in a lot of important codepaths.
    > >
    > > And when it doesn't happen, we eat 75% more cache misses. And for that
    > > matter we eat 75% more cache misses for non-batch operations, like the
    > > slab allocator allocating or freeing a page, for example.
    >
    > "measure twice, cut once"

    Definitely agree there.
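
    For what it's worth, the quoted "75% more cache misses" is consistent
    with a 56-byte struct page packed into 64-byte cache lines; both numbers
    are assumptions here, not something stated in the thread. The packing
    repeats every 8 entries (8 * 56 = 448 = 7 * 64), and 6 of those 8 entries
    straddle a line boundary, so a cold, isolated access averages
    (2*1 + 6*2)/8 = 1.75 lines, 75% more than the single line a 64-byte,
    line-aligned struct page would cost. A minimal check:

    #include <stdio.h>

    int main(void)
    {
            /* Assumed sizes: 56-byte struct page, 64-byte cache line. */
            unsigned long entry = 56, line = 64, i, total = 0;

            /* Lines touched by each of the 8 entries in one repeat of the
             * packed layout; a 64-byte aligned entry would always touch 1. */
            for (i = 0; i < 8; i++) {
                    unsigned long start = i * entry;
                    unsigned long end = start + entry - 1;

                    total += end / line - start / line + 1;
            }
            printf("average lines per cold access: %.2f\n", (double)total / 8);
            return 0;
    }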
