Subject: Re: RFC: CONFIG_PAGE_SHIFT (aka software PAGE_SIZE)
On Fri, Jul 13, 2007 at 04:31:09PM +0200, Andrea Arcangeli wrote:
> On Fri, Jul 13, 2007 at 05:13:08PM +1000, David Chinner wrote:
> > Sure. Fundamentally, though, I think it is the wrong approach to
> > take - it's a workaround for a big negative side effect of
> > increasing page size. It introduces lots of complexity and
> > difficult-to-test corner cases; judging by the tail packing problems
> > reiser3 has had over the years, it has the potential to be a
> > never-ending source of data corruption bugs.
> >
> > I think that fine granularity and aggregation for efficiency of
> > scale is a better model to use than increasing the base page size.
> > With PPC, you can handle different page sizes in the hardware (like
> > MIPS) and the use of 64k base page size is an obvious workaround to
> > the problem of not being able to use multiple page sizes within the
> > OS.
>
> I think you're being too fs-centric. Moving only the pagecache to a
> large order is enough for you, but it isn't enough for me: I'd like
> all allocations to be faster, and I'd like to reduce the page fault
> rate.

Right, and that is done on other operating systems by supporting
multiple hardware page sizes and telling the relevant applications to
use larger pages (e.g. via cpuset configuration).
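
For concreteness, here is a minimal sketch of what "telling the
relevant applications to use larger pages" can look like on Linux via
hugetlbfs. The /mnt/huge mount point and file name are hypothetical,
error handling is pared down, and it assumes the administrator has
already reserved huge pages:

/* Map a file on a hugetlbfs mount so the region is backed by huge
 * pages instead of 4k base pages.  The mount point and file name
 * below are made up for the example. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define HUGEFILE    "/mnt/huge/example"   /* hypothetical hugetlbfs file */
#define MAP_LENGTH  (16UL * 1024 * 1024)  /* multiple of the huge page size */

int main(void)
{
	void *addr;
	int fd;

	fd = open(HUGEFILE, O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		exit(1);
	}

	addr = mmap(NULL, MAP_LENGTH, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		close(fd);
		unlink(HUGEFILE);
		exit(1);
	}

	/* Each fault in this range is now satisfied with one huge page
	 * rather than a run of 4k base pages. */
	memset(addr, 0, MAP_LENGTH);

	munmap(addr, MAP_LENGTH);
	close(fd);
	unlink(HUGEFILE);
	return 0;
}

The point being that the application (or whatever launches it) has to
opt in explicitly - the kernel keeps its 4k base page for everything
else.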

> The CONFIG_PAGE_SHIFT isn't just about I/O. It's just that
> CONFIG_PAGE_SHIFT will give you the I/O side for free too.

It's not for free, and that's one of the points I've been trying
to make.

> Also keep in mind that mixing multiple page sizes for different inodes
> has the potential to screw up the aging algorithms in the reclaim
> code. Just to make an example: during real random I/O over all bits of
> hot cache in pagecache, a 64k page has 16 times more probability of
> being marked young than a 4k page.

Sure, but if a page is being hit repeatedly - regardless of its
size - then you want to keep it around....
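
To put a rough number on the quoted 16x: with 16 4k blocks behind
every 64k page, a sparse random touch pattern will set the referenced
bit of a 64k page roughly 16 times as often as that of any single 4k
page (a little less once touches start landing in the same page). A
throwaway userspace simulation - working-set size and touch count
picked arbitrarily, and assuming a glibc-style rand() with a large
RAND_MAX - shows the shape of it:

/* Monte Carlo for the reference-bit argument above: touch random 4k
 * blocks in a hot set, then compare how often a single 4k page vs a
 * 64k page (16 consecutive 4k blocks) ends up marked referenced. */
#include <stdio.h>
#include <stdlib.h>

#define BLOCKS       (1 << 20)      /* 4k blocks in the hot set        */
#define TOUCHES      (BLOCKS / 64)  /* sparse: ~1.5% of blocks touched */
#define BLK_PER_64K  16             /* 64k / 4k                        */

int main(void)
{
	static unsigned char touched[BLOCKS];
	long hit4k = 0, hit64k = 0;
	int i, j;

	srand(1);			/* fixed seed, reproducible run */
	for (i = 0; i < TOUCHES; i++)
		touched[rand() % BLOCKS] = 1;

	for (i = 0; i < BLOCKS; i++)
		hit4k += touched[i];

	for (i = 0; i < BLOCKS; i += BLK_PER_64K)
		for (j = 0; j < BLK_PER_64K; j++)
			if (touched[i + j]) {
				hit64k++;
				break;
			}

	printf("P(4k page referenced)  = %.4f\n",
	       (double)hit4k / BLOCKS);
	printf("P(64k page referenced) = %.4f\n",
	       (double)hit64k / (BLOCKS / BLK_PER_64K));
	return 0;
}

With these parameters the expected values come out around 0.016 vs
0.22 - a ratio of roughly 14, approaching 16 as the touch pattern gets
sparser.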

> The tail packing of pagecache could very well be worth it. It should
> cost nothing for the large files.

As I've said before - I'm not just concerned with large files - I'm
also concerned about large numbers of files (hundreds of millions to
billions in a filesystem) and the scalability issues involved with
them. IOWs, I'm looking at metadata scalability as much as data
scalability.

It's flexibility that I need from the VM, not pure VM efficiency.
Shifting the base page size is not an efficient solution to the
different aspects of filesystem scalability. We've got to deal with
both ends of the spectrum simultaneously on the one machine in the
same filesystem, and it's only going to get worse in the future.....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
