 
Subject: Re: [patch 7/7] powerpc: lockless get_user_pages_fast
On Tue, Jun 10, 2008 at 12:00:48PM -0700, Christoph Lameter wrote:
> On Thu, 5 Jun 2008, npiggin@suse.de wrote:
>
> > Index: linux-2.6/include/linux/mm.h
> > ===================================================================
> > --- linux-2.6.orig/include/linux/mm.h
> > +++ linux-2.6/include/linux/mm.h
> > @@ -244,7 +244,7 @@ static inline int put_page_testzero(stru
> > */
> > static inline int get_page_unless_zero(struct page *page)
> > {
> > - VM_BUG_ON(PageTail(page));
> > + VM_BUG_ON(PageCompound(page));
> > return atomic_inc_not_zero(&page->_count);
> > }
>
> This is reversing the modification to make get_page_unless_zero() usable
> with compound page heads. Will break the slab defrag patchset.

Is the slab defrag patchset in -mm? You ignored my comment about this change:
assertions should not be weakened until the patchset that actually requires
the weaker check goes in. I want these assertions to be as strong as possible
for the lockless pagecache patchset.
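To illustrate why the assertion matters, the pattern the lockless pagecache
side (and gup_fast) depends on looks roughly like the sketch below. This is
not code from either patchset; speculative_lookup_sketch() is a made-up name,
while get_page_unless_zero(), radix_tree_lookup(), rcu_read_lock()/
rcu_read_unlock(), put_page() and mapping->page_tree are the existing
kernel API:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>

/*
 * Look up a pagecache page without taking tree_lock.  Nothing except
 * the atomic inc-not-zero inside get_page_unless_zero() pins the page,
 * so the page could be freed and reused concurrently at any point.
 */
static struct page *speculative_lookup_sketch(struct address_space *mapping,
					      unsigned long index)
{
	struct page *page;

	rcu_read_lock();
	page = radix_tree_lookup(&mapping->page_tree, index);
	if (page && !get_page_unless_zero(page))
		page = NULL;	/* lost the race with the final put_page() */
	rcu_read_unlock();

	/*
	 * The page may have changed identity between the lookup and the
	 * reference grab; callers must re-check page->mapping and
	 * page->index, and put_page() if the check fails.
	 */
	return page;
}

Since nothing but _count protects the page here, a compound or tail page
ever reaching get_page_unless_zero() from such a path would indicate a
caller bug, which is exactly what the stronger VM_BUG_ON is meant to catch.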



