Date: Fri, 24 Apr 2009
From: Mel Gorman <mel@csn.ul.ie>
Subject: Re: [PATCH 15/22] Do not disable interrupts in free_page_mlock()
On Fri, Apr 24, 2009 at 09:33:50AM +0900, KOSAKI Motohiro wrote:
> > > @@ -157,14 +157,9 @@ static inline void mlock_migrate_page(struct page *newpage, struct page *page)
> > > */
> > > static inline void free_page_mlock(struct page *page)
> > > {
> > > - if (unlikely(TestClearPageMlocked(page))) {
> > > - unsigned long flags;
> > > -
> > > - local_irq_save(flags);
> > > - __dec_zone_page_state(page, NR_MLOCK);
> > > - __count_vm_event(UNEVICTABLE_MLOCKFREED);
> > > - local_irq_restore(flags);
> > > - }
> > > + __ClearPageMlocked(page);
> > > + __dec_zone_page_state(page, NR_MLOCK);
> > > + __count_vm_event(UNEVICTABLE_MLOCKFREED);
> > > }
> >
> > The conscientious reviewer runs around and checks for free_page_mlock()
> > callers in other .c files which might be affected.
> >
> > Only there are no such callers.
> >
> > The reviewer's job would be reduced if free_page_mlock() wasn't
> > needlessly placed in a header file!
>
> very sorry.
>
> How about this?
>
> =============================================
> Subject: [PATCH] move free_page_mlock() to page_alloc.c
>
> Currently, free_page_mlock() is only called from page_alloc.c,
> so we can move it there.
>

Looks good, but here is a version rebased on top of the patch series so
that it is easier to merge with "Do not disable interrupts in
free_page_mlock()".

I can see why it might be in the header though - it keeps all the
CONFIG_HAVE_MLOCKED_PAGE_BIT-related helper functions together, making them
easier to find. Lee, was that the intention?
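
As an aside, the reason the hunk quoted above can drop the
local_irq_save()/local_irq_restore() pair is that free_page_mlock() is only
reached from the page-free path, which already runs with interrupts
disabled. A simplified sketch of that calling pattern (illustrative only,
not the exact page_alloc.c code; free_page_sketch is a made-up name):

	static void free_page_sketch(struct page *page)
	{
		unsigned long flags;

		/* The free path disables interrupts once for the whole operation. */
		local_irq_save(flags);

		/*
		 * free_page_mlock() may use the non-atomic
		 * __dec_zone_page_state() and __count_vm_event()
		 * because irqs are already off at this point.
		 */
		if (unlikely(PageMlocked(page)))
			free_page_mlock(page);

		/* ... return the page to the buddy free lists ... */

		local_irq_restore(flags);
	}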

=======
From: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>

Move free_page_mlock() from mm/internal.h to mm/page_alloc.c

Currently, free_page_mlock() is only called from page_alloc.c. This patch
moves it out of the header and into page_alloc.c.

[mel@csn.ul.ie: Rebase on top of page allocator patches]
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
mm/internal.h | 13 -------------
mm/page_alloc.c | 16 ++++++++++++++++
2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 58ec1bc..4b1672a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -150,18 +150,6 @@ static inline void mlock_migrate_page(struct page *newpage, struct page *page)
}
}

-/*
- * free_page_mlock() -- clean up attempts to free and mlocked() page.
- * Page should not be on lru, so no need to fix that up.
- * free_pages_check() will verify...
- */
-static inline void free_page_mlock(struct page *page)
-{
- __ClearPageMlocked(page);
- __dec_zone_page_state(page, NR_MLOCK);
- __count_vm_event(UNEVICTABLE_MLOCKFREED);
-}
-
#else /* CONFIG_HAVE_MLOCKED_PAGE_BIT */
static inline int is_mlocked_vma(struct vm_area_struct *v, struct page *p)
{
@@ -170,7 +158,6 @@ static inline int is_mlocked_vma(struct vm_area_struct *v, struct page *p)
static inline void clear_page_mlock(struct page *page) { }
static inline void mlock_vma_page(struct page *page) { }
static inline void mlock_migrate_page(struct page *new, struct page *old) { }
-static inline void free_page_mlock(struct page *page) { }

#endif /* CONFIG_HAVE_MLOCKED_PAGE_BIT */

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f08b4cb..3db5f57 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -433,6 +433,22 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
return 0;
}

+#ifdef CONFIG_HAVE_MLOCKED_PAGE_BIT
+/*
+ * free_page_mlock() -- clean up attempts to free and mlocked() page.
+ * Page should not be on lru, so no need to fix that up.
+ * free_pages_check() will verify...
+ */
+static inline void free_page_mlock(struct page *page)
+{
+ __ClearPageMlocked(page);
+ __dec_zone_page_state(page, NR_MLOCK);
+ __count_vm_event(UNEVICTABLE_MLOCKFREED);
+}
+#else
+static inline void free_page_mlock(struct page *page) { }
+#endif /* CONFIG_HAVE_MLOCKED_PAGE_BIT */
+
/*
* Freeing function for a buddy system allocator.
*
--
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

