Subject: [PATCH][mmotm] mm/mempolicy.c: add check to avoid queuing hugepage under migration
queue_pages_pmd_range() checks pmd_huge() to find hugepages, but this check
assumes the pmd is in the normal format, so it does not work on a migration
entry, whose format is like that of a swap entry. We can distinguish the two
with the present bit, so we need to check it before calling pmd_huge().
Otherwise, pmd_huge() can wrongly return false for a hugepage under
migration, and the resulting behavior is unpredictable.
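To illustrate the failure mode outside the kernel, here is a minimal
userspace sketch. The bit layout, the MY_PAGE_* constants, and the entry_*
helpers are hypothetical, simplified stand-ins for pmd_present()/pmd_huge(),
not the kernel's real definitions; the point is only that a format check
like pmd_huge() is meaningless on a non-present (migration) entry, which is
why the present-bit check must come first.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified x86-style pmd bits, for illustration only;
 * the real definitions live under arch/ in the kernel tree. */
#define MY_PAGE_PRESENT	0x001ULL	/* bit 0: entry maps memory */
#define MY_PAGE_PSE	0x080ULL	/* bit 7: huge (2MB) mapping */

typedef uint64_t pmd_entry;

/* Analogue of pmd_present(): meaningful for any entry format. */
static bool entry_present(pmd_entry e)
{
	return e & MY_PAGE_PRESENT;
}

/*
 * Analogue of pmd_huge(): only meaningful when the entry is present.
 * A migration entry clears the present bit and reuses the remaining
 * bits as a swap-style type/offset, so this test reads garbage there.
 */
static bool entry_huge(pmd_entry e)
{
	return e & MY_PAGE_PSE;
}

int main(void)
{
	/* A migration entry: present bit clear; the swap-style payload
	 * happens not to set the PSE bit, so entry_huge() says "no". */
	pmd_entry under_migration = 0x200000ULL;

	/* Buggy order: asks "is it huge?" on a non-present entry. */
	printf("buggy: huge=%d (wrong for a hugepage under migration)\n",
	       entry_huge(under_migration));

	/* Fixed order, mirroring the patch: skip non-present entries. */
	if (!entry_present(under_migration))
		printf("fixed: non-present entry skipped\n");
	return 0;
}

With the present-bit check done first, as in the hunk below, a pmd holding
a migration entry is simply skipped rather than misinterpreted.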

This patch is against mmotm-2013-08-27.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
mm/mempolicy.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 64d00c4..0472964 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -553,6 +553,8 @@ static inline int queue_pages_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
+		if (!pmd_present(*pmd))
+			continue;
 		if (pmd_huge(*pmd) && is_vm_hugetlb_page(vma)) {
 			queue_pages_hugetlb_pmd_range(vma, pmd, nodes,
 						flags, private);
--
1.8.3.1
