Subject: Re: [PATCH] mm: disable preemption in apply_to_pte_range
Jeremy Fitzhardinge wrote:
> commit 79d9c90453a7bc9e7613ae889a97ff6b44ab8380

Scratch that. This instead.
J

mm: disable preemption in apply_to_pte_range

Lazy mmu mode needs preemption disabled, so if we're applying to
init_mm (which doesn't require any pte locks), then explicitly
disable preemption. (Do it unconditionally after checking we've
successfully done the allocation to simplify the error handling.)

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>

diff --git a/mm/memory.c b/mm/memory.c
index baa999e..b80cc31 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1718,6 +1718,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 
 	BUG_ON(pmd_huge(*pmd));
 
+	preempt_disable();
 	arch_enter_lazy_mmu_mode();
 
 	token = pmd_pgtable(*pmd);
@@ -1729,6 +1730,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 
 	arch_leave_lazy_mmu_mode();
+	preempt_enable();
 
 	if (mm != &init_mm)
 		pte_unmap_unlock(pte-1, ptl);
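
For reference, the function roughly reads as follows with the change applied.
This is a sketch based on the mm/memory.c of this period, with the ptl/err
declarations simplified, so take it as an illustration rather than the exact
tree contents:

static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
			      unsigned long addr, unsigned long end,
			      pte_fn_t fn, void *data)
{
	pte_t *pte;
	pgtable_t token;
	spinlock_t *ptl;
	int err;

	/* init_mm ptes take no pte lock, so nothing else disables preemption */
	pte = (mm == &init_mm) ?
		pte_alloc_kernel(pmd, addr) :
		pte_alloc_map_lock(mm, pmd, addr, &ptl);
	if (!pte)
		return -ENOMEM;

	BUG_ON(pmd_huge(*pmd));

	/* lazy mmu batching uses per-cpu state, so pin this cpu first */
	preempt_disable();
	arch_enter_lazy_mmu_mode();

	token = pmd_pgtable(*pmd);

	do {
		err = fn(pte, token, addr, data);
		if (err)
			break;
	} while (pte++, addr += PAGE_SIZE, addr != end);

	arch_leave_lazy_mmu_mode();
	preempt_enable();

	if (mm != &init_mm)
		pte_unmap_unlock(pte - 1, ptl);
	return err;
}

Since the preempt_disable()/preempt_enable() pair brackets the lazy mmu
section on both the init_mm and user-mm paths, it can be done
unconditionally, which keeps the error handling simple.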


