Subject: [PATCH 07/11] x86/mm: Remove pgd_list use from vmalloc_sync_all()
The vmalloc() code uses vmalloc_sync_all() to synchronize changes to
the global reference kernel PGD to task PGDs in certain rare cases,
like register_die_notifier().
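
For reference, the call site in question is register_die_notifier()
in kernel/notifier.c, which (roughly) does the sync before
registering the notifier:

int register_die_notifier(struct notifier_block *nb)
{
	vmalloc_sync_all();
	return atomic_notifier_chain_register(&die_chain, nb);
}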

This use seems somewhat questionable, as most other vmalloc page
table fixups are vmalloc_fault() driven, but nevertheless it's there
and it uses the pgd_list.

But we don't need the global list, as we can walk the task list
under RCU.
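
As an illustration (not part of the patch), the RCU task-list walk
pattern relied on here looks roughly like the sketch below, where
sync_one_mm() is a hypothetical stand-in for the per-PGD work that
vmalloc_sync_one() does in the hunk further down:

#include <linux/sched.h>	/* for_each_process(), task_unlock()	*/
#include <linux/oom.h>		/* find_lock_task_mm()			*/
#include <linux/rcupdate.h>	/* rcu_read_lock(), ...			*/

static void walk_task_mms(void (*sync_one_mm)(struct mm_struct *mm))
{
	struct task_struct *g;

	rcu_read_lock();			/* Pins the task list */
	for_each_process(g) {
		struct task_struct *p;

		/*
		 * Find a live thread of 'g' that still has an mm and
		 * take its task_lock(), so the mm cannot go away under
		 * us; skip groups that have already exited.
		 */
		p = find_lock_task_mm(g);
		if (!p)
			continue;

		sync_one_mm(p->mm);

		task_unlock(p);
	}
	rcu_read_unlock();
}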

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <Waiman.Long@hp.com>
Cc: linux-mm@kvack.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/mm/fault.c | 29 ++++++++++++++++++++++-------
1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index f890f5463ac1..9322d5ad3811 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -14,6 +14,7 @@
 #include <linux/prefetch.h>		/* prefetchw			*/
 #include <linux/context_tracking.h>	/* exception_enter(), ...	*/
 #include <linux/uaccess.h>		/* faulthandler_disabled()	*/
+#include <linux/oom.h>			/* find_lock_task_mm(), ...	*/
 
 #include <asm/traps.h>			/* dotraplinkage, ...		*/
 #include <asm/pgalloc.h>		/* pgd_*(), ...			*/
@@ -237,24 +238,38 @@ void vmalloc_sync_all(void)
 	for (address = VMALLOC_START & PMD_MASK;
 	     address >= TASK_SIZE && address < FIXADDR_TOP;
 	     address += PMD_SIZE) {
-		struct page *page;
 
+		struct task_struct *g;
+
+		rcu_read_lock(); /* Task list walk */
 		spin_lock(&pgd_lock);
-		list_for_each_entry(page, &pgd_list, lru) {
+
+		for_each_process(g) {
+			struct task_struct *p;
+			struct mm_struct *mm;
 			spinlock_t *pgt_lock;
-			pmd_t *ret;
+			pmd_t *pmd_ret;
+
+			p = find_lock_task_mm(g);
+			if (!p)
+				continue;
 
-			/* the pgt_lock only for Xen */
-			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
+			mm = p->mm;
 
+			/* The pgt_lock is only used on Xen: */
+			pgt_lock = &mm->page_table_lock;
 			spin_lock(pgt_lock);
-			ret = vmalloc_sync_one(page_address(page), address);
+			pmd_ret = vmalloc_sync_one(mm->pgd, address);
 			spin_unlock(pgt_lock);
 
-			if (!ret)
+			task_unlock(p);
+
+			if (!pmd_ret)
 				break;
 		}
+
 		spin_unlock(&pgd_lock);
+		rcu_read_unlock();
 	}
 }

--
2.1.4

