From: Nicholas Piggin
Subject: Re: [PATCH v4 2/4] lazy tlb: allow lazy tlb mm refcounting to be configurable
Excerpts from Nicholas Piggin's message of June 16, 2021 11:02 am:
> Excerpts from Andy Lutomirski's message of June 16, 2021 10:14 am:
>> akpm, please drop this series until it's fixed. It's a core change to
>> better support arch usecases, but it's unnecessarily fragile, and there
>> is already an arch maintainer pointing out that it's inadequate to
>> robustly support arch usecases. There is no reason to merge it in its
>> present state.

Just to make sure I'm not doing anything stupid or fragile for other
archs, I had a closer look at a few. sparc32 is the only one I have an
SMP-capable qemu and initramfs at hand for; it took about 5 minutes to
convert after fixing 2 other sparc32/mm bugs (patches on linux-sparc),
one of them found by the DEBUG_VM code my series added. It seems to work
fine, with what little stressing my qemu setup can muster.

Simple. Robust. Pretty mechanical conversion that follows the documented
recipe. Re-uses every single line of code I added outside
arch/powerpc/. Requires no elaborate dances.
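
In concrete terms (this is just a summary of the sparc32 patch below,
not the full recipe from the series' documentation), the conversion
boils down to two things:

  # arch/<arch>/Kconfig: opt the architecture in
  + select MMU_LAZY_TLB_SHOOTDOWN

  /* and wherever the arch takes a lazy tlb reference on init_mm,
   * use the lazy helper instead of a bare mmgrab():
   */
  - mmgrab(&init_mm);
  + mmgrab_lazy_tlb(&init_mm);
    current->active_mm = &init_mm;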

alpha and arm64 are both 4-liners by the looks of it; sparc64 might require
a bit of actual code but doesn't look too hard.

So I'm satisfied the code added outside arch/powerpc/ is not some
fragile powerpc specific hack. I don't know if other archs will use
it, but they easily can use it[*].

And we can make changes to help x86 whenever it's needed -- I already
posted patch 1/n for configuring out lazy tlb and active_mm from core
code, rebased on top of mmotm, so the series is not preventing such
changes.

Hopefully this allays some concerns.

[*] I do think mmgrab_lazy_tlb is a nice change that self-documents the
active_mm refcounting, so I will try to get all the arch code
converted to use it over the next few releases, even if they never
switch to lazy tlb shootdown.
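
(For anyone who hasn't followed the series: a minimal sketch of how such a
helper can be defined. The CONFIG_MMU_LAZY_TLB_REFCOUNT symbol and the exact
shape are assumptions for illustration, not necessarily what the patches add
verbatim.)

#include <linux/sched/mm.h>	/* mmgrab(), struct mm_struct */

/* Sketch only: grab a reference on an mm used purely as a lazy tlb mm.
 * With refcounted lazy tlb this is a real mmgrab(); with shootdown-based
 * lazy tlb (MMU_LAZY_TLB_SHOOTDOWN) it compiles away, since the final
 * lazy user is removed by IPI rather than by holding a reference.
 */
static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
{
	if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
		mmgrab(mm);
}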

Thanks,
Nick

---
 arch/sparc/Kconfig            | 1 +
 arch/sparc/kernel/leon_smp.c  | 2 +-
 arch/sparc/kernel/smp_64.c    | 2 +-
 arch/sparc/kernel/sun4d_smp.c | 2 +-
 arch/sparc/kernel/sun4m_smp.c | 2 +-
 arch/sparc/kernel/traps_32.c  | 2 +-
 arch/sparc/kernel/traps_64.c  | 2 +-
 7 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 164a5254c91c..db9954af57a2 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -58,6 +58,7 @@ config SPARC32
select GENERIC_ATOMIC64
select CLZ_TAB
select HAVE_UID16
+ select MMU_LAZY_TLB_SHOOTDOWN
select OLD_SIGACTION

config SPARC64
diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
index 1eed26d423fb..d00460788048 100644
--- a/arch/sparc/kernel/leon_smp.c
+++ b/arch/sparc/kernel/leon_smp.c
@@ -92,7 +92,7 @@ void leon_cpu_pre_online(void *arg)
: "memory" /* paranoid */);

/* Attach to the address space of init_task. */
- mmgrab(&init_mm);
+ mmgrab_lazy_tlb(&init_mm);
current->active_mm = &init_mm;

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index e38d8bf454e8..19aa12991f2b 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -127,7 +127,7 @@ void smp_callin(void)
current_thread_info()->new_child = 0;

/* Attach to the address space of init_task. */
- mmgrab(&init_mm);
+ mmgrab_lazy_tlb(&init_mm);
current->active_mm = &init_mm;

/* inform the notifiers about the new cpu */
diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
index ff30f03beb7c..a6f392dcfeaf 100644
--- a/arch/sparc/kernel/sun4d_smp.c
+++ b/arch/sparc/kernel/sun4d_smp.c
@@ -94,7 +94,7 @@ void sun4d_cpu_pre_online(void *arg)
show_leds(cpuid);

/* Attach to the address space of init_task. */
- mmgrab(&init_mm);
+ mmgrab_lazy_tlb(&init_mm);
current->active_mm = &init_mm;

local_ops->cache_all();
diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
index 228a6527082d..0ee77f066c9e 100644
--- a/arch/sparc/kernel/sun4m_smp.c
+++ b/arch/sparc/kernel/sun4m_smp.c
@@ -60,7 +60,7 @@ void sun4m_cpu_pre_online(void *arg)
: "memory" /* paranoid */);

/* Attach to the address space of init_task. */
- mmgrab(&init_mm);
+ mmgrab_lazy_tlb(&init_mm);
current->active_mm = &init_mm;

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
diff --git a/arch/sparc/kernel/traps_32.c b/arch/sparc/kernel/traps_32.c
index 247a0d9683b2..a3186bb30109 100644
--- a/arch/sparc/kernel/traps_32.c
+++ b/arch/sparc/kernel/traps_32.c
@@ -387,7 +387,7 @@ void trap_init(void)
thread_info_offsets_are_bolixed_pete();

/* Attach to the address space of init_task. */
- mmgrab(&init_mm);
+ mmgrab_lazy_tlb(&init_mm);
current->active_mm = &init_mm;

/* NOTE: Other cpus have this done as they are started
diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
index a850dccd78ea..b6e46732fa69 100644
--- a/arch/sparc/kernel/traps_64.c
+++ b/arch/sparc/kernel/traps_64.c
@@ -2929,6 +2929,6 @@ void __init trap_init(void)
/* Attach to the address space of init_task. On SMP we
* do this in smp.c:smp_callin for other cpus.
*/
- mmgrab(&init_mm);
+ mmgrab_lazy_tlb(&init_mm);
current->active_mm = &init_mm;
}
--
2.23.0