Subject: Re: [PATCH next v1 1/2] dump_stack: move cpu lock to printk.c
On Mon 2021-05-31 18:20:50, John Ogness wrote:
> dump_stack() implements its own cpu-reentrant spinning lock to
> best-effort serialize stack traces in the printk log. However,
> there are other functions (such as show_regs()) that can also
> benefit from this serialization.
>
> Move the cpu-reentrant spinning lock (cpu lock) into new helper
> functions printk_cpu_lock()/printk_cpu_unlock() so that it is
> available for others as well. For !CONFIG_PRINTK or !CONFIG_SMP
> the cpu lock is a NOP.
>
> Note that having multiple cpu locks in the system can easily
> lead to deadlock. Code needing a cpu lock should use the
> printk cpu lock, since the printk cpu lock could be acquired
> from any code and any context.
>
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index 114e9963f903..98feead621ff 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -3531,4 +3531,96 @@ void kmsg_dump_rewind(struct kmsg_dump_iter *iter)
> }
> EXPORT_SYMBOL_GPL(kmsg_dump_rewind);
>
> +#ifdef CONFIG_SMP
> +static atomic_t printk_cpulock_owner = ATOMIC_INIT(-1);
> +
> +/*
> + * printk_cpu_lock: Acquire the printk cpu-reentrant spinning lock.
> + * @cpu_store: A buffer to store lock state.
> + * @flags: A buffer to store irq state.
> + *
> + * If no processor has the lock, the calling processor takes the lock and
> + * becomes the owner. If the calling processor is already the owner of the
> + * lock, this function succeeds immediately. If the lock is held by another
> + * processor, this function spins until the calling processor becomes the
> + * owner.
> + *
> + * It is safe to call this function from any context and state.
> + */
> +void printk_cpu_lock(unsigned int *cpu_store, unsigned long *flags)

How about calling this printk_cpu_lock_irqsave() to make it clear
that it disables interrupts?

Strictly speaking, it should be enough to disable preemption: if the
lock is safe when interrupted by an NMI, it must also be safe when
interrupted by a normal interrupt.
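
For illustration, the cpu lock is modeled on the old dump_stack()
serialization (the patch body is trimmed in the quote above). A rough
sketch of the acquire path, built around the printk_cpulock_owner
atomic from the quoted hunk; this is a paraphrase, not the actual
patch code:

void printk_cpu_lock(unsigned int *cpu_store, unsigned long *flags)
{
	unsigned int cpu;
	int old;

	for (;;) {
		local_irq_save(*flags);
		cpu = smp_processor_id();

		/* The lock word holds the owner's CPU id, or -1 if unowned. */
		old = atomic_cmpxchg(&printk_cpulock_owner, -1, cpu);
		if (old == -1 || old == cpu)
			break;	/* Acquired, or re-entered by the owner. */

		/* Another CPU owns the lock: wait with irqs enabled. */
		local_irq_restore(*flags);
		while (atomic_read(&printk_cpulock_owner) != -1)
			cpu_relax();
	}

	/* -1 means this is the outermost acquisition on this CPU. */
	*cpu_store = old;
}

The same-owner check is what makes re-entry from an NMI (or any
interrupt) on the owning CPU succeed immediately instead of
deadlocking.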

I guess that interrupts are disabled because it reduces the risk of
nested (interleaved) backtraces.

Anyway, I would keep the current approach (disabled irqs) unless we
have a good reason to change it. That said, keeping irqs enabled might
be better for RT.
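
FWIW, a caller would pair the two helpers like this (a minimal sketch;
the function name and body are made up, and I am assuming that
printk_cpu_unlock() takes the stored state by value, mirroring the
lock side):

void dump_some_state(void)
{
	unsigned int cpu_store;
	unsigned long flags;

	printk_cpu_lock(&cpu_store, &flags);

	/*
	 * Everything here is serialized against backtraces from other
	 * CPUs in the printk log. dump_stack() may take the cpu lock
	 * again; the nested acquisition succeeds immediately.
	 */
	pr_info("CPU state:\n");
	dump_stack();

	printk_cpu_unlock(cpu_store, flags);
}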

Best Regards,
Petr
