Subject: Re: [PATCH 2/2] printk/panic/x86: Allow to access printk log buffer after crash_smp_send_stop()
On Thu 2019-07-18 20:29:54, Sergey Senozhatsky wrote:
> On (07/18/19 14:07), Konstantin Khlebnikov wrote:
> > > Let me test the waters. Criticize the following idea:
> > >
> > > Can we, sort of, disconnect "supposed to be dead" CPUs from printk()
> > > so then we can unconditionally re-init printk() from panic-CPU?
> > >
> > > We have per-CPU printk_state; so panic-CPU can set, let's say,
> > > DEAD_CPUS_TELL_NO_TALES bit on all CPUs but self, and vprintk_func()
> > > will do nothing if DEAD_CPUS_TELL_NO_TALES bit set on particular
> > > CPU. Foreign CPUs are not even supposed to be alive, and smp_send_stop()
> > > waits for IPI acks from secondary CPUs long enough on average (need
> > > to check that) so if one of the CPUs is misbehaving and doesn't want
> > > to die (geez...) we will just "disconnect" it from printk() to minimize
> > > possible logbuf/console drivers interventions and then proceed with
> > > panic; assuming that misbehaving CPUs are actually up to something
> > > sane. Sometimes, you know, in some cases, those CPUs are already dead:
> > > either accidentally powered off, or went completely nuts and do nothing,
> > > etc. etc. but we still can kdump() and console_flush_on_panic().
> >
> > Good idea.
> > Panic-CPU could just increment state to reroute printk into 'safe'
> > per-cpu buffer.
>
> Yeah, that's better.
>
> So we can do something like this
>
> @@ -269,15 +269,21 @@ void printk_safe_flush_on_panic(void)
> 	 * Make sure that we could access the main ring buffer.
> 	 * Do not risk a double release when more CPUs are up.
> 	 */
> -	if (raw_spin_is_locked(&logbuf_lock)) {
> -		if (num_online_cpus() > 1)
> -			return;
> +	debug_locks_off();
> +	raw_spin_lock_init(&logbuf_lock);

Hmm, the check for num_online_cpus() is there to avoid a deadlock
caused by a double unlock. The unconditional init would bring that risk back.

It is a question of which compromise we take. Either we try to get the
messages out, re-init the lock, and risk a deadlock, or we stay
conservative, do not risk the deadlock, and possibly leave the messages
in the buffer.
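
For reference, a sketch of the conservative variant with the reasoning
spelled out in comments (reconstructed from memory of
kernel/printk/printk_safe.c, so the exact context may differ):

void printk_safe_flush_on_panic(void)
{
	/*
	 * Make sure that we could access the main ring buffer.
	 * If another CPU is still online, it might hold logbuf_lock;
	 * re-initializing the lock under its feet could later show up
	 * as a double release, so give up instead.
	 */
	if (raw_spin_is_locked(&logbuf_lock)) {
		if (num_online_cpus() > 1)
			return;

		debug_locks_off();
		raw_spin_lock_init(&logbuf_lock);
	}

	printk_safe_flush();
}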

The current code is not consistent: printk_safe_flush_on_panic()
is conservative and does not risk the deadlock, while kmsg_dump(),
console_unblank(), and console_flush_on_panic() do risk it.

The 1st patch tries to make this more consistent. It makes all
the above functions as conservative as printk_safe_flush_on_panic()
regarding logbuf_lock, although they still risk a deadlock on
console-specific locks.
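
To illustrate what "as conservative regarding logbuf_lock" could look
like, here is a rough sketch; the helper name is made up for this mail
and is not what the patch actually does:

/* Illustrative only, not taken from the patch. */
static bool logbuf_lock_usable_on_panic(void)
{
	if (!raw_spin_is_locked(&logbuf_lock))
		return true;

	/*
	 * Someone holds the lock. Breaking it is only safe when no
	 * other CPU can release it behind our back.
	 */
	if (num_online_cpus() > 1)
		return false;

	debug_locks_off();
	raw_spin_lock_init(&logbuf_lock);
	return true;
}

kmsg_dump(), console_unblank(), and console_flush_on_panic() would then
skip the logbuf_lock-protected parts when such a check fails instead of
spinning on the lock.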

The reasoning might be:

+ The code under logbuf_lock mostly just manipulates strings. There
are no nested locks, and any infinite loop there would most likely
have shown up during normal operation as well. In other words, it is
not easy to get stuck under logbuf_lock (he, he, famous last words).

+ On the other hand, a lot of code runs with interrupts disabled.
It is easier to imagine a CPU that could not be stopped by a normal
interrupt because of a deadlock or an infinite loop in interrupt
context; it might even be code calling console_unlock().
Re-initializing logbuf_lock in that situation might be an
unnecessary risk.

+ We are probably more relaxed about console-specific locks because
most of them are ignored in panic after bust_spinlocks(); see the
sketch below.
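
For completeness, the usual pattern in console ->write() callbacks that
makes them mostly ignore their own lock once bust_spinlocks() has set
oops_in_progress; demo_console_write() and demo_port_lock are made-up
names for this sketch:

#include <linux/console.h>
#include <linux/printk.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_port_lock);

static void demo_console_write(struct console *con, const char *s,
			       unsigned int count)
{
	unsigned long flags;
	int locked = 1;

	if (oops_in_progress) {
		/*
		 * Oops/panic path: do not block on a lock that a
		 * stopped CPU may still hold; just try once.
		 */
		locked = spin_trylock_irqsave(&demo_port_lock, flags);
	} else {
		spin_lock_irqsave(&demo_port_lock, flags);
	}

	/* push the characters out to the hardware here */

	if (locked)
		spin_unlock_irqrestore(&demo_port_lock, flags);
}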


Anyway, this patchset is primarily about logbuf_lock-related deadlocks.
I needed a fix for our customers and I do not want to spend too much
time on upstreaming it. IMHO, it is better to invest in the lockless
ring buffer.

If we can agree that the patches make things better (more
consistent, safer in kdump with notifiers enabled) and that
the complexity is acceptable, then let's accept them
(with some trivial improvements).

If they are too controversial, risky, or complex, then
let's abandon them and concentrate on the lockless ring buffer.

Best Regards,
Petr
