Subject: RE: [RFC][PATCH v4 -next 1/4] Move kmsg_dump(KMSG_DUMP_PANIC) below smp_send_stop()
Date: 2012-01-20
> Do you have any comments?

I'm stuck because I don't know how to assign probabilities to
the failure cases with kmsg_dump() before and after smp_send_stop().

There's a well-documented tendency in humans to stick with the status
quo in such situations. I'm definitely finding it hard to provide
a positive recommendation (ACK).

So I'll just talk out loud here for a bit in case someone sees
something obviously flawed in my understanding.

Problem statement: We'd like to maximize our chances of saving the
tail of the kernel log when the system goes down. With the current
ordering there is a concern that other cpus will interfere with the
one that is saving the log.
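
For concreteness, the two orderings being compared look roughly like
this (a minimal sketch of the panic() flow, not the exact
kernel/panic.c source):

	/* current flow: dump while the other cpus may still be running */
	kmsg_dump(KMSG_DUMP_PANIC);
	smp_send_stop();

	/* proposed flow: stop the other cpus first, then dump */
	smp_send_stop();
	kmsg_dump(KMSG_DUMP_PANIC);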

Problems in current code flow:
*) Other cpus might hold locks that we need. Our options are to fail,
or to "bust" the locks (but busting the locks may lead to other
problems in the code path - those locks were there for a reason).
There are only a couple of ways that this could be an issue.
1) The lock is held because someone is doing some other pstore
filesystem operation (reading and erasing records). This has a
very low probability. Normal code flow will have some process harvest
records from pstore in some /etc/rc.d/* script - this process should
take much less than a second.
2) The lock is held because some other kmsg_dump() store is in progress.
This one seems more worrying - think of an OOPS (or several) right
before we panic (see the sketch after this list).
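
To make the "fail or bust" choice concrete, here is a hypothetical
sketch of what a dumper could do on finding the record-store lock
already taken (store_lock and the surrounding fragment are made up
for illustration - this is not the actual fs/pstore code):

	static DEFINE_SPINLOCK(store_lock);

	/* inside the kmsg dump callback */
	unsigned long flags;

	if (!spin_trylock_irqsave(&store_lock, flags)) {
		if (reason != KMSG_DUMP_PANIC)
			return;		/* fail: skip this dump */
		/*
		 * bust: reinitialize the lock and barge in.  Whatever
		 * the previous holder was doing to the buffer or the
		 * device may be left half-done.
		 */
		spin_lock_init(&store_lock);
		spin_lock_irqsave(&store_lock, flags);
	}
	/* ... hand the tail of the log to the back end ... */
	spin_unlock_irqrestore(&store_lock, flags);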

Problems in proposed code flow:
*) smp_send_stop() fails:
1) doesn't actually stop other cpus (we are no worse off than before we
made this change)
2) doesn't return - so we don't even try to dump to the pstore back end. x86
code has recently been hardened (though I can still imagine a pathological
case where in a crash the cpu calling this is uncertain of its own
identity, and somehow manages to stop itself - perhaps we are so screwed up
in this case that we have no hope anyway)
*) Even if it succeeds - we may still run into problems busting locks because
even though the cpu that held them isn't executing, the data structures
or device registers protected by the lock may be in an inconsistent state
(see the sketch below).
*) If we had just let these other cpus keep running, they'd have finished
their operation and freed up the problem lock anyway.
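
To illustrate that inconsistent-state hazard, imagine the stopped cpu
was halted half way through a two-step device programming sequence
(register names are invented; any multi-step protocol has the same
shape):

	/* cpu A - halted by smp_send_stop() while holding dev_lock */
	spin_lock(&dev_lock);
	writel(addr, base + REG_ADDR);	/* step 1 done */
		/* <-- stop IPI/NMI lands here */
	writel(len, base + REG_LEN);	/* step 2 never happens */
	spin_unlock(&dev_lock);		/* never happens */

	/* cpu B - panicking: it can bust dev_lock, but the device is
	 * now half-programmed, so the dump attempted through it may
	 * wedge or corrupt the very record we are trying to save */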

-Tony

