From: Tejun Heo
Date: 2018-01-23
Subject: Re: [PATCH v5 0/2] printk: Console owner and waiter logic cleanup

Hey,

On Tue, Jan 23, 2018 at 11:13:30AM -0500, Steven Rostedt wrote:
> From what I understand, there's an issue with one of the printk
> consoles, due to memory pressure or whatnot. Then a printk happens
> recursively within a printk. It gets put into the safe buffer, and an
> irq work is queued to print that printk.
>
> The issue, as you describe it, is that when the printk enables
> interrupts, the irq work triggers and flushes the safe buffer into
> the log buffer; the printk then sees the new data and continues to
> print, and hence never leaves this printk.

I'm not sure whether it's the irq work or the same calling context,
but yeah, whatever it is, it keeps adding new data.
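
Roughly, the shape of the loop as I understand it (pseudo-C with
hypothetical helper names, not the actual console_unlock() internals):

	static void flushing_loop(void)
	{
		while (more_records()) {	/* never false here */
			print_to_consoles();	/* slow console, takes a while */
			/*
			 * Printing triggers another printk; it lands in
			 * the safe buffer, gets flushed back into the
			 * main log buffer, and this loop sees new data
			 * and keeps going.  The flushing context never
			 * escapes.
			 */
		}
	}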

> Your solution is to delay the flushing of the safe buffer to another
> thread (work queue), which I also have issues with, because you break
> the "get printks out ASAP" mantra. Then the work queue comes in and
> flushes the printks. And since printks cause printks, we continue to
> spam the machine, but hey, we are making forward progress.

I'm not sure the "get printks out ASAP" mantra is the overriding
concern after spending 20s flushing in an unknown context. I'm
honestly curious: would that still matter much at that point? I went
through the recent common crashes in our fleet earlier today, and a
good number of them are printk taking too long and unnecessarily
escalating the situation (most commonly by triggering the NMI
watchdog). I'm not saying this should override other concerns, but it
seems clear to me that we're pretty badly exposed on this front.

> Again, this is treating the symptom and not solving the problem.

Or adding a safety net for when things go south, but that isn't what
I was trying to argue. I mostly thought your understanding of what I
reported wasn't accurate and wanted to clear that up.
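
For reference, the deferral is roughly the following shape (a sketch
only, not the actual patch; recursion_detected() is a hypothetical
predicate, printk_safe_flush() is the existing flush entry point):

	#include <linux/workqueue.h>
	#include <linux/printk.h>

	static void safe_flush_fn(struct work_struct *work)
	{
		printk_safe_flush();	/* drain the per-cpu safe buffers */
	}
	static DECLARE_WORK(safe_flush_work, safe_flush_fn);

	/* instead of flushing via irq_work from the stuck context: */
	if (recursion_detected())
		schedule_work(&safe_flush_work);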

> I really hate delaying printks to another thread, unless we can
> guarantee that that thread is ready to go immediately (basically
> spinning on a run queue waiting to print). Because if the system is
> having issues (which is the main reason for printks to happen), there's
> no guarantee that a work queue or another thread will ever schedule,
> and the safe printk buffer never gets out to the consoles.
>
> I'd much rather have throttling when recursive printks are detected.
> Make it 100 lines to print if you want, but then throttle. Because
> once you have 100 lines or so, you will know that printks are causing
> printks, and you don't give a crap about the repeats. Allow one
> flushing of the printk safe buffers, and then if it happens again,
> throttle it.
>
> Both methods can lose important data. I believe that throttling
> recursive printks, after 100 prints or whatever, is less likely to
> lose important data, because printks caused by printks will just keep
> repeating the same data, and we don't care about repeats. But
> delaying the flushing could very well lose important data that caused
> a lockup.

Hmmm... what you're suggesting still seems more fragile - i.e., when
does that 100 count get reset? OOM prints quite a few lines, and if
we're resetting on each line, that two-orders-of-magnitude explosion
of messages can still be really, really bad. Issues like that suggest
that the root problem to handle here is avoiding locking up a context
in flushing for too long. Your approach tries to avoid causing that,
but it's a symptom which can be reached in many different ways.
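
To make the question concrete, a counter-based throttle along those
lines might look like this (hypothetical sketch, not real printk
code):

	static unsigned int recursive_lines;

	/* for each line emitted by a printk triggered from a printk: */
	if (++recursive_lines > 100)
		return;			/* throttle: drop the line */

	/*
	 * The open question: when does recursive_lines go back to
	 * zero?  Reset it too eagerly (per line, per flush) and an
	 * OOM report that prints dozens of lines can still fan out
	 * into an explosion of messages.
	 */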

Thanks.

--
tejun
