Subject: Re: [RFC][PATCHv3 2/5] printk: introduce printing kernel thread
On Wed 2017-05-31 16:22:33, Sergey Senozhatsky wrote:
> Hello Petr,
>
> On (05/29/17 11:29), Petr Mladek wrote:
> [..]
> > > +static inline bool console_offload_printing(void)
> > > +{
> > > +	/*
> > > +	 * A trivial emergency enforcement - give up on printk_kthread if
> > > +	 * we can't wake it up. This assumes that `atomic_print_limit' is
> > > +	 * reasonably and sufficiently large.
> > > +	 */
> > > +	if (lines_printed > 10 * (unsigned long long)atomic_print_limit &&
> > > +	    saved_csw == (current->nvcsw + current->nivcsw)) {
> > > +		printk_enforce_emergency = true;
> > > +		pr_crit("Declaring printk emergency mode.\n");
> > > +		return false;
> > > +	}
> >
> > It might work only if the woken printk kthread is scheduled in a predictable
> > time period.
>
> > You try to achieve this by setting real time priority
>
> yes, that's the reason.
>
> > (4th patch of this patchset). The question is how this would work:
>
> > First, the real time priority is questionable on its own. Logging
> > is important but the real time priority is dangerous. Any "flood"
> > of messages will starve all other processes with normal priority.
> > It is better than a softlockup but it might cause problems as well.
>
> so I try to minimize the negative impact of RT prio here. printk_kthread
> is not special any more. it's an auxiliary kthread that we sometimes
> wake_up. the thing is that printk_kthread also must offload at some
> point, basically the same `atomic_print_limit' limit applies to it as
> well.

You might call cond_resched() outside console_unlock(). But you have
to keep printk_kthread in a runnable state as long as there are pending
messages. Then the scheduler will always prefer this RT task over non-RT
tasks. Or am I wrong?

The more I think about it, the more I am persuaded that RT priority
is a no-no for printk_kthread.
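
To be concrete, roughly this loop shape is what I mean (just a sketch,
not the patch code; pending_output() is a made-up placeholder for
"there are unflushed messages"):

static int printk_kthread_func(void *unused)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (!pending_output())		/* made-up helper */
			schedule();
		__set_current_state(TASK_RUNNING);

		console_lock();
		console_unlock();	/* flush the pending messages */

		/*
		 * cond_resched() gives no relief to non-RT tasks: as long
		 * as this RT task stays runnable, it is always preferred
		 * over SCHED_NORMAL tasks.
		 */
		cond_resched();
	}
	return 0;
}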


> > Third, if the console_trylock() fails below, it is not guaranteed that
> > the other waiter will really continue printing. The console_trylock()
> > would fail because someone else called console_lock(), failed to get it,
> > and went into sleep. There is no guarantee that it will be woken
> > once again.
>
> please explain, why wouldn't it? we call up() after every line we print
> once the task exceeds `atomic_print_limit'. you mean there probably
> won't be any tasks in console_sem wait list? just for that case we have
> 'if (current == printk_kthread)' branch. `printk_kthread' can indeed be
> the one and only task to down() the console_sem.

I think that I was unclear and talked only about one possibility.

console_trylock() fails if anyone else already owns the lock
or if anyone else is in the wait list. I talked about the 2nd
possibility. But both ways look tricky:


1. We are talking about flood of messages. printk() does
console_trylock() and does not add itself into the wait list.

It means that another printk() could take over the console
output only when its console_trylock() happens to land in the small
window where console_unlock() releases the lock:

CPU0                                   CPU1

console_unlock()                       printk()
  // handle few lines                    // store message into log buffer
  up_console_sem()
                                         console_trylock()

  if (pending_output)
     console_trylock()

The printk() on CPU1 got the console_lock only because it called
console_trylock() at the right time. This looks too tricky
to me.

Note that the interleaving of the two processes is complicated
also because they both take logbuf_lock. It might affect the
chances as well (in a positive or negative way).


2. There are only a few functions that call console_lock() and
add themselves into the wait queue.

They will get the console_lock directly only when they race
the same way as the printk() described above.

The more likely scenario is that they will fail to get
console_lock because someone else is in console_unlock().
They will add themselves into the wait queue and go to
sleep.

Then up_console_sem() will set TASK_RUNNING for the waiting
process and add it to the run queue. The console_trylock()
in console_unlock() will fail because the lock is assigned
to the waiter. But the waiter is not running. It is just
in the run queue. There is no guarantee how soon it will
get scheduled. It might not happen at all if the system
is in a bad state.
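
For reference, this is the semaphore behavior I mean; a slightly
trimmed copy of kernel/locking/semaphore.c (written from memory, so
take it as a sketch): when there is a waiter, up() does not make the
count positive, it hands the semaphore straight to the sleeping
waiter, so any down_trylock() done before that waiter actually runs
will fail:

static noinline void __up(struct semaphore *sem)
{
	struct semaphore_waiter *waiter = list_first_entry(&sem->wait_list,
					struct semaphore_waiter, list);
	list_del(&waiter->list);
	waiter->up = true;
	wake_up_process(waiter->task);	/* runnable, but maybe not running */
}

void up(struct semaphore *sem)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&sem->lock, flags);
	if (likely(list_empty(&sem->wait_list)))
		sem->count++;		/* a later down_trylock() succeeds */
	else
		__up(sem);		/* ownership goes to the waiter */
	raw_spin_unlock_irqrestore(&sem->lock, flags);
}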


> > + slow => async =>
> >   + might be far behind
> >   + losing messages
> >   + showing only ones with a high log level
> >   + possible softlockup
> >   + not guaranteed when dying
>
> yeah. good summary. the previous async printk implementation...
> we officially hate it now.
>
> [..]
> > The patch tries to reduce the negative impact by detecting when
> > the negative impact is visible and switching back to the sync mode.
>
> um, sorry, no. or I simply don't follow. the patch keeps the existing sync
> mode and switches to async mode only when the problem is likely to show up.
> we don't work in async mode by default, only when a task reaches atomic_print_limit.

I talked about the situation where the async mode was active and we tried
to detect the emergency mode and forced the sync mode again.


> [..]
> > + I would suggest to detect flood of messages by a difference
> > between log_next_seq and console_seq. We could take into
> > account the number of messages in the buffer
> > (log_next_seq - log_first_seq)
>
> yeah, I think I already posted a patch that would do exactly this
> thing. the problem with printk flood is ... if you have at least 2 CPUs
> flooding logbuf ... other CPU is still enjoying ... just log_store()

Just thinking out loud. I wonder if we could somehow detect this and
throttle such processes in printk(). I mean, make them spin there
for a bit ;-) Well, I would leave this for later optimizations.
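
Something like this completely made-up check somewhere in the printk()
path is what I have in mind; the helper name, the threshold, and the
delay are arbitrary placeholders, not a proposal:

static void throttle_message_flood(void)
{
	u64 pending = log_next_seq - console_seq;

	/* Arbitrary threshold: the flooding task pays a small penalty. */
	if (pending > 2 * atomic_print_limit && !oops_in_progress)
		udelay(100);
}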


> > In addition, I think that a semi-sync-async mode might better
> > handle the sudden dead. What is it? I have already talked about
> > something similar last time.
>
> but this is what the patch does. by default we are in sync mode.
> assuming that `atomic_print_limit' is large enough it can take
> seconds before we offload printing to printk_kthread. so that
> semi-sync-async is what we are trying to have. am I missing something?

Perfect. I am really sorry, I was too deep in thinking and missed how
close the two proposals actually were.


> and that's why I'm *a bit* (just *really a bit*) less concerned about cases
> that we missed. I expect that `atomic_print_limit' usually will be around
> thousands of lines, if not tens of thousands, giving enough time to sync
> printing.

Well, the limit must be low enough to prevent a soft lockup. In fact, it
should be low enough to keep an almost normal scheduling rhythm. It does
not make sense to block "innocent" processes for seconds when they do
an important job and just by chance printed their rare message.

On the other hand, it should be big enough to allow flushing
some average log, e.g. a lockdep or sysrq report.

I might be too optimistic. Maybe it is not possible to tune it
this way.

> > Normal printk() should do something like:
> >
> > printk()
> >   vprintk_emit()
> >     log_store()
> >     if (!deferred)
> >       console_trylock()
> >       console_unlock()
> >         while (num_lines < atomic_print_limit)
> >           handle message
> >         up_console_sem()
> >         if (pending_messages)
> >           wakeup(printk_kthread)
>
> this is close to what we do.

I see it now.

> at the same time we have better guarantees.
> we don't just wakeup(printk_kthread) and leave. we wait for any other
> process to re-take the console_sem. until this happens we can't leave
> console_unlock().

And this is my problem. I am scared of the waiting. It is very hard
to predict, especially without RT priority. But it is tricky anyway,
see above.

It makes it very hard to tune the emergency situation detection.
If we enable the emergency mode too easily, the soft lockups will
be back again.

In addition, it adds another dependency on the scheduler behavior.
It is a can of worms, as explained by Jan Kara in another mail.

>
> > and printk_kthread should do:
> >
> > while () {
> >
> >   if (!pending_messages)
> >     sleep()
> >
> >   console_lock()
> >
> >   preempt_disable()
> >   console_unlock()
> >     while (pending_message && !need_resched)
> >       handle message
> >     up_console_sem()
> >   preempt_enable()
> > }
>
> hm. I don't want printk_kthread to be special. just because there are cases
> when printk_kthread won't be there. we had too many problems with relying on
> printk_kthread in all the corner cases. I want printk_kthread to be just one
> extra process that can do the printing for us.
> if we have X tasks sleeping in UNINTERRUPTIBLE on console_sem then
> we better use them; keeping them in UNINTERRUPTIBLE as long as
> printk_kthread has pending messages does no good.

I am a bit confused by this paragraph. What exactly makes the
printk_kthread special in my proposal? Is the main problem
that it checks need_resched instead of atomic_print_limit?

You are right that it should not block other waiters. And I forgot
to take other console_lock() callers into account.

My motivation was exactly the opposite. I wanted to reduce
the unnecessary up_console_sem()/down_console_sem() dance
and the slowdown of the console output caused by it. The printk_kthread
is special after all because its only duty is to flush the console.
Also, it is the only process where we know for sure that it does
the job in normal context. It looked to me that it made sense
to use its assigned time slot as much as possible.

Anyway, this is just an optimization. Let's handle printk_kthread
like any other process for now.


Summary:

Our two proposals are very close after all. I suggest making
the following changes in your patch:

+ Remove the waiting for another console_lock owner. It is
  too tricky.

+ Instead, try to reduce sleeping with console_lock taken.
  Remove that cond_resched() from console_unlock(). Maybe
  even call console_trylock()/console_unlock() with
  preemption disabled again. It will increase the chance
  that anyone else will continue handling the console;
  see the rough sketch after this list.

+ Keep the code to force the sync mode in known emergency
  situations (halt, suspend, ...).

This way we should be good in all situations:

+ sudden death, because we are in sync mode until atomic_print_limit
  is reached

+ flood of messages, because printk() does not sleep with
  console_lock taken. Either someone is already flushing the console
  or any printk() call can continue flushing it.

+ critical situations, because we force the sync mode
  explicitly

How does that sound, please?

Best Regards,
Petr

PS: I am sorry for the late reply. I was deep in other things.
I wanted to answer this with a clean head.
