Subject: Re: [PATCH] printk: Add best-effort printk() buffering.

On (05/09/17 20:41), Tetsuo Handa wrote:
[..]
> > what I meant was -- "can we sleep under printk_buffered_begin() or not".
> > printk-safe disables local IRQs. so what I propose is something like this
> >
> > printk-safe-enter //disable local IRQs, use per-CPU buffer
> > backtrace
> > printk-safe-exit //flush per-CPU buffer, enable local IRQs
> >
> > except that 'printk-safe-enter/exit' will have new names here, say
> > printk-buffered-begin/end, and, probably, handle flush differently.
>
> OK. Then the answer is that we are allowed to sleep after get_printk_buffer()
> if get_printk_buffer() was called from a schedulable context, because a
> different printk_buffer will be assigned when get_printk_buffer() is called
> from a non-schedulable context.
>
> >
> >
> > > > hm, 16 is rather random, it's too much for UP and probably not enough for
> > > > a 240-CPU system. for the time being there are 3 buffered-printk users
> > > > (as far as I can see), but who knows how many more will be added in the future.
> > > > each CPU can have overlapping printks from process, IRQ and NMI contexts.
> > > > for NMI we use printk-nmi buffers, so it's out of the list; but, in general,
> > > > *it seems* that we better depend on the number of CPUs the system has.
> > > > which, once again, returns us back to printk-safe...
> > > >
> > > > thoughts?
> > >
> > > I can make 16 a CONFIG_ option.
> >
> > but still, why use additional N buffers, when we already have per-CPU
> > buffers? what am I missing?
>
> Per-CPU buffers need to disable preemption by disabling local hard
> IRQs / soft IRQs. But printk_buffers need not disable preemption.
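
(just to be sure we are talking about the same thing, the aux-buffers
approach, as I understand it, is roughly the following. a sketch from my
side only: get_printk_buffer() is the name from your patch, everything
else below -- the buffer count, the cmpxchg-based claiming, the field
names, put_printk_buffer() -- is guessed for illustration.

#define NR_PRINTK_BUFFERS 16

struct printk_buffer {
        atomic_t in_use;        /* 0 == free, 1 == claimed */
        unsigned int used;      /* bytes accumulated so far */
        char buf[PAGE_SIZE];
};

static struct printk_buffer printk_buffers[NR_PRINTK_BUFFERS];

struct printk_buffer *get_printk_buffer(void)
{
        int i;

        /* no preemption/IRQ disabling here -- the caller may sleep */
        for (i = 0; i < NR_PRINTK_BUFFERS; i++) {
                if (atomic_cmpxchg(&printk_buffers[i].in_use, 0, 1) == 0)
                        return &printk_buffers[i];
        }
        return NULL;    /* all busy -- fall back to plain printk() */
}

void put_printk_buffer(struct printk_buffer *pb)
{
        printk("%s", pb->buf);  /* flush whatever was accumulated */
        pb->used = 0;
        atomic_set(&pb->in_use, 0);
}
)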

yes. ok. seems that I can't explain what I want.

my point is:

printk-buffered does not disable preemption and we can sleep under
printk-buffered-begin. fine. but why would you want to sleep there anyway?
you just want to print a backtrace and be done with it. and backtracing
does not sleep, afaiu, or at least it should not, because it must be
possible to dump_stack() from atomic context. so why have

printk-buffered keeps preemption and irqs enabled and uses one
of the aux buffers (if any).

instead of

printk-buffered starts an atomic section - it disables preemption
and local irqs, because it uses a per-CPU buffer (which is always,
and already, there).

?
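
to put it in (pseudo) code: a sketch only. the begin/end names are the
ones we already discussed, but the body is made up -- it simply mirrors
what printk-safe-enter/exit already do. printk_context and local_irq_save()
are real; the PRINTK_BUFFERED_CONTEXT_MASK bit, the per-CPU flags slot and
printk_buffered_flush_this_cpu() are placeholders.

static DEFINE_PER_CPU(unsigned long, printk_buffered_irq_flags);

void printk_buffered_begin(void)
{
        unsigned long flags;

        /* no IRQs and no preemption on this CPU from here on */
        local_irq_save(flags);
        this_cpu_write(printk_buffered_irq_flags, flags);
        /* mark the context so printk() goes to this CPU's buffer */
        this_cpu_or(printk_context, PRINTK_BUFFERED_CONTEXT_MASK);
}

void printk_buffered_end(void)
{
        this_cpu_and(printk_context, ~PRINTK_BUFFERED_CONTEXT_MASK);
        /* push whatever was buffered to the main log buffer */
        printk_buffered_flush_this_cpu();
        local_irq_restore(this_cpu_read(printk_buffered_irq_flags));
}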

[..]
> > hm, from a schedulable context you can do *something* like
> >
> > console_lock()
> > printk()
> > ...
> > printk()
> > console_unlock()
> >
> >
> > you won't be able to console_lock() until all pending messages are
> > flushed. since you are in a schedulable context, you can sleep on
> > console_sem in console_lock(). well, just saying.
>
> The console_lock()/console_unlock() pair is different from what I want.
>
> The console_lock()/console_unlock() pair blocks as long as somebody else
> is printk()ing. What I want is an API where
>
> the current thread waits for the N bytes it stored via printk() to be
> written to the console devices, but with some timeout and in a killable
> way, because waiting unconditionally forever is not good (e.g. the
> current thread is expected to bail out soon if it is OOM-killed while
> waiting for those N bytes to reach the console devices).

I assume you are talking here about a completely new API, not related to
the patch in question (because your patch does not do this). right?
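
fwiw, if I understand the request correctly, it would be something along
these lines. purely a sketch of an API that does not exist: the function
name is made up, console_seq and log_wait are printk internals used here
only for illustration, it assumes a wait_event_killable_timeout() helper,
and locking is ignored.

/* wait until the consoles have printed up to @target_seq -- the
 * sequence number of the last record this thread stored */
int printk_wait_on_console_flush(u64 target_seq, unsigned long timeout)
{
        long ret;

        ret = wait_event_killable_timeout(log_wait,
                                          console_seq >= target_seq,
                                          timeout);
        if (ret < 0)
                return ret;             /* fatal signal, e.g. OOM kill */
        if (ret == 0)
                return -ETIMEDOUT;      /* consoles did not catch up in time */
        return 0;
}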

-ss
