lkml.org 
[lkml]   [2012]   [Sep]   [20]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
Date
From
SubjectRe: [PATCH] pstore: avoid recursive spinlocks in the oops_in_progress case
On Thu, Sep 20, 2012 at 11:48:32PM +0000, Luck, Tony wrote:
> > True, but the lock is used to protect pstore->buf, I doubt that
> > any backend will actually want to grab it, no?
>
> The lock is doing double duty to protect the buffer, and the back-end driver.
>
> But even if we split it into two (one for the buffer, taken by pstore, and one
> internal to the backend to protect interaction with the f/w). Ifwe ignore the
> fact that we can't get the lock that protects the buffer means it is very likely
> that we corrupt the previous record that was being written by clobbering the
> buffer with the data for this new record.
>
> I'd prefer to maximize the chances that the earlier record gets written.

Sure, I applied the original patch.

Btw, do you expect that backends protect themselves from concurrent
->write calls, or pstore guarantees to protect backends?

Because the latter is not always possible, for example in tracing: we
won't able to grab locks at all (but not all backends can do tracing
anyway -- they must do things atomically).

Plus, sometimes having the global lock is not "efficient", backends
know better: they might have separate locks per message type.

And my plan was to get rid of the fact that backends touch pstore->buf
directly. Backends would always receive anonymous 'buf' pointer (we
already have write_buf callback that does exactly this), and thus it
would be backends' worry to protect against concurrency. In this
scheme, pstore's console code won't need to grab locks at all: we'll
just pass console string to the backend directly.

And backends, if they can't do writes atomically, will grab their
own locks.

Thanks,
Anton.


\
 
 \ /
  Last update: 2012-09-21 03:21    [W:0.136 / U:0.376 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site