Subject: Re: deferring __fput()
On Fri, Jun 22, 2012 at 08:44:58AM -0400, Mimi Zohar wrote:
> Al,
>
> I really appreciate all of the work that has gone into making __fput()
> lockless - making the syscalls use fget/fput_light(), the signal changes,
> and using Oleg's task_work_add() to defer processing.
>
> Currently, when unmapping a file after closing it, I'm no longer getting
> the mmap_sem/i_mutex lockdep report. This seems to be due to Tyler Hicks'
> commit 978d6d8 "vfs: Correctly set the dir i_mutex lockdep class", rather
> than the above changes.
>
> Although the mmap_sem/i_mutex lockdep report is gone, the issue with
> __fput() should probably be resolved before upstreaming IMA-appraisal. :)
> What else needs to be done to make __fput() lockless? How can I help?

The deadlock is still there. I'll resurrect the patch switching the final
fput() to task_work_add() and throw it into #for-next tomorrow; the main
question is what to do with fput() from kernel threads and whether we want
to allow it from interrupts - a solution for the former (definitely needed)
would cover the latter as well. I suspect we want to go with "just use
schedule_work()" for those cases; among other things, that would allow us
to simplify aio.c quite a bit.
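
For the normal case - the last fput() done by a userland process - the
shape of that patch is roughly the sketch below. The f_twork field in
struct file is made up for illustration, the task_work calls follow the
callback_head-based conventions mainline settled on (the interface was
still in flux at this point), and the failure case of task_work_add()
is elided:

	/* runs from task_work before the task returns to userspace */
	static void fput_deferred(struct callback_head *work)
	{
		struct file *file = container_of(work, struct file, f_twork);

		__fput(file);	/* plain process context, no locks held */
	}

	static void schedule_final_fput(struct file *file)
	{
		init_task_work(&file->f_twork, fput_deferred);
		task_work_add(current, &file->f_twork, TWA_RESUME);
	}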

The somewhat curious part is what to do with daemonize() - by the time we
do (potentially) piles of fput() in there, we *are* a kernel thread and
won't be doing any task_work handling anymore. So it's either "do the
final fput() ourselves" or "have them done asynchronously". OTOH,
daemonize() probably needs to be killed anyway - it had all of its
mainline callers removed, only to grow one more in drivers/staging. That
caller looks like it ought to be switched to kthread_create(), along with
the other callers of kernel_thread() in there...
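
For reference, that conversion usually looks like this (my_thread_fn and
my_data are placeholders, not the actual staging code):

	struct task_struct *tsk;

	/*
	 * before: pid = kernel_thread(my_thread_fn, my_data, 0); with a
	 * daemonize("foo") call at the top of the thread function
	 */
	tsk = kthread_run(my_thread_fn, my_data, "foo");
	if (IS_ERR(tsk))
		return PTR_ERR(tsk);

kthread_run() hands back a proper kernel thread to begin with, so the
daemonize() call in the thread function simply goes away.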

What I have in mind is something like this:

	if (unlikely(in_atomic() || current->flags & PF_KTHREAD))
		move the file to a global list and use schedule_work()
		to have it dealt with;
	else
		move it to a per-task list and do task_work_add() if that
		list was empty (i.e. if we hadn't already scheduled its
		emptying).
The latter is completely thread-synchronous, so we shouldn't need any
locking whatsoever. For the former... I'd probably go with a single list
protected by spin_lock_irq(), keeping open the possibility of switching
to per-CPU lists if we find a load that gets serious contention on that
one. In any case, the worker will start by grabbing the entire list
contents, emptying the list and then killing the suckers off one by one.
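Putting both branches together, a rough sketch - delayed_fput_list,
f_list and the per-task pending_fput/fput_twork fields are all made-up
names, task_work_add() again follows the calling conventions mainline
settled on, and its failure path is elided:

	static LIST_HEAD(delayed_fput_list);	/* atomic/kthread case */
	static DEFINE_SPINLOCK(delayed_fput_lock);

	static void delayed_fput_worker(struct work_struct *unused)
	{
		LIST_HEAD(list);

		/* grab the whole list in one go... */
		spin_lock_irq(&delayed_fput_lock);
		list_splice_init(&delayed_fput_list, &list);
		spin_unlock_irq(&delayed_fput_lock);

		/* ...then kill the suckers off one by one */
		while (!list_empty(&list)) {
			struct file *file = list_first_entry(&list,
						struct file, f_list);

			list_del_init(&file->f_list);
			__fput(file);
		}
	}
	static DECLARE_WORK(delayed_fput_work, delayed_fput_worker);

	void fput(struct file *file)
	{
		if (!atomic_long_dec_and_test(&file->f_count))
			return;

		if (unlikely(in_atomic() || (current->flags & PF_KTHREAD))) {
			unsigned long flags;

			/* irqsave: this may be the fput()-from-irq case */
			spin_lock_irqsave(&delayed_fput_lock, flags);
			list_add(&file->f_list, &delayed_fput_list);
			spin_unlock_irqrestore(&delayed_fput_lock, flags);
			schedule_work(&delayed_fput_work);
		} else {
			/* current's own list: no locking needed */
			bool first = list_empty(&current->pending_fput);

			list_add(&file->f_list, &current->pending_fput);
			/* fput_twork's callback (not shown) drains the list */
			if (first)
				task_work_add(current, &current->fput_twork,
					      TWA_RESUME);
		}
	}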

