Subject: Re: [PATCH] drm: Fix lock order reversal between mmap_sem and struct_mutex.

On Thu, 2009-02-19 at 18:04 -0800, Eric Anholt wrote:
> On Thu, 2009-02-19 at 23:26 +0100, Peter Zijlstra wrote:
> > On Thu, 2009-02-19 at 22:02 +0100, Thomas Hellstrom wrote:
> > >
> > > It looks to me like the driver preferred locking order is
> > >
> > > object_mutex (which happens to be the device global struct_mutex)
> > > mmap_sem
> > > offset_mutex.
> > >
> > > So if one could avoid using the struct_mutex for object bookkeeping
> > > (using a separate lock instead), then vm_open() and vm_close() would
> > > adhere to that locking order as well, simply by not taking the
> > > struct_mutex at all.
> > >
> > > So only fault() remains, in which that locking order is reversed.
> > > Personally I think the trylock->reschedule->retry method with proper
> > > commenting is a good solution. It will be the _only_ place where locking
> > > order is reversed and it is done in a deadlock-safe manner. Note that
> > > fault() doesn't really fail, but requests a retry from user-space with
> > > rescheduling to give the process holding the struct_mutex time to
> > > release it.
> >
> > It doesn't actually reschedule -- need_resched() only checks whether the
> > current task has been marked to be scheduled away. Furthermore,
> > yield-based locking sucks chunks.

Imagine what would happen if your faulting task were the highest-priority RT
task in the system: you'd end up with a live-lock.
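
For reference, the trylock pattern under discussion looks roughly like this
(a minimal sketch against the current fault() API; the handler name, object
lookup and PFN insertion are illustrative, not the exact patch):

static int drm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct drm_gem_object *obj = vma->vm_private_data;
	struct drm_device *dev = obj->dev;

	if (!mutex_trylock(&dev->struct_mutex)) {
		/*
		 * Contended: flag ourselves for rescheduling and ask the
		 * VM to re-run the fault.  Note that set_need_resched()
		 * only marks the current task; nothing guarantees the
		 * struct_mutex holder actually runs before we retry.
		 */
		set_need_resched();
		return VM_FAULT_NOPAGE;
	}

	/* ... bind the object into the GTT and insert the PFN ... */

	mutex_unlock(&dev->struct_mutex);
	return VM_FAULT_NOPAGE;
}

Returning VM_FAULT_NOPAGE without installing a PTE just makes the VM re-run
the faulting instruction, so the "retry" is an unbounded loop around the
fault handler.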

> > What's so very difficult about pulling the copy_*_user() out from under
> > the locks?
>
> That we're expecting the data movement to occur while holding device
> state in place. For example, we write data through the GTT most of the
> time, so we:
>
> lock struct_mutex
> pin the object to the GTT
> flush caches as needed
> copy_from_user
> unpin object
> unlock struct_mutex

So you cannot drop the lock once you've pinned the dst object?
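I.e. something like this (a sketch reusing the steps from your list; the pin
is what holds the device state in place while the mutex is dropped):

	mutex_lock(&dev->struct_mutex);
	/* pin the object to the GTT, flush caches as needed */
	mutex_unlock(&dev->struct_mutex);

	/* copy_from_user() into the GTT mapping -- this may fault and
	 * take mmap_sem, but no mutex is held; the pin alone keeps the
	 * object bound. */

	mutex_lock(&dev->struct_mutex);
	/* unpin the object */
	mutex_unlock(&dev->struct_mutex);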

> If I'm to pull the copy_from_user out, that means I have to:
>
> alloc temporary storage
> for each block of temp storage size:
>     copy_from_user
>     lock struct_mutex
>     pin the object to the GTT
>     flush caches as needed
>     memcpy
>     unpin object
>     unlock struct_mutex
>
> At the point where we're introducing a third copy of the user's data in
> our hottest path, we should probably ditch the pwrite path entirely and
> move to user mapping of the objects for performance. Requiring user
> mapping (which has significant overhead) cuts the likelihood of moving
> from user-space object caching to kernel object caching in the future,
> which has the potential to save steaming piles of memory.

Or you could use get_user_pages() to fault in and pin the user pages, then
do pagefault_disable() and use __copy_from_user_inatomic() or such, and
release the pages again afterwards.
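
Something like this, maybe (an untested sketch; dev, user_data, size and the
GTT destination gtt_dst are stand-ins for whatever the i915 pwrite path
actually has at hand):

	struct page **pages;
	unsigned long first = (unsigned long)user_data >> PAGE_SHIFT;
	unsigned long last = ((unsigned long)user_data + size - 1) >> PAGE_SHIFT;
	int i, got, npages = last - first + 1;
	int ret = -EFAULT;

	pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* Fault in and pin the source pages -- no struct_mutex held,
	 * so taking mmap_sem here preserves the lock order. */
	down_read(&current->mm->mmap_sem);
	got = get_user_pages(current, current->mm, first << PAGE_SHIFT,
			     npages, 0 /* write */, 0 /* force */,
			     pages, NULL);
	up_read(&current->mm->mmap_sem);

	if (got == npages) {
		mutex_lock(&dev->struct_mutex);
		/* pin the object to the GTT, flush caches as needed */

		/* The source pages are pinned and were just faulted in,
		 * so this should not fault; if the atomic copy fails
		 * anyway, fall back to copying via kmap() of pages[]. */
		pagefault_disable();
		ret = __copy_from_user_inatomic(gtt_dst, user_data, size) ?
			-EFAULT : 0;
		pagefault_enable();

		/* unpin the object */
		mutex_unlock(&dev->struct_mutex);
	}

	for (i = 0; i < got; i++)
		page_cache_release(pages[i]);
	kfree(pages);

That keeps the copy under struct_mutex without ever nesting mmap_sem
inside it.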
