Date: Fri, 11 Jun 2010
From: Florian Mickler
Subject: Re: [linux-pm] [PATCH v4] pm_qos: make update_request non blocking

On Fri, 11 Jun 2010 09:25:52 -0500
James Bottomley <James.Bottomley@suse.de> wrote:

> On Thu, 2010-06-10 at 16:41 +0200, Florian Mickler wrote:
> > > > So the notified value is always the latest or there is another
> > > > notification underway.
> > >
> > > Well, no ... it's a race, and like all good races the winner is
> > > non-deterministic.
> >
> > Can you point out where I'm wrong?
> >
> > U1. update_request gets called
> > U2. new extreme value gets calculated under spinlock
> > U3. notify gets queued if its WORK_PENDING_BIT is not set.
> >
> > run_workqueue() does the following:
> > R1. clears the WORK_PENDING_BIT
> > R2. calls update_notify()
> > R3. reads the current extreme value
> > R4. notification gets called with that value
> >
> >
> > If another update_request() reaches schedule_work() before
> > run_workqueue() has cleared the WORK_PENDING_BIT, the work will not
> > be requeued, but R3 has not executed yet, so the notifiers will
> > still get the latest value.
>
> So the race now only causes lost older notifications ... as long as the
> consumers are OK with that (it is an API change) then this should work.
> You're still not taking advantage of the user context passed in, though,
> so this does needlessly delay notifications for that case.

Right. We can use execute_in_process_context().
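
For concreteness, a rough sketch of what I have in mind (not the
actual patch; the pm_qos names and locking here are simplified
stand-ins): the new extreme value is recomputed under the spinlock,
and execute_in_process_context() either runs the notification right
away (when called from process context) or schedules it, in which
case a work item whose pending bit is already set is not queued a
second time and simply picks up the latest value when it runs.

#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/notifier.h>

static DEFINE_SPINLOCK(pm_qos_lock);
static BLOCKING_NOTIFIER_HEAD(pm_qos_notifiers);
static s32 extreme_value;		/* current target value */
static struct execute_work notify_ew;

/* R2-R4: runs after the pending bit is cleared (R1) and reads the
 * *current* extreme value, so a skipped re-queue still reports the
 * latest value to the notifier chain. */
static void update_notify(struct work_struct *work)
{
	unsigned long flags;
	s32 value;

	spin_lock_irqsave(&pm_qos_lock, flags);
	value = extreme_value;
	spin_unlock_irqrestore(&pm_qos_lock, flags);

	blocking_notifier_call_chain(&pm_qos_notifiers, value, NULL);
}

/* U1-U3: recompute the extreme value under the lock, then notify.
 * execute_in_process_context() calls update_notify() directly when
 * we are not in interrupt context; otherwise it schedules the work. */
void pm_qos_update_request(s32 new_value)
{
	unsigned long flags;

	spin_lock_irqsave(&pm_qos_lock, flags);
	extreme_value = new_value;	/* stand-in for the real recalculation */
	spin_unlock_irqrestore(&pm_qos_lock, flags);

	execute_in_process_context(update_notify, &notify_ew);
}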

> Actually, pm_qos_remove now needs a flush_scheduled_work() since you
> don't want to return until the list is clear (since the next action
> may be to free the object).

Yes. Good point, will fix.
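
The remove path would then look roughly like this (again just a
sketch; the real function takes the request object to drop):

/* The caller may free the request as soon as we return, so wait
 * for a possibly pending notification to finish first. */
void pm_qos_remove_request(void)
{
	unsigned long flags;

	spin_lock_irqsave(&pm_qos_lock, flags);
	/* drop the request and recompute the extreme value */
	spin_unlock_irqrestore(&pm_qos_lock, flags);

	flush_scheduled_work();
}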

> James
>

Cheers,
Flo


