    Subject: Re: [PATCH v4 1/5] sched/deadline: Refer to cpudl.elements atomically
    On Mon, May 15, 2017 at 09:36:29AM +0100, Juri Lelli wrote:
    > Hi,
    >
    > On 12/05/17 10:25, Steven Rostedt wrote:
    > > On Fri, 12 May 2017 14:48:45 +0900
    > > Byungchul Park <byungchul.park@lge.com> wrote:
    > >
    > > > cpudl.elements is shared data that should be protected with a spin lock.
    > > > Without it, the code would be insane.
    > >
    > > And how much contention will this add? Spin locks in the scheduler code
    > > that are shared among a domain can cause huge latency. This was why I
    > > worked hard not to add any in the cpupri code.
    > >
    > >
    > > >
    > > > The current cpudl_find() has problems like:
    > > >
    > > > 1. cpudl.elements[0].cpu might not match with cpudl.elements[0].dl.
    > > > 2. cpudl.elements[0].dl (u64) might not be read atomically.
    > > > 3. Two cpudl_maximum()s might return different values.
    > > > 4. It's just insane.
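    To be concrete about 1-3: the read side does two separate loads of
    elements[0] without holding any lock. The toy program below only mimics
    that pattern in userspace - it is not the kernel code, and the names are
    just illustrative - and counts how often the two loads give back a
    (cpu, dl) pair that never existed together:

        /* racy-read.c - userspace toy, NOT the kernel code.
         * A writer keeps rewriting a (cpu, dl) pair the way the update path
         * rewrites elements[0]; the main thread loads the two fields
         * separately, with no lock, the way the find path does.
         * Build with: gcc -O2 -pthread racy-read.c
         */
        #include <pthread.h>
        #include <stdint.h>
        #include <stdio.h>

        struct item {
            uint64_t dl;
            int cpu;
        };

        /* stands in for cpudl.elements[0]; volatile only to keep the toy honest */
        static volatile struct item top;
        static volatile int stop;

        static void *writer(void *arg)
        {
            uint64_t i = 0;

            (void)arg;
            while (!stop) {
                /* the writer always keeps the pair consistent: cpu == dl % 8 */
                top.cpu = (int)(i & 7);
                top.dl = i;
                i++;
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t;
            long mismatch = 0, i;

            pthread_create(&t, NULL, writer, NULL);

            for (i = 0; i < 50000000L; i++) {
                int cpu = top.cpu;      /* load #1, like cpudl_maximum() */
                uint64_t dl = top.dl;   /* load #2, a separate access */

                if ((int)(dl & 7) != cpu)
                    mismatch++;         /* a pair that never existed */
            }

            stop = 1;
            pthread_join(t, NULL);
            printf("%ld of %ld lockless reads saw an impossible pair\n",
                   mismatch, i);
            return 0;
        }

    (And on a 32-bit kernel, load #2 itself is not guaranteed to be a single
    atomic load, which is point 2 above.)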
    > >
    > > And lockless algorithms usually are insane. But locks come with a huge
    > > cost. What happens when we have 32-core domains? This can cause
    > > tremendous contention and make the entire cpu priority for deadlines
    > > useless. Might as well rip out the code.
    > >
    >
    > Right. So, rationale for not taking any lock in the find() path (at the
    > risk of getting bogus values) is that we don't want to pay too much in
    > terms of contention, when also considering the fact that
    > find_lock_later_rq() might then release the rq lock, possibly making the
    > search useless
    > (if things change in the meantime anyway). The update path is instead
    > guarded by a lock, to ensure consistency.
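    (For reference, the asymmetry described above looks roughly like this in
    my reading of kernel/sched/cpudeadline.c - a simplified sketch, not the
    exact code:

        /* update side, e.g. cpudl_set(): serialized against other updaters */
        raw_spin_lock_irqsave(&cp->lock, flags);
        /* rewrite elements[] and re-heapify so elements[0] stays the max */
        raw_spin_unlock_irqrestore(&cp->lock, flags);

        /* find side, cpudl_find(): no lock taken at all */
        best_cpu = cp->elements[0].cpu;                 /* cpudl_maximum() */
        if (cpumask_test_cpu(best_cpu, &p->cpus_allowed) &&
            dl_time_before(p->dl.deadline, cp->elements[0].dl))
                /* best_cpu looks good, use it */;

    so the lock only keeps updaters consistent with each other; a reader can
    slip in anywhere between the two loads on the find side.)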
    >
    > Experiments on reasonably big machines (48 cores IIRC) showed that the
    > approach was "good enough", so we looked somewhere else to improve
    > things (as there are many things to improve :). This of course doesn't
    > prevent us from looking at this again now and seeing if we need to do
    > something about it.
    >
    > Having numbers about the introduced overhead and the wrong decisions
    > caused by the lockless find() path would help a lot in understanding
    > what can (and should) be done.

    I see what you're saying. Agreed.

    Hm.. Before that, what do you think about my suggestions in my reply to
    Steven?

    >
    > Thanks!
    >
    > - Juri
