    Subject: Re: Real-Time Preemption and RCU
    On Fri, 18 Mar 2005, Ingo Molnar wrote:

    > * Ingo Molnar <> wrote:
    > > [...] How about something like:
    > >
    > > void
    > > rcu_read_lock(void)
    > > {
    > >         preempt_disable();
    > >         if (current->rcu_read_lock_nesting++ == 0) {
    > >                 current->rcu_read_lock_ptr =
    > >                         &__get_cpu_var(rcu_data).lock;
    > >                 preempt_enable();
    > >                 read_lock(current->rcu_read_lock_ptr);
    > >         } else
    > >                 preempt_enable();
    > > }
    > >
    > > this would still make it 'statistically scalable' - but is it correct?
    > thinking some more about it, i believe it's correct, because it picks
    > one particular CPU's lock and correctly releases that lock.
    > (read_unlock() is atomic even on PREEMPT_RT, so rcu_read_unlock() is
    > fine.)
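
    (For reference, the matching unlock path implied by the quoted code would
    presumably be something like the sketch below - not taken from the patch,
    just the obvious counterpart using the same task_struct fields. Only the
    outermost rcu_read_unlock() releases the rwlock, and it releases the one
    recorded in rcu_read_lock_ptr, i.e. the lock of the CPU the outermost
    rcu_read_lock() ran on, even if the task has migrated since then.)

    void
    rcu_read_unlock(void)
    {
            if (--current->rcu_read_lock_nesting == 0)
                    read_unlock(current->rcu_read_lock_ptr);
    }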

    Why should there only be one RCU reader per CPU at any given instant?
    Even on a real-time UP system it would be very helpful if RCU regions
    could be entered by several tasks at once. It would perform better, both
    wrt. latencies and throughput:
    With the above implementation a high priority task entering an RCU region
    has to boost the current RCU reader, task-switch to it until it finishes,
    and then make yet another task switch to get back to the high priority
    task. With an RCU implementation which can take n RCU readers per CPU
    there is no such problem.

    Also, having all tasks serialize on one lock (per CPU) really destroys
    the real-time properties: the latency of anything which uses RCU becomes
    the worst-case latency of anything done under the RCU lock.

    When I looked briefly at it in the fall, the following solution came to
    mind: have an RCU-reader count, rcu_read_count, for each CPU. Increment
    it when entering an RCU read region and decrement it when leaving. When
    it is 0, RCU cleanups are allowed - a perfect quiescent state - and
    rcu_qsctr_inc() can be called at that point. Or call it in schedule() as
    now, just with an if (rcu_read_count == 0) around it.
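
    Just to make that concrete, a minimal sketch of the quiescent-state side
    (assuming a per-CPU counter named rcu_read_count as above; this is not an
    actual patch):

    DEFINE_PER_CPU(int, rcu_read_count);  /* readers currently on this CPU */

    /* in schedule(), where rcu_qsctr_inc() is called today: */
    if (per_cpu(rcu_read_count, task_cpu(prev)) == 0)
            rcu_qsctr_inc(task_cpu(prev));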

    I don't think I understand the current code. But if it works now with
    preempt_disable()/preempt_enable() around all the read regions, it ought
    to work with the rcu_read_count increment/decrement around the same
    regions and the above check for rcu_read_count == 0 in or around
    rcu_qsctr_inc() as well.
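
    I.e. something along these lines (again just a sketch; the
    preempt_disable()/preempt_enable() pair is only there so the update hits
    the local CPU's counter):

    void rcu_read_lock(void)
    {
            preempt_disable();
            __get_cpu_var(rcu_read_count)++;  /* this CPU now has a reader */
            preempt_enable();
    }

    void rcu_read_unlock(void)
    {
            preempt_disable();
            __get_cpu_var(rcu_read_count)--;  /* 0 => this CPU is quiescent */
            preempt_enable();
    }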

    It might take a long time before the rcu-batches are actually run,
    though, but that is a different story, which can be improved upon. An
    improvement would be to boost the non-RT tasks entering an rcu-read
    region to the lowest RT priority (see the sketch below). That way there
    can't be a lot of low priority tasks hanging around, keeping
    rcu_read_count non-zero for a long period of time, since while in the
    RCU region these tasks can only be preempted by RT tasks.
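
    Roughly like this - purely hypothetical: the rcu_saved_* fields do not
    exist anywhere, it assumes something like the in-kernel
    sched_setscheduler() can be used at this point, and a real version would
    have to restore the old policy/priority in rcu_read_unlock():

    void rcu_read_lock(void)
    {
            if (!rt_task(current)) {
                    struct sched_param param = { .sched_priority = 1 };

                    /* hypothetical fields, restored by rcu_read_unlock() */
                    current->rcu_saved_policy = current->policy;
                    current->rcu_saved_rt_priority = current->rt_priority;
                    sched_setscheduler(current, SCHED_FIFO, &param);
            }
            preempt_disable();
            __get_cpu_var(rcu_read_count)++;
            preempt_enable();
    }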

    > Ingo

