    Subject: Re: [PATCH 11/11] x86,rcu: use percpu rcu_preempt_depth


    On 2019/11/1 9:13 PM, Peter Zijlstra wrote:
    > On Fri, Nov 01, 2019 at 05:58:16AM -0700, Paul E. McKenney wrote:
    >> On Thu, Oct 31, 2019 at 10:08:06AM +0000, Lai Jiangshan wrote:
    >>> +/* We mask the RCU_NEED_SPECIAL bit so that it returns the real depth */
    >>> +static __always_inline int rcu_preempt_depth(void)
    >>> +{
    >>> + return raw_cpu_read_4(__rcu_preempt_depth) & ~RCU_NEED_SPECIAL;
    >>
    >> Why not raw_cpu_generic_read()?
    >>
    >> OK, OK, I get that raw_cpu_read_4() translates directly into an "mov"
    >> instruction on x86, but given that x86 percpu_from_op() is able to
    >> adjust based on operand size, why doesn't something like raw_cpu_read()
    >> also have an x86-specific definition that adjusts based on operand size?
    >
    > The reason for preempt.h was header recursion hell.

    Oh, I didn't notice. Maybe we can use raw_cpu_generic_read
    for RCU here; I will give it a try.

    Thanks
    Lai.
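
    [ For reference, a minimal sketch of what that could look like, assuming
      the __rcu_preempt_depth percpu variable and RCU_NEED_SPECIAL flag from
      the patch; raw_cpu_generic_read() is the size-agnostic accessor from
      include/asm-generic/percpu.h. Sketch only, not the posted code:

	/* Read the depth via the generic percpu accessor and mask off
	 * the special-work flag, instead of the x86-only raw_cpu_read_4().
	 */
	static __always_inline int rcu_preempt_depth(void)
	{
		return raw_cpu_generic_read(__rcu_preempt_depth) & ~RCU_NEED_SPECIAL;
	}
    ]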

    >
    >>> +}
    >>> +
    >>> +static __always_inline void rcu_preempt_depth_set(int pc)
    >>> +{
    >>> + int old, new;
    >>> +
    >>> + do {
    >>> + old = raw_cpu_read_4(__rcu_preempt_depth);
    >>> + new = (old & RCU_NEED_SPECIAL) |
    >>> + (pc & ~RCU_NEED_SPECIAL);
    >>> + } while (raw_cpu_cmpxchg_4(__rcu_preempt_depth, old, new) != old);
    >>
    >> Ummm...
    >>
    >> OK, as you know, I have long wanted _rcu_read_lock() to be inlineable.
    >> But are you -sure- that an x86 cmpxchg is faster than a function call
    >> and return? I have strong doubts on that score.
    >
    > This is a regular CMPXCHG instruction, not a LOCK-prefixed one, and that
    > should make all the difference.
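
    [ For illustration, not from the patch: the percpu cmpxchg acts on a
      CPU-local variable, so x86 can emit a plain CMPXCHG on %gs-relative
      memory, whereas a cross-CPU atomic needs the LOCK prefix. Roughly
      (instruction mnemonics illustrative, some_shared_word hypothetical):

	raw_cpu_cmpxchg_4(__rcu_preempt_depth, old, new);
		/* -> cmpxchg %esi, %gs:__rcu_preempt_depth */

	cmpxchg(&some_shared_word, old, new);
		/* -> lock cmpxchg %esi, (%rdi) */

      Only the latter pays the LOCK-prefix serialization cost. ]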
    >
    >> Plus multiplying the x86-specific code by 26 doesn't look good.
    >>
    >> And the RCU read-side nesting depth really is a per-task thing. Copying
    >> it to and from the task at context-switch time might make sense if we
    >> had a serious optimization, but it does not appear that we do.
    >>
    >> Your original patch some years back, ill-received though it was at the
    >> time, is looking rather good by comparison. Plus it did not require
    >> architecture-specific code!
    >
    > Right, so the per-cpu preempt_count code relies on the preempt_count
    > being invariant over context switches. That means we never have to
    > save/restore the thing.
    >
    > For (preemptible) rcu, this is 'obviously' not the case.
    >
    > That said, I've not looked over this patch series, I only got 1 actual
    > patch, not the whole series, and I've not had time to go dig out the
    > rest..
    >
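
    [ Sketch only, not from the series: the context-switch save/restore that
      Paul and Peter refer to above could look roughly like the following,
      assuming a per-task rcu_read_lock_nesting field plus the percpu helpers
      quoted from the patch (function name hypothetical):

	static inline void rcu_preempt_depth_switch(struct task_struct *prev,
						    struct task_struct *next)
	{
		/* Save the outgoing task's nesting depth from the percpu counter... */
		prev->rcu_read_lock_nesting = rcu_preempt_depth();
		/* ...and install the incoming task's depth. */
		rcu_preempt_depth_set(next->rcu_read_lock_nesting);
	}

      This is the extra per-switch work that the percpu preempt_count avoids,
      since preempt_count is invariant across context switches. ]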
