    Subject: Re: RFC: call_rcu_outstanding (was Re: WARNING in __mmdrop)
    On Mon, Jul 22, 2019 at 09:25:51AM -0700, Paul E. McKenney wrote:
    > On Mon, Jul 22, 2019 at 12:13:40PM -0400, Michael S. Tsirkin wrote:
    > > On Mon, Jul 22, 2019 at 08:55:34AM -0700, Paul E. McKenney wrote:
    > > > On Mon, Jul 22, 2019 at 11:47:24AM -0400, Michael S. Tsirkin wrote:
    > > > > On Mon, Jul 22, 2019 at 11:14:39AM -0400, Joel Fernandes wrote:
    > > > > > [snip]
    > > > > > > > Would it make sense to have call_rcu() check to see if there are many
    > > > > > > > outstanding requests on this CPU and if so process them before returning?
    > > > > > > > That would ensure that frequent callers usually ended up doing their
    > > > > > > > own processing.
    > > > > >
    > > > > > Other than what Paul already mentioned about deadlocks, I am not sure if this
    > > > > > would even work for all cases since the callback still has to wait for a grace
    > > > > > period.
    > > > > >
    > > > > > So, if the number of outstanding requests is higher than a certain amount,
    > > > > > then for some RCU configurations you *still* have to wait for the grace
    > > > > > period duration and cannot just execute the callback in-line. Did I miss
    > > > > > something?
    > > > > >
    > > > > > Can waiting in-line for a grace period duration be tolerated in the vhost case?
    > > > > >
    > > > > > thanks,
    > > > > >
    > > > > > - Joel
    > > > >
    > > > > No, but it has many other ways to recover (try again later, drop a
    > > > > packet, use a slower copy to/from user).
    > > >
    > > > True enough! And your idea of taking recovery action based on the number
    > > > of callbacks seems like a good one while we are getting RCU's callback
    > > > scheduling improved.
    > > >
    > > > By the way, was this a real problem that you could make happen on real
    > > > hardware?
    > > >
    > > > If not, I would suggest just letting RCU get improved over
    > > > the next couple of releases.
    > >
    > > So basically use kfree_rcu but add a comment saying e.g. "WARNING:
    > > in the future callers of kfree_rcu might need to check that
    > > not too many callbacks get queued. In that case, we can
    > > disable the optimization, or recover in some other way.
    > > Watch this space."
    >
    > That sounds fair.
    >
    > > > If it is something that you actually made happen, please let me know
    > > > what (if anything) you need from me for your callback-counting EBUSY
    > > > scheme.
    > > >
    > > > Thanx, Paul
    > >
    > > If you mean kfree_rcu causing OOM then no, it's all theoretical.
    > > If you mean synchronize_rcu stalling to the point where the guest will
    > > oops, then yes, that's not too hard to trigger.
    >
    > Is synchronize_rcu() being stalled by the userspace loop that is invoking
    > your ioctl that does kfree_rcu()? Or instead by the resulting callback
    > invocation?
    >
    > Thanx, Paul

    Sorry, let me clarify. We currently call synchronize_rcu() from a loop
    driven by userspace. I have a patch replacing that with kfree_rcu().
    This isn't the first time synchronize_rcu() has stalled a VM for a long
    while, so I didn't investigate further.
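
    For reference, the general pattern looks roughly like the sketch below.
    struct foo and the free_foo_*() helpers are made-up names for
    illustration; this is not the actual vhost patch.

    struct foo {
            struct rcu_head rcu;
            /* ... payload ... */
    };

    /* Current pattern: every call blocks for a full grace period, so an
     * ioctl loop driven from userspace can stall the VM for a long time. */
    static void free_foo_sync(struct foo *p)
    {
            synchronize_rcu();
            kfree(p);
    }

    /* Patched pattern: queue the object and return immediately; it is
     * freed only after a grace period has elapsed. */
    static void free_foo_deferred(struct foo *p)
    {
            /*
             * WARNING: in the future callers of kfree_rcu might need to
             * check that not too many callbacks get queued. In that case,
             * we can disable the optimization, or recover in some other
             * way. Watch this space.
             */
            kfree_rcu(p, rcu);
    }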
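
    And the caller-side recovery discussed above (the call_rcu_outstanding
    idea) might look like the sketch below. call_rcu_outstanding() stands
    in for the proposed helper, assumed here to return the number of
    callbacks queued on this CPU; it is not an existing API, and the
    threshold is arbitrary.

    #define FOO_MAX_OUTSTANDING 1000        /* arbitrary threshold */

    static int free_foo_checked(struct foo *p)
    {
            /* Too many callbacks already queued on this CPU: let the
             * caller retry later, drop the packet, or take a slow path. */
            if (call_rcu_outstanding() > FOO_MAX_OUTSTANDING)
                    return -EBUSY;

            kfree_rcu(p, rcu);
            return 0;
    }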

    --
    MST
