    Subject: Re: [PATCH v2] kvm/x86: Inform RCU of quiescent state when entering guest mode
    On Wed, Jul 11, 2018 at 11:11:19PM +0200, Christian Borntraeger wrote:
    >
    >
    > On 07/11/2018 10:27 PM, Paul E. McKenney wrote:
    > > On Wed, Jul 11, 2018 at 08:39:36PM +0200, Christian Borntraeger wrote:
    > >>
    > >>
    > >> On 07/11/2018 08:36 PM, Paul E. McKenney wrote:
    > >>> On Wed, Jul 11, 2018 at 11:20:53AM -0700, Paul E. McKenney wrote:
    > >>>> On Wed, Jul 11, 2018 at 07:01:01PM +0100, David Woodhouse wrote:
    > >>>>> From: David Woodhouse <dwmw@amazon.co.uk>
    > >>>>>
    > >>>>> RCU can spend long periods of time waiting for a CPU which is actually in
    > >>>>> KVM guest mode, entirely pointlessly. Treat it like the idle and userspace
    > >>>>> modes, and don't wait for it.
    > >>>>>
    > >>>>> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    > >>>>
    > >>>> And idiot here forgot about some of the debugging code in RCU's dyntick-idle
    > >>>> code. I will reply with a fixed patch.
    > >>>>
    > >>>> The code below works just fine as long as you don't enable CONFIG_RCU_EQS_DEBUG,
    > >>>> so it should be OK for testing, just not for mainline.
    > >>>
    > >>> And here is the updated code that allegedly avoids splatting when run with
    > >>> CONFIG_RCU_EQS_DEBUG.
    > >>>
    > >>> Thoughts?
    > >>>
    > >>> Thanx, Paul
    > >>>
    > >>> ------------------------------------------------------------------------
    > >>>
    > >>> commit 12cd59e49cf734f907f44b696e2c6e4b46a291c3
    > >>> Author: David Woodhouse <dwmw@amazon.co.uk>
    > >>> Date: Wed Jul 11 19:01:01 2018 +0100
    > >>>
    > >>> kvm/x86: Inform RCU of quiescent state when entering guest mode
    > >>>
    > >>> RCU can spend long periods of time waiting for a CPU which is actually in
    > >>> KVM guest mode, entirely pointlessly. Treat it like the idle and userspace
    > >>> modes, and don't wait for it.
    > >>>
    > >>> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    > >>> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    > >>> [ paulmck: Adjust to avoid bad advice I gave to dwmw, avoid WARN_ON()s. ]
    > >>>
    > >>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
    > >>> index 0046aa70205a..b0c82f70afa7 100644
    > >>> --- a/arch/x86/kvm/x86.c
    > >>> +++ b/arch/x86/kvm/x86.c
    > >>> @@ -7458,7 +7458,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
    > >>>  		vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
    > >>>  	}
    > >>> 
    > >>> +	rcu_kvm_enter();
    > >>>  	kvm_x86_ops->run(vcpu);
    > >>> +	rcu_kvm_exit();
    > >>
    > >> As indicated in my other mail, this is supposed to be handled by the guest_enter|exit_ calls around
    > >> the run function, which would also cover other architectures. So if the guest_enter_irqoff code is
    > >> not good enough, we should fix that instead of adding another RCU hint.
    > >
    > > Something like this, on top of the earlier patch? I am not at all
    > > confident of this patch because there might be other entry/exit
    > > paths I am missing. Plus there might be RCU uses on the arch-specific
    > > path to and from the guest OS.
    > >
    > > Thoughts?
    > >
    >
    > If you instrument guest_enter/exit, you should cover all cases and all architectures as far
    > as I can tell. FWIW, we added this rcu_note thing back then precisely to handle this particular
    > case of long-running guests blocking RCU for many seconds. And I am pretty sure that
    > it did help back then.

    And my second patch in the email you replied to replaced the only call
    to rcu_virt_note_context_switch(). So maybe it covers what it needs to,
    but yes, there might well be things I missed. Let's see what David
    comes up with.
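
    For reference, the generic guest_enter_irqoff() hook that Christian is
    pointing at looked roughly like this in the v4.17 timeframe (a sketch
    based on include/linux/context_tracking.h; exact details vary by
    kernel version):

	static inline void guest_enter_irqoff(void)
	{
		if (vtime_accounting_cpu_enabled())
			vtime_guest_enter(current);
		else
			current->flags |= PF_VCPU;

		if (context_tracking_is_enabled())
			__context_tracking_enter(CONTEXT_GUEST);

		/*
		 * KVM holds no RCU-protected references when it switches
		 * the CPU into guest mode, so guest entry is treated much
		 * like a transition to userspace: note a quiescent state,
		 * in case the CPU stays in the guest for a long time.
		 */
		if (!context_tracking_cpu_is_enabled())
			rcu_virt_note_context_switch(smp_processor_id());
	}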

    What changed was RCU's reaction to longish grace periods. It used to
    be very aggressive about forcing the scheduler to do otherwise-unneeded
    context switches, which became a problem somewhere between v4.9 and v4.15.
    I therefore reduced the number of such context switches, which in turn
    caused KVM to tell RCU about quiescent states way too infrequently.

    The advantage of the rcu_kvm_enter()/rcu_kvm_exit() approach is that
    it tells RCU of an extended duration in the guest, which means that
    RCU can ignore the corresponding CPU, which in turn allows the guest
    to proceed without any RCU-induced interruptions.
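
    Concretely, rcu_kvm_enter()/rcu_kvm_exit() can be thought of as thin
    wrappers around RCU's existing extended-quiescent-state entry/exit,
    much like rcu_user_enter()/rcu_user_exit(). A minimal sketch, assuming
    the rcu_eqs_enter()/rcu_eqs_exit() internals in kernel/rcu/tree.c (the
    actual patch may differ):

	/* Mark the CPU invisible to RCU for the duration of guest mode. */
	void rcu_kvm_enter(void)
	{
		lockdep_assert_irqs_disabled();
		rcu_eqs_enter(true);
	}

	/* Make the CPU visible to RCU again after leaving the guest. */
	void rcu_kvm_exit(void)
	{
		lockdep_assert_irqs_disabled();
		rcu_eqs_exit(true);
	}

    Unlike a one-shot quiescent-state report, the extended quiescent state
    covers the entire interval spent in the guest, so the grace-period
    machinery never needs to interrupt the vCPU no matter how long it runs.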

    Does that make sense, or am I missing something? I freely admit to
    much ignorance of both kvm and s390! ;-)

    Thanx, Paul
