Subject: Re: [PATCH v2 09/10] KVM: Don't take mmu_lock for range invalidation unless necessary
    On 02/04/21 02:56, Sean Christopherson wrote:
    > Avoid taking mmu_lock for unrelated .invalidate_range_{start,end}()
    > notifications. Because mmu_notifier_count must be modified while holding
    > mmu_lock for write, and must always be paired across start->end to stay
    > balanced, lock elision must happen in both or none. To meet that
    > requirement, add a rwsem to prevent memslot updates across range_start()
    > and range_end().
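
    To make that pairing concrete, here is a rough sketch of the pattern being
    described (not the actual patch: range_overlaps_memslots() is a made-up
    placeholder for the "does this range touch guest memory" check, and
    mmu_lock is shown as a rwlock for brevity):

    static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
    					const struct mmu_notifier_range *range)
    {
    	struct kvm *kvm = mmu_notifier_to_kvm(mn);

    	/*
    	 * Held for read until range_end(), so a memslot update cannot
    	 * slip in between the paired callbacks; both callbacks then see
    	 * the same memslots array and elide (or take) mmu_lock
    	 * consistently.
    	 */
    	down_read(&kvm->mmu_notifier_slots_lock);

    	/* Placeholder check, not a real helper. */
    	if (range_overlaps_memslots(kvm, range)) {
    		write_lock(&kvm->mmu_lock);
    		kvm->mmu_notifier_count++;
    		/* ... zap the affected gfn range ... */
    		write_unlock(&kvm->mmu_lock);
    	}

    	return 0;
    }

    static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
    					const struct mmu_notifier_range *range)
    {
    	struct kvm *kvm = mmu_notifier_to_kvm(mn);

    	if (range_overlaps_memslots(kvm, range)) {
    		write_lock(&kvm->mmu_lock);
    		kvm->mmu_notifier_count--;
    		write_unlock(&kvm->mmu_lock);
    	}

    	up_read(&kvm->mmu_notifier_slots_lock);
    }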
    >
    > Use a rwsem instead of a rwlock since most notifiers _allow_ blocking,
    > and the lock will be held across the entire start() ... end() sequence.
    > If anything in the sequence sleeps, including the caller or a different
    > notifier, holding the spinlock would be disastrous.
    >
    > For notifiers that _disallow_ blocking, e.g. OOM reaping, simply go down
    > the slow path of unconditionally acquiring mmu_lock. The sane
    > alternative would be to try to acquire the lock and force the notifier
    > to retry on failure. But since OOM is currently the _only_ scenario
    > where blocking is disallowed, attempting to optimize a guest that has been
    > marked for death is pointless.
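
    Sketched out, that slow path could sit at the top of range_start() roughly
    like this (again only an illustration; mmu_notifier_range_blockable() is
    the existing predicate, and the bookkeeping needed to keep start() and
    end() paired is omitted):

    	if (!mmu_notifier_range_blockable(range)) {
    		/*
    		 * OOM reaper and friends: sleeping is forbidden, so skip
    		 * the rwsem entirely and take mmu_lock unconditionally.
    		 */
    		write_lock(&kvm->mmu_lock);
    		kvm->mmu_notifier_count++;
    		/* ... zap the affected gfn range ... */
    		write_unlock(&kvm->mmu_lock);
    		return 0;
    	}

    	/* Blockable context: sleeping on the rwsem is fine. */
    	down_read(&kvm->mmu_notifier_slots_lock);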
    >
    > Unconditionally define and use mmu_notifier_slots_lock in the memslots
    > code, purely to avoid more #ifdefs. The overhead of acquiring the lock
    > is negligible when the lock is uncontested, which will always be the case
    > when the MMU notifiers are not used.
    >
    > Note, technically flag-only memslot updates could be allowed in parallel,
    > but stalling a memslot update for a relatively short amount of time is
    > not a scalability issue, and this is all more than complex enough.

    Proposal for the locking documentation:

    diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
    index b21a34c34a21..3e4ad7de36cb 100644
    --- a/Documentation/virt/kvm/locking.rst
    +++ b/Documentation/virt/kvm/locking.rst
    @@ -16,6 +16,13 @@ The acquisition orders for mutexes are as follows:
     - kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
       them together is quite rare.
     
    +- The kvm->mmu_notifier_slots_lock rwsem ensures that pairs of
    +  invalidate_range_start() and invalidate_range_end() callbacks
    +  use the same memslots array. kvm->slots_lock is taken outside the
    +  write-side critical section of kvm->mmu_notifier_slots_lock, so
    +  MMU notifiers must not take kvm->slots_lock. No other write-side
    +  critical sections should be added.
    +
     On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.
     
     Everything else is a leaf: no other lock is taken inside the critical
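
    To illustrate the ordering being documented, the memslot-update side would
    look roughly like this (a sketch only; memslot_update_sketch() and
    new_slots are placeholders, the real update path lives in
    virt/kvm/kvm_main.c):

    /* Placeholder function, not the real memslot update code. */
    static void memslot_update_sketch(struct kvm *kvm, int as_id,
    				  struct kvm_memslots *new_slots)
    {
    	mutex_lock(&kvm->slots_lock);		/* outer lock */

    	/*
    	 * Write side of the rwsem: waits for any in-flight
    	 * invalidate_range_start()/end() pair to finish before the new
    	 * memslots array is installed.  Because slots_lock is already
    	 * held here, a notifier (read side) must never take
    	 * kvm->slots_lock, or the order would invert and deadlock.
    	 */
    	down_write(&kvm->mmu_notifier_slots_lock);
    	rcu_assign_pointer(kvm->memslots[as_id], new_slots);
    	up_write(&kvm->mmu_notifier_slots_lock);

    	mutex_unlock(&kvm->slots_lock);
    }
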
    Paolo
