    Subject: Re: [Patch v2 2/5] KVM: x86/mmu: Optimize SPTE change flow for clear-dirty-log
    On Mon, Feb 6, 2023 at 2:06 PM Ben Gardon <bgardon@google.com> wrote:
    >
    > On Fri, Feb 3, 2023 at 11:28 AM Vipin Sharma <vipinsh@google.com> wrote:
    > >
    > > +static inline u64 kvm_tdp_mmu_clear_spte_bit(struct tdp_iter *iter, u64 mask)
    > > +{
    > > +	atomic64_t *sptep;
    > > +
    > > +	if (kvm_tdp_mmu_spte_has_volatile_bits(iter->old_spte, iter->level)) {
    > > +		sptep = (atomic64_t *)rcu_dereference(iter->sptep);
    > > +		return (u64)atomic64_fetch_and(~mask, sptep);
    > > +	}
    > > +
    > > +	__kvm_tdp_mmu_write_spte(iter->sptep, iter->old_spte & ~mask);
    > > +	return iter->old_spte;
    > > +}
    > > +
    >
    > If you do end up sending another version of the series and feel like
    > breaking up this patch, you could probably split this part out since
    > the change to how we set the SPTE and how we handle that change are
    > somewhat independent. I like the switch to atomic64_fetch_and though.
    > No idea if it's faster, but I would believe it could be.

    The changes are actually dependent. Using the atomic-AND makes it
    impossible for KVM to clear a volatile bit that it wasn't intending to
    clear, and that is what enables simplifying the SPTE change
    propagation.
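
    To make the race concrete, here is a minimal userspace sketch
    using C11 atomics; it is not KVM code, and the bit names and
    helpers are invented for illustration. A second thread standing
    in for hardware could set BIT_DIRTY at any moment: the plain
    read-modify-write can silently overwrite that update, while
    atomic_fetch_and() can only clear the bits in the mask.

	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	#define BIT_WRITABLE	(1ull << 1)	/* bit the clear path targets */
	#define BIT_DIRTY	(1ull << 9)	/* volatile bit "hardware" may set */

	static _Atomic uint64_t spte = BIT_WRITABLE;

	/* Racy: a BIT_DIRTY set between the load and the store is lost. */
	static uint64_t clear_bit_racy(uint64_t mask)
	{
		uint64_t old = atomic_load(&spte);

		atomic_store(&spte, old & ~mask);	/* may clobber BIT_DIRTY */
		return old;
	}

	/* Safe: only the bits in @mask can be cleared, nothing else. */
	static uint64_t clear_bit_atomic(uint64_t mask)
	{
		return atomic_fetch_and(&spte, ~mask);
	}

	int main(void)
	{
		uint64_t old;

		old = clear_bit_racy(BIT_WRITABLE);	/* lossy variant */
		atomic_store(&spte, BIT_WRITABLE);	/* reset for comparison */
		old = clear_bit_atomic(BIT_WRITABLE);	/* safe variant */

		printf("old=%#llx new=%#llx\n",
		       (unsigned long long)old,
		       (unsigned long long)atomic_load(&spte));
		return 0;
	}

    The old value handed back by the fetch-and is also what lets the
    caller propagate the change without a second read of the SPTE,
    which is the simplification referred to above.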

    Instead I would split out the opportunistic cleanup of
    @record_dirty_log to a separate patch, since that's dead-code cleanup
    and refactoring.
