 
    Subject: Re: [PATCH] KVM swapping with MMU Notifiers V7
    On Sat, 16 Feb 2008 11:48:27 +0100 Andrea Arcangeli <andrea@qumranet.com> wrote:

    > +void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
    > +					    struct mm_struct *mm,
    > +					    unsigned long start, unsigned long end,
    > +					    int lock)
    > +{
    > +	for (; start < end; start += PAGE_SIZE)
    > +		kvm_mmu_notifier_invalidate_page(mn, mm, start);
    > +}
    > +
    > +static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
    > +	.invalidate_page = kvm_mmu_notifier_invalidate_page,
    > +	.age_page = kvm_mmu_notifier_age_page,
    > +	.invalidate_range_end = kvm_mmu_notifier_invalidate_range_end,
    > +};

    So this doesn't implement ->invalidate_range_start().

    By what means does it prevent new mappings from being established in the
    range after core mm has tried to call ->invalidate_range_start()?
    mmap_sem, I assume?
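
    (Not something this patch does -- for illustration only, and the helper and
    field names below are hypothetical: one common way to close that window is
    to pair the start/end callbacks with a count that the shadow-fault path
    rechecks before it establishes a new mapping, roughly along these lines.)

    static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
    			struct mm_struct *mm,
    			unsigned long start, unsigned long end)
    {
    	struct kvm *kvm = mmu_notifier_to_kvm(mn);	/* hypothetical helper */

    	spin_lock(&kvm->mmu_lock);
    	/*
    	 * While this count is elevated the fault path backs off and retries,
    	 * so no new mapping can appear in [start, end) between range_start
    	 * and range_end.
    	 */
    	kvm->mmu_notifier_count++;			/* hypothetical field */
    	spin_unlock(&kvm->mmu_lock);
    }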


    > +	/* set userspace_addr atomically for kvm_hva_to_rmapp */
    > +	spin_lock(&kvm->mmu_lock);
    > +	memslot->userspace_addr = userspace_addr;
    > +	spin_unlock(&kvm->mmu_lock);

    Are you sure? kvm_unmap_hva() and kvm_age_hva() read ->userspace_addr a
    single time, and it doesn't immediately look like there's a need to take
    the lock here?
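
    (The reader I have in mind is roughly the following -- an approximation for
    discussion, not the exact code from the patch.)

    static unsigned long *kvm_hva_to_rmapp(struct kvm *kvm, unsigned long hva)
    {
    	int i;

    	for (i = 0; i < kvm->nmemslots; i++) {
    		struct kvm_memory_slot *memslot = &kvm->memslots[i];
    		/* ->userspace_addr is read exactly once per slot */
    		unsigned long start = memslot->userspace_addr;
    		unsigned long end = start + (memslot->npages << PAGE_SHIFT);

    		if (hva >= start && hva < end)
    			return &memslot->rmap[(hva - start) >> PAGE_SHIFT];
    	}
    	return NULL;
    }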



