Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault
Date: 2012-05-03

    On 05/03/2012 05:07 AM, Marcelo Tosatti wrote:


    >> 'entry' is not a problem since it comes from an atomic read-and-write as
    >> mentioned above; I need to change this code to:
    >>
    >> /*
    >> * Optimization: for pte sync, if spte was writable the hash
    >> * lookup is unnecessary (and expensive). Write protection
    >> * is responsibility of mmu_get_page / kvm_sync_page.
    >> * Same reasoning can be applied to dirty page accounting.
    >> */
    >> if (!can_unsync && is_writable_pte(entry)) /* Use 'entry' instead of '*sptep'. */
    >> goto set_pte;
    >> ......
    >>
    >>
    >> if (is_writable_pte(entry) && !is_writable_pte(spte)) /* Use 'spte' instead of '*sptep'. */
    >> kvm_flush_remote_tlbs(vcpu->kvm);
    >
    > What is of more importance than the ability to verify that this or that
    > particular case is ok at the moment is to write code in such a way that
    > it is easy to verify that it is correct.
    >
    > Thus the suggestion above:
    >
    > "scattered all over (as mentioned before, i think a pattern of read spte
    > once, work on top of that, atomically write and then deal with results
    > _everywhere_ (where mmu lock is held) is more consistent."
    >


    Marcelo, thanks for taking the time to patiently review and reply to my mail.

    I am confused by '_everywhere_': does it mean every path that reads/updates
    the spte? Why not only verify the paths which depend on is_writable_pte()?

    Is it for the reason that "it is easy to verify that it is correct"? But these
    paths are safe since they do not care about PT_WRITABLE_MASK at all. What these
    paths do care about is that the Dirty bit and the Accessed bit are not lost,
    which is why we always treat the spte as "volatile" if it can be updated out
    of mmu-lock.
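
    To make the "treat the spte as volatile" point concrete, here is a minimal
    sketch (not the in-tree code) of how an updater under mmu-lock could avoid
    losing Accessed/Dirty bits set by a lockless writer. The predicate
    spte_can_be_updated_locklessly() is made up for this example; xchg(),
    shadow_accessed_mask/shadow_dirty_mask, spte_to_pfn() and
    kvm_set_pfn_accessed()/kvm_set_pfn_dirty() follow the existing naming:

    static void example_update_spte(u64 *sptep, u64 new_spte)
    {
    	u64 old_spte;

    	if (!spte_can_be_updated_locklessly(*sptep)) {
    		/* No lockless writer can touch it, a plain write is enough. */
    		old_spte = *sptep;
    		*sptep = new_spte;
    	} else {
    		/*
    		 * A lockless path may set the Accessed/Dirty bits under us,
    		 * so fetch the old value with an atomic exchange to make
    		 * sure nothing it wrote is dropped.
    		 */
    		old_spte = xchg(sptep, new_spte);
    	}

    	if (old_spte & shadow_accessed_mask)
    		kvm_set_pfn_accessed(spte_to_pfn(old_spte));
    	if (old_spte & shadow_dirty_mask)
    		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
    }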

    For further development? We can add a delta comment to is_writable_pte()
    to warn developers to use it more carefully.
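
    For example (only a sketch of the wording, not a formal patch; the function
    body below is roughly what mmu.c has today), the comment could say:

    /*
     * Note: is_writable_pte() only tests PT_WRITABLE_MASK in the value it is
     * given. If the spte can be updated out of mmu-lock, a previously fetched
     * value may already be stale, so callers that rely on the writable bit
     * for correctness (e.g. to decide whether a tlb flush is needed) must
     * re-read the spte atomically or handle the race explicitly.
     */
    static int is_writable_pte(unsigned long pte)
    {
    	return pte & PT_WRITABLE_MASK;
    }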

    It is also very hard to verify the spte everywhere. :(

    Actually, the current code that cares about PT_WRITABLE_MASK exists just for
    the tlb flush; maybe we can fold it into mmu_spte_update.
    [
    There are three ways to modify an spte: present -> nonpresent,
    nonpresent -> present, and present -> present.

    But we only need to care about present -> present for the lockless case.
    ]

    /*
     * Returning true means we need to flush tlbs because the spte was changed
     * from writable to read-only.
     */
    bool mmu_update_spte(u64 *sptep, u64 spte)
    {
    	u64 last_spte, old_spte = *sptep;
    	bool flush = false;

    	last_spte = xchg(sptep, spte);

    	if ((is_writable_pte(last_spte) ||
    	     spte_has_updated_lockless(old_spte, last_spte)) &&
    	    !is_writable_pte(spte))
    		flush = true;

    	/* .... track Dirty/Accessed bit .... */

    	return flush;
    }
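
    With that folded in, a caller such as set_spte would only need something
    like the following (a hypothetical call site, assuming the mmu_update_spte()
    above), instead of open-coding the flush decision at each site:

    	if (mmu_update_spte(sptep, new_spte))
    		kvm_flush_remote_tlbs(vcpu->kvm);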

    Furthermore, the style of "if (spte-has-changed) goto beginning" is feasible
    in set_spte since this path is a fast path. (I can speed up
    mmu_need_write_protect.)
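
    Only to illustrate that "goto beginning" style (a rough sketch of the retry
    pattern; compute_new_spte() is a made-up placeholder and set_spte's real
    logic is not reproduced here):

    static void example_set_spte_retry(u64 *sptep)
    {
    	u64 old_spte, new_spte;

    beginning:
    	old_spte = ACCESS_ONCE(*sptep);		/* read the spte once */
    	new_spte = compute_new_spte(old_spte);	/* work on top of that copy */

    	/* Publish only if the spte did not change under us, else redo. */
    	if (cmpxchg64(sptep, old_spte, new_spte) != old_spte)
    		goto beginning;
    }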



