    Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault
    On 05/03/2012 05:10 AM, Marcelo Tosatti wrote:

    > On Wed, May 02, 2012 at 01:39:51PM +0800, Xiao Guangrong wrote:
    >> On 04/29/2012 04:50 PM, Takuya Yoshikawa wrote:
    >>> On Fri, 27 Apr 2012 11:52:13 -0300
    >>> Marcelo Tosatti <> wrote:
    >>>> Yes but the objective you are aiming for is to read and write sptes
    >>>> without mmu_lock. That is, i am not talking about this patch.
    >>>> Please carefully read the two examples I gave (separated by "example)").
    >>> The real objective is still not clear.
    >>> The ~10% improvement reported before was on macro benchmarks during live
    >>> migration. At least, that optimization was the initial objective.
    >>> But at some point, the objective suddenly changed to "lock-less" without
    >>> understanding what introduced the original improvement.
    >>> Was the problem really mmu_lock contention?
    >> Takuya, I am tired of arguing the advantages of lockless write-protect
    >> and lockless O(1) dirty-log again and again.
    > His point is valid: there is a lack of understanding of the details of
    > the improvement.

    Actually, the improvement from going lockless is that it lets the vcpus run
    in parallel as much as possible.

    From the test results, lockless gains little improvement for unix-migration;
    in that case the vcpus are almost idle (at least not busy).

    The large improvement comes from dbench-migration; in that case all vcpus are
    busy accessing memory that is write-protected for dirty logging. If you enable
    the page-fault/fast-page-fault tracepoints, you can see a huge number of page
    faults from different vcpus during the migration.
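
    To make this concrete, here is a rough sketch of the lockless fast path
    (illustrative only, not the exact patch code; the helper name is made up,
    though cmpxchg64 and PT_WRITABLE_MASK follow arch/x86/kvm/mmu.c conventions):

        /*
         * Sketch: a write fault on an spte that was write-protected
         * purely for dirty logging can be fixed without mmu_lock by
         * atomically setting the writable bit.  If another path
         * changed the spte in the meantime, the cmpxchg fails and we
         * fall back to the slow path that takes mmu_lock.
         */
        static bool fast_pf_fix_spte(u64 *sptep, u64 old_spte)
        {
                return cmpxchg64(sptep, old_spte,
                                 old_spte | PT_WRITABLE_MASK) == old_spte;
        }

    The key point is that no lock is held: if the cmpxchg succeeds, the fault
    was handled entirely in parallel with the other vcpus.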

    > Did you see the pahole output on struct kvm? Apparently mmu_lock is
    > sharing a cacheline with the read-intensive memslots pointer. It would be
    > interesting to see the effects of cacheline-aligning mmu_lock.

    Yes, I see that. In my test .config I have CONFIG_DEBUG_SPINLOCK and
    CONFIG_DEBUG_LOCK_ALLOC enabled, so mmu_lock does not share a cacheline
    with memslots; that means it was not a problem during my test.
    (BTW, pahole does not work on my box; it reports:
    die__process_function: DW_TAG_INVALID (0x4109) @ <0x12886> not handled!)

    If we reorganize 'struct kvm', I guess it is good for KVM in general, but it
    will not improve migration very much. :)
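
    For reference, the kind of reorganization being discussed would look roughly
    like this (an illustration, not a proposed patch; the real struct kvm has
    many more fields):

        /*
         * Illustration only: keep mmu_lock off the cacheline that holds
         * the read-intensive memslots pointer by giving the lock its own
         * cacheline on SMP builds.
         */
        struct kvm {
                struct kvm_memslots *memslots;  /* read-intensive */
                /* ... */
                spinlock_t mmu_lock ____cacheline_aligned_in_smp;
                /* ... */
        };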
