Subject: Re: [PATCH 9/11] KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked()
On 04/01/19 09:54, lantianyu1986@gmail.com wrote:
> 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
> 					  PT_PAGE_TABLE_LEVEL, slot);
> -		__rmap_write_protect(kvm, rmap_head, false);
> +		flush |= __rmap_write_protect(kvm, rmap_head, false);
>
> 		/* clear the first set bit */
> 		mask &= mask - 1;
> 	}
> +
> +	if (flush && kvm_available_flush_tlb_with_range()) {
> +		kvm_flush_remote_tlbs_with_address(kvm,
> +				slot->base_gfn + gfn_offset,
> +				hweight_long(mask));

Mask is zero here (the while loop above clears one set bit per iteration
until none remain), so hweight_long(mask) evaluates to zero and the flush
covers no pages.
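
For illustration only (this is not code from the patch; names and values
are made up), a standalone userspace sketch of the problem and of one
possible fix: snapshot the mask before the loop consumes it.
__builtin_popcountl stands in for the kernel's hweight_long().

#include <stdio.h>

/*
 * Model of the bug: the while loop clears "mask" one set bit at a time,
 * so any use of "mask" after the loop sees zero. Snapshotting the value
 * before the loop (orig_mask) is one possible fix.
 */
int main(void)
{
	unsigned long mask = 0xf0f0UL;	/* pretend dirty-page bits */
	unsigned long orig_mask = mask;	/* taken before the loop runs */

	while (mask) {
		/* ...write-protect the page for the lowest set bit... */
		mask &= mask - 1;	/* clear the first set bit */
	}

	/* hweight_long(mask) would be 0 here; the snapshot keeps the count. */
	printf("popcount(mask)      = %d\n", __builtin_popcountl(mask));
	printf("popcount(orig_mask) = %d\n", __builtin_popcountl(orig_mask));
	return 0;
}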

In addition, I suspect that calling the hypercall once for every 64 pages
is not very efficient. Passing a flush list into
kvm_mmu_write_protect_pt_masked and flushing in
kvm_arch_mmu_enable_log_dirty_pt_masked would not be efficient either,
because kvm_arch_mmu_enable_log_dirty_pt_masked is itself called once per
64-page word of the dirty bitmap.
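
To make the once-per-word granularity concrete, here is a toy userspace
model (not kernel code; the function name merely stands in for
kvm_arch_mmu_enable_log_dirty_pt_masked): the arch hook runs once per
nonzero 64-page word of the dirty bitmap, so a flush issued from inside
it still scales with the number of words.

#include <stdio.h>

#define BITMAP_WORDS 4

static int flushes;

/* Stand-in for kvm_arch_mmu_enable_log_dirty_pt_masked(). */
static void arch_enable_log_dirty_pt_masked(unsigned long gfn_offset,
					    unsigned long mask)
{
	(void)gfn_offset;
	if (mask)
		flushes++;	/* stands in for one TLB flush hypercall */
}

int main(void)
{
	unsigned long dirty_bitmap[BITMAP_WORDS] = { 0x1UL, 0x0UL, 0xffUL, 0x8UL };

	/* The generic loop walks the bitmap one 64-page word at a time. */
	for (int i = 0; i < BITMAP_WORDS; i++)
		arch_enable_log_dirty_pt_masked((unsigned long)i * 64,
						dirty_bitmap[i]);

	printf("flushes issued: %d (one per nonzero word)\n", flushes);
	return 0;
}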

I don't have any good ideas, except for moving the whole
kvm_clear_dirty_log_protect loop into architecture-specific code (which
is not the direction we want---architectures should share more code, not
less).

Paolo

> +		flush = false;
> +	}
> +
