Subject: [PATCH 10/12] KVM: arm64: Save stage2 PTE dirty status if it is covered
There are two types of operations that will change a stage2 PTE and
may cover the dirty status set by hardware.

1. Stage2 PTE unmapping: page table merging (the reverse of huge page
table dissolving), kvm_unmap_hva_range() and so on.

2. Stage2 PTE changing: user_mem_abort(), kvm_mmu_notifier_change_pte()
and so on.

All of the operations above eventually invoke kvm_set_pte(). We should
save the dirty status into the memslot dirty bitmap.
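For context, below is a minimal sketch of what the modified
kvm_set_pte() helper is assumed to look like after the earlier patches
in this series; the return value and the dirty test are assumptions
based on the DBM scheme, where hardware marks a page dirty by setting
the stage2 write permission in its PTE:

/*
 * Hedged sketch only, not the actual helper from this series.
 * Assumption: with ARM DBM, a valid stage2 PTE whose write
 * permission (S2AP[1], i.e. PTE_S2_RDWR) is set was dirtied by
 * hardware, so overwriting it would lose that dirty information.
 */
static inline bool kvm_set_pte(pte_t *ptep, pte_t new_pte)
{
	pte_t old_pte = READ_ONCE(*ptep);

	WRITE_ONCE(*ptep, new_pte);
	dsb(ishst);

	/* Report whether we just covered a hardware-dirtied PTE. */
	return pte_valid(old_pte) &&
	       ((pte_val(old_pte) & PTE_S2_RDWR) == PTE_S2_RDWR);
}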

Question: should we acquire kvm->slots_lock when invoking
mark_page_dirty()? It seems that user_mem_abort() does not acquire
this lock when calling it.
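For reference, mark_page_dirty() at this point only resolves the gfn
to a memslot and sets the corresponding bit in that slot's
dirty_bitmap, so the structure that must be kept stable is the
memslots array, which is protected by kvm->srcu rather than
kvm->slots_lock. Roughly, from virt/kvm/kvm_main.c of this era:

void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
{
	struct kvm_memory_slot *memslot;

	memslot = gfn_to_memslot(kvm, gfn);
	mark_page_dirty_in_slot(memslot, gfn);
}

user_mem_abort() is presumably safe without taking the lock itself
because kvm_handle_guest_abort() already holds the SRCU read lock
around it; the unmap paths patched below may be reached without it,
hence the explicit srcu_read_lock()/srcu_read_unlock() pairs.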

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
arch/arm64/kvm/mmu.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 898e272a2c07..a230fbcf3889 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -294,15 +294,23 @@ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
{
phys_addr_t start_addr = addr;
pte_t *pte, *start_pte;
+ bool dirty_covered;
+ int idx;

start_pte = pte = pte_offset_kernel(pmd, addr);
do {
if (!pte_none(*pte)) {
pte_t old_pte = *pte;

- kvm_set_pte(pte, __pte(0));
- dirty_covered = kvm_set_pte(pte, __pte(0));
kvm_tlb_flush_vmid_ipa(kvm, addr);

+ if (dirty_covered) {
+ idx = srcu_read_lock(&kvm->srcu);
+ mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+ srcu_read_unlock(&kvm->srcu, idx);
+ }
+
/* No need to invalidate the cache for device mappings */
if (!kvm_is_device_pfn(pte_pfn(old_pte)))
kvm_flush_dcache_pte(old_pte);
@@ -1388,6 +1396,8 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
pte_t *pte, old_pte;
bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
bool logging_active = flags & KVM_S2_FLAG_LOGGING_ACTIVE;
+ bool dirty_covered;
+ int idx;

VM_BUG_ON(logging_active && !cache);

@@ -1453,8 +1463,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
if (pte_val(old_pte) == pte_val(*new_pte))
return 0;

- kvm_set_pte(pte, __pte(0));
- dirty_covered = kvm_set_pte(pte, __pte(0));
kvm_tlb_flush_vmid_ipa(kvm, addr);
+
+ if (dirty_covered) {
+ idx = srcu_read_lock(&kvm->srcu);
+ mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+ srcu_read_unlock(&kvm->srcu, idx);
+ }
} else {
get_page(virt_to_page(pte));
}
--
2.19.1