Subject: [PATCH v10 046/108] KVM: Add flags to struct kvm_gfn_range
From: Isaku Yamahata <isaku.yamahata@intel.com>

kvm_unmap_gfn_range() needs to know why it is being called so that TDX can
act accordingly: from the mmu notifier, from the set memory attributes
ioctl, or from the restrictedmem notifier.  For the mmu notifier, the
operation targets a shared memory slot, so zap the shared PTEs.  For the
set memory attributes ioctl, a private<->shared conversion is in progress,
so zap the PTEs of the original mapping.  For restrictedmem, the callback
is only a hint that TDX can ignore.
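
As an illustration only (this sketch is not part of the patch, and
example_unmap_gfn_range() below is a made-up placeholder rather than the
real TDX code), an arch-side kvm_unmap_gfn_range() handler could branch
on the new flags roughly like this:

	/*
	 * Hypothetical sketch, not part of this patch: dispatch on
	 * range->flags.  The actual zap logic is elided.
	 */
	static bool example_unmap_gfn_range(struct kvm *kvm,
					    struct kvm_gfn_range *range)
	{
		if (range->flags & KVM_GFN_RANGE_FLAGS_RESTRICTED_MEM)
			return false;	/* hint only; no zap, no TLB flush */

		if (range->flags & KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR) {
			/* private<->shared conversion: union holds ->attr, not ->pte */
			/* ... zap the PTEs of the original (pre-conversion) mapping ... */
			return true;	/* caller flushes TLBs */
		}

		/* mmu notifier on a shared memslot: zap the shared PTEs */
		/* ... zap [range->start, range->end) in range->slot ... */
		return true;
	}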

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 include/linux/kvm_host.h | 8 +++++++-
 virt/kvm/kvm_main.c      | 5 ++++-
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 839d98d56632..b658803ea2c7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -247,12 +247,18 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 
 
 #if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_HAVE_KVM_RESTRICTED_MEM)
+#define KVM_GFN_RANGE_FLAGS_RESTRICTED_MEM	BIT(0)
+#define KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR	BIT(1)
 struct kvm_gfn_range {
 	struct kvm_memory_slot *slot;
 	gfn_t start;
 	gfn_t end;
-	pte_t pte;
+	union {
+		pte_t pte;
+		int attr;
+	};
 	bool may_block;
+	unsigned int flags;
 };
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3b05a3396f89..dda2f2ec4faa 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -676,6 +676,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 			gfn_range.start = hva_to_gfn_memslot(hva_start, slot);
 			gfn_range.end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
 			gfn_range.slot = slot;
+			gfn_range.flags = 0;
 
 			if (!locked) {
 				locked = true;
@@ -947,8 +948,9 @@ static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end,
 	int i;
 	int r = 0;
 
-	gfn_range.pte = __pte(0);
+	gfn_range.attr = attr;
 	gfn_range.may_block = true;
+	gfn_range.flags = KVM_GFN_RANGE_FLAGS_SET_MEM_ATTR;
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 		slots = __kvm_memslots(kvm, i);
@@ -1074,6 +1076,7 @@ static void kvm_restrictedmem_invalidate_begin(struct restrictedmem_notifier *no
 	gfn_range.slot = slot;
 	gfn_range.pte = __pte(0);
 	gfn_range.may_block = true;
+	gfn_range.flags = KVM_GFN_RANGE_FLAGS_RESTRICTED_MEM;
 
 	if (kvm_unmap_gfn_range(kvm, &gfn_range))
 		kvm_flush_remote_tlbs(kvm);
    --
    2.25.1