Subject: RE: [PATCH Part2 v6 26/49] KVM: SVM: Add KVM_SEV_SNP_LAUNCH_UPDATE command
Date: 2022-08-16

Hello Sabin,

>> +static bool is_hva_registered(struct kvm *kvm, hva_t hva, size_t len)
>> +{
>> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>> +	struct list_head *head = &sev->regions_list;
>> +	struct enc_region *i;
>> +
>> +	lockdep_assert_held(&kvm->lock);
>> +
>> +	list_for_each_entry(i, head, list) {
>> +		u64 start = i->uaddr;
>> +		u64 end = start + i->size;
>> +
>> +		if (start <= hva && end >= (hva + len))
>> +			return true;
>> +	}
>> +
>> +	return false;
>> +}

>Since KVM_MEMORY_ENCRYPT_REG_REGION should be called for every memory region the user gives to kvm, is the regions_list any different from kvm's memslots?

Actually, KVM_MEMORY_ENCRYPT_REG_REGION is called, and the regions_list is set up, only for the guest RAM blocks, so it does not mirror all of kvm's memslots.
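
To make that concrete, here is a minimal userspace sketch (not from the thread; register_ram_block, vm_fd, ram_ptr and ram_size are placeholder names, not QEMU code) of the one call per RAM block that populates regions_list:

	#include <stddef.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/*
	 * Illustrative sketch: how a userspace VMM registers a guest RAM
	 * block so that it ends up on the kernel-side regions_list.
	 */
	static int register_ram_block(int vm_fd, void *ram_ptr, size_t ram_size)
	{
		struct kvm_enc_region range = {
			.addr = (__u64)(unsigned long)ram_ptr,
			.size = ram_size,
		};

		/* Called once per RAM block; non-RAM mappings are not registered. */
		return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
	}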

>> +
>> +static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
>> +{
>> +	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>> +	struct sev_data_snp_launch_update data = {0};
>> +	struct kvm_sev_snp_launch_update params;
>> +	unsigned long npages, pfn, n = 0;
>> +	int *error = &argp->error;
>> +	struct page **inpages;
>> +	int ret, i, level;
>> +	u64 gfn;
>> +
>> +	if (!sev_snp_guest(kvm))
>> +		return -ENOTTY;
>> +
>> +	if (!sev->snp_context)
>> +		return -EINVAL;
>> +
>> +	if (copy_from_user(&params, (void __user *)(uintptr_t)argp->data, sizeof(params)))
>> +		return -EFAULT;
>> +
>> +	/* Verify that the specified address range is registered. */
>> +	if (!is_hva_registered(kvm, params.uaddr, params.len))
>> +		return -EINVAL;
>> +
>> +	/*
>> +	 * The userspace memory is already locked so technically we don't
>> +	 * need to lock it again. Later part of the function needs to know
>> +	 * pfn so call the sev_pin_memory() so that we can get the list of
>> +	 * pages to iterate through.
>> +	 */
>> +	inpages = sev_pin_memory(kvm, params.uaddr, params.len, &npages, 1);
>> +	if (!inpages)
>> +		return -ENOMEM;

>sev_pin_memory will call pin_user_pages() which fails for PFNMAP vmas that you would get if you use memory allocated from an IO driver.
>Using gfn_to_pfn instead will make this work with vmas backed by pages or raw pfn mappings.

All the guest memory is allocated via the userspace VMM, so how and where would we get memory allocated from an I/O driver?
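
(For reference, a rough sketch of the gfn_to_pfn() flow you describe, reusing the gfn, i and npages locals from the function above; "start_gfn" is an assumed parameter and this is illustrative only, not code from the patch:)

	/*
	 * Illustrative only: resolve each guest page through KVM's
	 * memslots instead of pinning the HVA range up front.
	 * gfn_to_pfn() handles both page-backed and VM_PFNMAP vmas.
	 */
	for (gfn = start_gfn, i = 0; i < npages; i++, gfn++) {
		kvm_pfn_t pfn = gfn_to_pfn(kvm, gfn);

		if (is_error_noslot_pfn(pfn))
			return -EFAULT;

		/* ... issue the SNP launch-update firmware call for this pfn ... */

		kvm_release_pfn_clean(pfn);
	}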

Thanks,
Ashish
