Subject: RE: [PATCH Part2 v6 28/49] KVM: SVM: Add KVM_SEV_SNP_LAUNCH_FINISH command

Hello Peter,

>> The KVM_SEV_SNP_LAUNCH_FINISH command finalizes the cryptographic digest and
>> stores it as the measurement of the guest at launch.
>>
>> While finalizing the launch flow, it also issues the LAUNCH_UPDATE
>> command to encrypt the VMSA pages.

> Given the guest uses the SNP NAE AP boot protocol, we were expecting that there would be some option to add
> vCPUs to the VM but mark them as being in a "pending AP boot creation protocol" state. This would allow the
> LaunchDigest of a VM to stay the same even when its vCPU count changes. Would it be possible to add a new
> argument to KVM_SNP_LAUNCH_FINISH to tell it which vCPUs to LAUNCH_UPDATE VMSA pages for, or similarly a new
> argument for KVM_CREATE_VCPU?

But don't we want/need to measure all the vCPUs using LAUNCH_UPDATE_VMSA before we issue the SNP_LAUNCH_FINISH command?

If we are going to add vCPUs and mark them as being in a "pending AP boot creation" state, then how are we going to do LAUNCH_UPDATE_VMSAs for them after SNP_LAUNCH_FINISH?
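
Just to make sure I understand the suggestion, is it something along these lines (purely illustrative, the field names below are made up for discussion and are not part of this series)?

        /* Hypothetical extension of the launch-finish parameters, for discussion only */
        struct kvm_sev_snp_launch_finish {
                ...
                __u64 vmsa_vcpu_bitmap_uaddr;   /* userspace bitmap of vCPU ids whose VMSAs get a LAUNCH_UPDATE */
                __u32 vmsa_vcpu_bitmap_size;    /* bitmap size in bytes */
                ...
        };

vCPUs outside such a bitmap would stay in the "pending AP creation" state, but presumably we would still need to measure at least the boot vCPU's VMSA before the digest is finalized.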

>> +int snp_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
>> +{
>> +        struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>> +        struct sev_data_snp_launch_update data = {};
>> +        int i, ret;
>> +
>> +        data.gctx_paddr = __psp_pa(sev->snp_context);
>> +        data.page_type = SNP_PAGE_TYPE_VMSA;
>> +
>> +        for (i = 0; i < kvm->created_vcpus; i++) {
>> +                struct vcpu_svm *svm = to_svm(xa_load(&kvm->vcpu_array, i));

> Why are we iterating over |created_vcpus| rather than using kvm_for_each_vcpu?

Yes, we should be using kvm_for_each_vcpu(); that will also help avoid touching implementation-specific
details and hide complexities such as xa_load(), locking requirements, etc.

Additionally, kvm_for_each_vcpu() iterates over online_vcpus, but I think that is what we should
be considering at LAUNCH_UPDATE_VMSA time, vis-a-vis created_vcpus.
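
Something along the lines of the below sketch (untested, just to show the direction; the loop body is the same as in the patch, with the firmware LAUNCH_UPDATE call elided):

        unsigned long i;
        struct kvm_vcpu *vcpu;
        int ret;

        /* Iterate only over online vCPUs and hide the vcpu_array/xa_load() details */
        kvm_for_each_vcpu(i, vcpu, kvm) {
                struct vcpu_svm *svm = to_svm(vcpu);
                u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;

                /* Perform some pre-encryption checks against the VMSA */
                ret = sev_es_sync_vmsa(svm);
                if (ret)
                        return ret;

                /* ... pass pfn to the SNP LAUNCH_UPDATE firmware call (elided) ... */
        }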

>> +                u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;
>> +
>> +                /* Perform some pre-encryption checks against the VMSA */
>> +                ret = sev_es_sync_vmsa(svm);
>> +                if (ret)
>> +                        return ret;

> Do we need to take the 'vcpu->mutex' lock before modifying the vcpu, like we do for SEV-ES in sev_launch_update_vmsa()?

This is using the per-vCPU vcpu_svm structure, but we may need to guard against concurrent KVM vCPU ioctls, so yes, it is
safer to take the 'vcpu->mutex' lock here.
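
i.e. bracketing the per-vCPU work the same way sev_launch_update_vmsa() does for SEV-ES (sketch only, reusing the loop from the previous sketch):

        kvm_for_each_vcpu(i, vcpu, kvm) {
                ret = mutex_lock_killable(&vcpu->mutex);
                if (ret)
                        return ret;

                ret = sev_es_sync_vmsa(to_svm(vcpu));
                if (!ret) {
                        /* ... SNP LAUNCH_UPDATE of this vCPU's VMSA page (elided) ... */
                }

                mutex_unlock(&vcpu->mutex);
                if (ret)
                        return ret;
        }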

>> +        /*
>> +         * If its an SNP guest, then VMSA was added in the RMP entry as
>> +         * a guest owned page. Transition the page to hypervisor state
>> +         * before releasing it back to the system.
>> +         * Also the page is removed from the kernel direct map, so flush it
>> +         * later after it is transitioned back to hypervisor state and
>> +         * restored in the direct map.
>> +         */
>> +        if (sev_snp_guest(vcpu->kvm)) {
>> +                u64 pfn = __pa(svm->sev_es.vmsa) >> PAGE_SHIFT;
>> +
>> +                if (host_rmp_make_shared(pfn, PG_LEVEL_4K, false))
>> +                        goto skip_vmsa_free;

> Why not call host_rmp_make_shared with leak==true? This old VMSA page is now unusable IIUC.

Yes, the old VMSA page is now unavailable and lost, so it makes sense to call host_rmp_make_shared() with leak==true.
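
i.e. simply:

        /* The VMSA page is lost if the RMP transition fails, so leak it rather than reuse it */
        if (host_rmp_make_shared(pfn, PG_LEVEL_4K, true))
                goto skip_vmsa_free;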

Thanks,
Ashish