From: Wincy Van
Date: Mon, 19 Jan 2015
Subject: Re: [PATCH 5/5] KVM: nVMX: Enable nested posted interrupt processing.
On Mon, Jan 19, 2015 at 7:43 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Hi Wincy,
>
> there is only one thing that I don't understand in this patchset, and it is:
>
> On 16/01/2015 06:59, Wincy Van wrote:
>> + /*
>> + * if vcpu is in L2, we are fast enough to complete
>> + * before L1 changes/destroys vmcs12.
>> + */
>
> ... this comment. What do you mean exactly?
>

Hi, Paolo,

Actually, there is a race window between
vmx_deliver_nested_posted_interrupt and nested_release_vmcs12
since posted intr delivery is async:

cpu 1                                        cpu 2
(nested posted intr)                         (dest vcpu, release vmcs12)

vmcs12 = get_vmcs12(vcpu);
if (!is_guest_mode(vcpu) || !vmcs12) {
        r = -1;
        goto out;
}
                                             kunmap(vmx->nested.current_vmcs12_page);
......

oops! current vmcs12 is invalid.

However, we have already checked that the destination vcpu is in
guest mode, and if L1 wants to destroy vmcs12 (in handle_vmptrld/clear,
etc.), the dest vcpu must first do a nested vmexit and then a
non-nested vmexit (handle_vmptr***).

Hence, we can disable local interrupts while delivering nested posted
interrupts to make sure we are faster than the destination vcpu. This
is a bit tricky, but it can avoid that race. I think we do not need to
add a spin lock here. RCU does not fit this case, since it would
introduce a new race window between the RCU handler and handle_vmptr**.
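
To make it concrete, the delivery path would look roughly like this
(just a sketch of the idea, not the exact patch; the PIR write and the
actual notification send are elided):

static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
                                               int vector)
{
        struct vmcs12 *vmcs12;
        unsigned long flags;
        int r = 0;

        /* Keep this cpu from being interrupted while we look at the
         * dest vcpu's vmcs12, so the delivery completes quickly.
         */
        local_irq_save(flags);

        vmcs12 = get_vmcs12(vcpu);
        if (!is_guest_mode(vcpu) || !vmcs12) {
                r = -1;
                goto out;
        }

        /*
         * vmcs12 is still mapped here: before L1 can reach
         * handle_vmptrld/handle_vmclear, the dest vcpu has to do a
         * nested vmexit plus a non-nested vmexit, and we finish the
         * delivery before that.
         */
        /* ... set bit 'vector' in the PIR, set ON, and send the
         * notification to the dest vcpu ...
         */

out:
        local_irq_restore(flags);
        return r;
}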

I am wondering whether there is a better way : )

Thanks,

Wincy

