Subject: Re: [PATCH v4 5/5] x86, kvm: support vcpu preempted check
2016-10-24 16:39+0200, Paolo Bonzini:
> On 19/10/2016 19:24, Radim Krčmář wrote:
>>> > +	if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED)
>>> > +		if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
>>> > +					  &vcpu->arch.st.steal,
>>> > +					  sizeof(struct kvm_steal_time)) == 0) {
>>> > +			vcpu->arch.st.steal.preempted = 1;
>>> > +			kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
>>> > +					       &vcpu->arch.st.steal,
>>> > +					       sizeof(struct kvm_steal_time));
>>> > +		}
>> Please name this block of code. Something like
>> kvm_steal_time_set_preempted(vcpu);
>
> While at it:
>
> 1) the kvm_read_guest_cached is not necessary. You can rig the call to
> kvm_write_guest_cached so that it only writes vcpu->arch.st.steal.preempted.

I agree. kvm_write_guest_cached() always writes from offset 0, so we'd
want a new function that lets the caller specify a starting offset.
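
Untested sketch of such a function, mirroring kvm_write_guest_cached()
(the name and exact signature are provisional):

/* Like kvm_write_guest_cached(), but write @len bytes starting @offset
 * bytes into the cached region instead of always starting at 0. */
int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
				  void *data, int offset, unsigned long len)
{
	struct kvm_memslots *slots = kvm_memslots(kvm);
	gpa_t gpa = ghc->gpa + offset;
	int r;

	BUG_ON(len + offset > ghc->len);

	/* Revalidate the cache if the memslots changed under us. */
	if (slots->generation != ghc->generation)
		kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa, ghc->len);

	if (unlikely(!ghc->memslot))
		return kvm_write_guest(kvm, gpa, data, len);

	if (kvm_is_error_hva(ghc->hva))
		return -EFAULT;

	r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
	if (r)
		return -EFAULT;
	mark_page_dirty_in_slot(ghc->memslot, gpa >> PAGE_SHIFT);

	return 0;
}

kvm_write_guest_cached() could then become a wrapper that passes offset 0.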

Using the cached vcpu->arch.st.steal to avoid the read wouldn't be as good.
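
With that, the block above could turn into the named helper and write
just the one byte, no read required (again untested, assuming the
function sketched above):

static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
{
	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
		return;

	vcpu->arch.st.steal.preempted = 1;

	/* Only the 'preempted' field is written to guest memory. */
	kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
			&vcpu->arch.st.steal.preempted,
			offsetof(struct kvm_steal_time, preempted),
			sizeof(vcpu->arch.st.steal.preempted));
}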
