Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
On 03/12/19 20:13, Sean Christopherson wrote:
> The setting of as_id is wrong, both with and without a vCPU. as_id should
> come from slot->as_id.

Which doesn't exist, but is an excellent suggestion nevertheless.
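
Something like the following is what I would expect; just a sketch, the
field does not exist today and the neighboring members are quoted from
memory:

        struct kvm_memory_slot {
                gfn_t base_gfn;
                unsigned long npages;
                /* ... */
                short id;
                short as_id;    /* address space this slot belongs to */
        };

so that the push in the hunk below can become

        ret = kvm_dirty_ring_push(ring, indexes,
                                  ((u32)slot->as_id << 16) | slot->id,
                                  offset, is_vm_ring);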

>> +                /*
>> +                 * Put onto per vm ring because no vcpu context.  Kick
>> +                 * vcpu0 if ring is full.
>> +                 */
>> +                vcpu = kvm->vcpus[0];
>
> Is this a rare event?

Yes. Note that every time a vCPU exit happens, the vCPU is supposed to
reap the VM ring as well. (Most of the time the ring will be empty;
and while reaping VM ring entries needs locking, the emptiness check
doesn't.)
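
Roughly like this, just to show the pattern (the names are made up for
illustration and are not in the patch):

        static void kvm_reap_vm_ring(struct kvm *kvm)
        {
                struct kvm_dirty_ring *ring = &kvm->vm_dirty_ring;

                /*
                 * Lock-free emptiness check.  It is racy, but an entry
                 * pushed concurrently is simply picked up on the next
                 * vCPU exit, so nothing is lost.
                 */
                if (READ_ONCE(ring->dirty_index) ==
                    READ_ONCE(ring->reset_index))
                        return;

                /* Actually consuming entries does need the lock. */
                spin_lock(&kvm->vm_ring_lock);
                kvm_dirty_ring_reap(kvm, ring);
                spin_unlock(&kvm->vm_ring_lock);
        }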

Paolo

>> +                ring = &kvm->vm_dirty_ring;
>> +                indexes = &kvm->vm_run->vm_ring_indexes;
>> +                is_vm_ring = true;
>> +        }
>> +
>> +        ret = kvm_dirty_ring_push(ring, indexes,
>> +                                  (as_id << 16) | slot->id, offset,
>> +                                  is_vm_ring);
>> +        if (ret < 0) {
>> +                if (is_vm_ring)
>> +                        pr_warn_once("per-vm dirty log overflow\n");
>> +                else
>> +                        pr_warn_once("vcpu %d dirty log overflow\n",
>> +                                     vcpu->vcpu_id);
>> +                return;
>> +        }
>> +
>> +        if (ret)
>> +                kvm_make_request(KVM_REQ_DIRTY_RING_FULL, vcpu);
>> +}
>
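
For reference, the contract the caller above relies on; the prototype is
reconstructed from the call site, not copied from the patch:

        /*
         * Return value convention, as implied by the caller:
         *  < 0 - the ring overflowed and the entry was dropped
         *  > 0 - entry pushed and the ring is now full; the caller
         *        should force a vCPU exit so userspace can harvest it
         *    0 - entry pushed, the ring still has room
         */
        int kvm_dirty_ring_push(struct kvm_dirty_ring *ring,
                                struct kvm_dirty_ring_indexes *indexes,
                                u32 slot_id, u64 offset, bool is_vm_ring);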
