Subject: Re: [PATCH] kvm: lock slots_lock around device assignment
On Wed, 2012-04-18 at 23:30 -0300, Marcelo Tosatti wrote:
> On Tue, Apr 17, 2012 at 09:46:44PM -0600, Alex Williamson wrote:
> > @@ -340,7 +343,11 @@ int kvm_iommu_unmap_guest(struct kvm *kvm)
> >  	if (!domain)
> >  		return 0;
> >  
> > +	mutex_lock(&kvm->slots_lock);
> >  	kvm_iommu_unmap_memslots(kvm);
> > +	kvm->arch.iommu_domain = NULL;
> > +	mutex_unlock(&kvm->slots_lock);
> > +
> >  	iommu_domain_free(domain);
> >  	return 0;
> >  }
>
> This might trigger lockdep warnings due to
>
> kvm_vm_ioctl_create_vcpu
>   mutex_lock(&kvm->lock)
>     kvm_put_kvm(kvm)
>       kvm_destroy_vm
>         kvm_iommu_unmap_guest
>
> sequence.
>
> Better drop it; it is not necessary in the vm destruction
> path (since the only user at that point is the destroying task itself).
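
For reference, here is a rough sketch (not the actual kernel code) of the
nesting that sequence would create if the final reference really were
dropped there:

	mutex_lock(&kvm->lock);		/* in kvm_vm_ioctl_create_vcpu() */
	...				/* vcpu creation fails */
	kvm_put_kvm(kvm);		/* if this dropped the last reference:
					 *   kvm_destroy_vm()
					 *     kvm_iommu_unmap_guest()
					 *       mutex_lock(&kvm->slots_lock)
					 *       ^ nested inside kvm->lock */
	mutex_unlock(&kvm->lock);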

I actually ran this with lockdep and didn't generate a warning;
hopefully I had it configured correctly. Also, we'll soon be unmapping
the guest any time the last assigned device is removed, so this will no
longer be a vm-destruction-only path. On that path we can just as easily
race against adding new mappings or removing already-removed ones. We
also acquire kvm->lock in the mapping path:

kvm_vm_ioctl_assign_device() {
	mutex_lock(&kvm->lock);
	if (!kvm->arch.iommu_domain) {
		r = kvm_iommu_map_guest(kvm);
Both paths take kvm->lock before slots_lock, which by inspection and the
lock ordering note in kvm_main seems to be ok. Thanks,

Alex


