Subject: Re: [PATCH 2/6] KVM MMU: fix kvm_mmu_zap_page() and its calling path


Avi Kivity wrote:

>
>>  	kvm->arch.n_free_mmu_pages = 0;
>> @@ -1589,7 +1589,8 @@ static void mmu_unshadow(struct kvm *kvm, gfn_t gfn)
>>  		    && !sp->role.invalid) {
>>  			pgprintk("%s: zap %lx %x\n",
>>  				 __func__, gfn, sp->role.word);
>> -			kvm_mmu_zap_page(kvm, sp);
>> +			if (kvm_mmu_zap_page(kvm, sp))
>> +				nn = bucket->first;
>>  		}
>>  	}
>>
>
> I don't understand why this is needed.

Here is the relevant code segment in mmu_unshadow():

|	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
|		if (sp->gfn == gfn && !sp->role.direct
|		    && !sp->role.invalid) {
|			pgprintk("%s: zap %lx %x\n",
|				 __func__, gfn, sp->role.word);
|			kvm_mmu_zap_page(kvm, sp);
|		}
|	}
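
For reference, hlist_for_each_entry_safe() caches the next node in nn
before the loop body runs, which is why freeing that node is fatal; in
kernels of this era it is defined roughly as:

|#define hlist_for_each_entry_safe(tpos, pos, n, head, member)		 \
|	for (pos = (head)->first;					 \
|	     pos && ({ n = pos->next; 1; }) &&				 \
|		({ tpos = hlist_entry(pos, typeof(*tpos), member); 1;}); \
|	     pos = n)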

In the loop, if the node that nn points to is zapped,
hlist_for_each_entry_safe() will access freed memory and crash. The same
case is already checked in other functions, such as kvm_mmu_zap_all() and
kvm_mmu_unprotect_page().
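
For clarity, here is how the loop looks with the hunk above applied (a
sketch merging the patch with the current code; the comment reflects the
reading that a nonzero return from kvm_mmu_zap_page() means pages other
than sp were zapped too):

	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
		if (sp->gfn == gfn && !sp->role.direct
		    && !sp->role.invalid) {
			pgprintk("%s: zap %lx %x\n",
				 __func__, gfn, sp->role.word);
			/*
			 * Zapping sp may free other pages as well,
			 * possibly including the cached next node nn,
			 * so restart the bucket scan whenever
			 * kvm_mmu_zap_page() reports extra pages gone.
			 */
			if (kvm_mmu_zap_page(kvm, sp))
				nn = bucket->first;
		}
	}

This mirrors the restart pattern kvm_mmu_unprotect_page() already uses.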

Thanks,
Xiao


