Subject: Re: [PATCH v2 19/22] KVM: MMU: lockless walking shadow page table
On 06/29/2011 07:18 PM, Avi Kivity wrote:
> On 06/29/2011 02:16 PM, Xiao Guangrong wrote:
>> >> @@ -1767,6 +1874,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
>> >>
>> >> kvm_flush_remote_tlbs(kvm);
>> >>
>> >> + if (atomic_read(&kvm->arch.reader_counter)) {
>> >> + kvm_mmu_isolate_pages(invalid_list);
>> >> + sp = list_first_entry(invalid_list, struct kvm_mmu_page, link);
>> >> + list_del_init(invalid_list);
>> >> + call_rcu(&sp->rcu, free_pages_rcu);
>> >> + return;
>> >> + }
>> >> +
>> >
>> > I think we should do this unconditionally. The cost of ping-ponging the shared cache line containing reader_counter will increase with large SMP counts. On the other hand, zap_page is very rare, so it can be a little slower. Also, fewer code paths = easier to understand.
>> >
>>
>> With the soft MMU, zap_page happens very frequently; it caused a performance regression in my test.
>
> Any idea what the cause of the regression is? It seems to me that simply deferring freeing shouldn't have a large impact.
>

I guess it is because pages are freed too frequently. I have done the test; it shows
about 3219 pages are freed per second.
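For reference, the reader side that reader_counter pairs with would look roughly
like the sketch below. This is my reading of the scheme, not the patch itself;
walk_shadow_page_get_spte() is a hypothetical helper standing in for the actual walk:

static u64 lockless_walk_shadow_page(struct kvm_vcpu *vcpu, u64 addr)
{
	u64 spte;

	rcu_read_lock();
	atomic_inc(&vcpu->kvm->arch.reader_counter);
	/* Make the increment visible before the walk begins. */
	smp_mb();

	spte = walk_shadow_page_get_spte(vcpu, addr);	/* hypothetical walk */

	atomic_dec(&vcpu->kvm->arch.reader_counter);
	rcu_read_unlock();

	return spte;
}

This is why kvm_mmu_commit_zap_page() only needs to defer to RCU when the counter
is non-zero: a walker always runs inside an RCU read-side critical section, so
call_rcu() guarantees the zapped pages outlive any concurrent walk.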

Kernbench performance comparison:

the original way: 3m27.723
freeing all shadow pages in RCU context: 3m30.519 (roughly a 1.3% slowdown)
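To be clear about what "freeing all shadow pages in RCU context" means here: after
kvm_mmu_isolate_pages(), the pages stay chained to one another through their link
fields, and a single call_rcu() frees the whole batch once the grace period ends.
A rough sketch of such a callback (an assumption about its shape, not quoted from
the patch; kvm_mmu_free_page() is a hypothetical per-page free helper):

static void free_pages_rcu(struct rcu_head *head)
{
	struct kvm_mmu_page *next, *sp;

	sp = container_of(head, struct kvm_mmu_page, rcu);
	while (sp) {
		/* With the list head removed, the pages form a circular
		 * chain; a page linked only to itself is the last one. */
		if (!list_empty(&sp->link))
			next = list_first_entry(&sp->link,
						struct kvm_mmu_page, link);
		else
			next = NULL;
		kvm_mmu_free_page(sp);	/* hypothetical free helper */
		sp = next;
	}
}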

