Subject: Re: [PATCH 2/3] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range
On 05/02/2012 05:38 PM, Borislav Petkov wrote:

> On Wed, May 02, 2012 at 05:24:09PM +0800, Alex Shi wrote:
>> For some scenarios, the above equation can be modified as:
>> (512 - X) * 100ns (assumed TLB refill cost) = X * 140ns (assumed invlpg cost)
>>
>> When the thread number is less than the CPU number, the balance point can
>> rise to 1/2 of the TLB entries.
>>
>> When the thread number equals the CPU number (with HT), the balance point
>> is 1/16 of the TLB entries on our SNB EP machine and 1/32 on the NHM EP
>> machine. So FLUSHALL_BAR needs to be changed to 32.
>
> Are you saying you want to have this setting per family?


Setting it according to CPU type would be more precise, but it looks ugly. I
am wondering whether it is worth doing. Maybe a conservative selection is
acceptable?
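
(For reference, working the quoted equation through: (512 - X) * 100 = X * 140
gives X = 51200 / 240, about 213 of 512 entries, i.e. roughly 42%, in line
with the "up to 1/2" figure above.)

To illustrate what a per-family setting could look like, here is a minimal
sketch in plain C. The model names, the 512-entry TLB size, and the helper
names are only assumptions taken from the numbers above, not the actual patch
code:

/* Sketch only, not the patch: pick invlpg-vs-full-flush from a
 * per-family FLUSHALL_BAR-style divisor. */
#include <stdbool.h>
#include <stdio.h>

enum cpu_model { NHM_EP, SNB_EP, UNKNOWN };

/* Balance points quoted above: 1/32 of entries on NHM EP, 1/16 on SNB EP. */
static unsigned int flushall_bar(enum cpu_model m)
{
	switch (m) {
	case NHM_EP:
		return 32;
	case SNB_EP:
		return 16;
	default:
		return 32;	/* conservative: keep the smaller threshold */
	}
}

/* Full flush once the range covers more than tlb_entries / bar pages. */
static bool use_full_flush(unsigned int pages, unsigned int tlb_entries,
			   enum cpu_model m)
{
	return pages > tlb_entries / flushall_bar(m);
}

int main(void)
{
	/* With 512 entries on SNB EP the cutover sits at 512/16 = 32 pages. */
	printf("33 pages on SNB EP: %s\n",
	       use_full_flush(33, 512, SNB_EP) ? "flush all" : "invlpg loop");
	return 0;
}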

>

> Also, have you run your patches with other benchmarks beside your
> microbenchmark, say kernbench, SPEC<something>, i.e. some other
> multithreaded benchmark touching shared memory? Are you seeing any
> improvement there?


I tested OLTP read workloads and SPECjbb2005 with OpenJDK. They should not
generate many flush_tlb_range calls, so there was no clear improvement.
Do you know of benchmarks that trigger flush_tlb_range frequently enough?
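
One thing I could try (an assumption on my part, not something measured yet):
a loop that mprotect()s a multi-page mapping, since changing protections on a
range goes through flush_tlb_range. A minimal single-threaded sketch, with the
page count and iteration count picked arbitrarily:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t pages = 64;		/* range size; worth sweeping around the bar */
	size_t len = pages * (size_t)page;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 1, len);		/* fault all pages in first */

	for (int i = 0; i < 100000; i++) {
		/* each protection change should hit flush_tlb_range */
		if (mprotect(buf, len, PROT_READ) ||
		    mprotect(buf, len, PROT_READ | PROT_WRITE)) {
			perror("mprotect");
			return 1;
		}
		buf[0] = (char)i;	/* touch so the TLB has to refill */
	}
	munmap(buf, len);
	return 0;
}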

>
>> When the thread number is bigger than the CPU number, context switches eat
>> all of the improvement; memory access latency is the same as on an
>> unpatched kernel.
>
> Also, how do you know in the kernel that the thread number is the number
> of all threads touching this shared mmapped region - there could be
> unrelated threads doing something else.


I believe we don't need to know this; a much larger thread number just
weakens and hides the improvement. When the thread number goes down, the
performance gain appears. So we don't need to care about this.

Any more comments on this patchset?

>
> Thanks.
>



