    Subject: Re: [PATCH 2/3] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range
    Date: 2012-05-02 15:41
    On 2 May 2012 21:38, Alex Shi <alex.shi@intel.com> wrote:
    > On 05/02/2012 05:38 PM, Borislav Petkov wrote:
    >
    >> On Wed, May 02, 2012 at 05:24:09PM +0800, Alex Shi wrote:
    >>> For some scenarios, the above equation can be modified as:
    >>> (512 - X) * 100ns (assumed TLB refill cost) = X * 140ns (assumed invlpg cost)

    It should not be that optimistic, because that equation assumes every
    unflushed entry saves a TLB refill too.
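
    For reference, solving the quoted equation for the break-even point
    is simple arithmetic; below is a standalone C sketch using only the
    assumed costs from above (nothing here is the patch's actual code):

    #include <stdio.h>

    int main(void)
    {
            const double tlb_entries = 512.0;   /* assumed TLB size */
            const double refill_ns = 100.0;     /* assumed TLB refill cost */
            const double invlpg_ns = 140.0;     /* assumed invlpg cost */

            /* (tlb_entries - x) * refill_ns == x * invlpg_ns
             *   =>  x = tlb_entries * refill_ns / (refill_ns + invlpg_ns)
             */
            double x = tlb_entries * refill_ns / (refill_ns + invlpg_ns);

            printf("break-even: ~%.0f pages\n", x);   /* prints ~213 */
            return 0;
    }

    So under those assumed costs, flushing one page at a time only pays
    off for ranges of up to roughly 213 entries.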

    I think it is always a good idea to make such fundamental primitives
    cheaper though.


    >> Also, have you run your patches with other benchmarks besides your
    >> microbenchmark, say kernbench, SPEC<something>, i.e. some other
    >> multithreaded benchmark touching shared memory? Are you seeing any
    >> improvement there?
    >
    >
    > I tested OLTP reads and specjbb2005 with OpenJDK. They should not do
    > much flush_tlb_range calling, so there is no clear improvement.
    > Do you know of benchmarks that trigger enough flush_tlb_range calls?

    x86 does not do such invlpg flushing for munmap either, as far as I
    can see?

    It would be a little more work to make this happen, but it might show
    more benefit: provided glibc does not free huge chunks at once, it
    should apply far more often.
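
    For context, the heuristic under discussion has roughly the following
    shape (a standalone sketch with stubbed-out primitives; the names
    flush_range_sketch and FLUSHALL_THRESHOLD are made up here, and the
    threshold value comes from the break-even arithmetic above, not from
    the patch itself):

    #include <stdio.h>

    #define PAGE_SHIFT          12
    #define PAGE_SIZE           (1UL << PAGE_SHIFT)
    #define FLUSHALL_THRESHOLD  213   /* pages; assumed break-even point */

    /* Stubs standing in for the real TLB primitives. */
    static void local_flush_tlb(void) { puts("full TLB flush"); }
    static void flush_tlb_single(unsigned long addr) { printf("invlpg %#lx\n", addr); }

    /* Flush small ranges one page at a time with invlpg; fall back to a
     * full flush once the range exceeds the break-even point. */
    static void flush_range_sketch(unsigned long start, unsigned long end)
    {
            unsigned long addr;

            if (((end - start) >> PAGE_SHIFT) > FLUSHALL_THRESHOLD) {
                    local_flush_tlb();
                    return;
            }
            for (addr = start; addr < end; addr += PAGE_SIZE)
                    flush_tlb_single(addr);
    }

    int main(void)
    {
            flush_range_sketch(0x400000, 0x400000 + 4 * PAGE_SIZE);   /* 4 invlpgs */
            return 0;
    }

    Hooking the same decision into the munmap path would let small frees
    keep the rest of the TLB warm instead of flushing it wholesale.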

