Date: 2009-11-16
From: Jan Beulich
Subject: Re: [PATCH] x86: eliminate redundant/contradicting cache line size config options
>>> Nick Piggin <npiggin@suse.de> 16.11.09 05:14 >>>
>On Fri, Nov 13, 2009 at 11:54:40AM +0000, Jan Beulich wrote:
>> Rather than having X86_L1_CACHE_BYTES and X86_L1_CACHE_SHIFT (with
>> inconsistent defaults), just having the latter suffices as the former
>> can be easily calculated from it.
>>
>> To be consistent, also change X86_INTERNODE_CACHE_BYTES to
>> X86_INTERNODE_CACHE_SHIFT, and set it to 7 (128 bytes) for NUMA to
>> account for last level cache line size (which here matters more than
>> L1 cache line size).
>
>I think if we're going to set it to 7 (128B, for Pentium 4), then
>we should set the L1 cache shift as well? Most alignments to
>prevent cacheline pingpong use L1 cache shift for this anyway?

But for P4, L1_CACHE_SHIFT is already 7.
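
For completeness, the calculation the patch relies on is a one-liner;
a minimal sketch of what arch/x86/include/asm/cache.h boils down to
(exact surrounding context may differ):

	/* derive the byte value from the configured shift */
	#define L1_CACHE_SHIFT	(CONFIG_X86_L1_CACHE_SHIFT)
	#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)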

>The internode thing is really just a not quite well defined thing
>because internode cachelines are really expensive and really big
>on vsmp so they warrant trading off extra space on some critical
>structures to reduce pingpong (but this is not to say that other
>structures that are *not* internode annotated do *not* need to
>worry about pingpong).

The internode one, as said in the patch description, should account
for the last level cache line size rather than the L1 one, and 128
seems a much better fit for that (without introducing model
dependencies like for L1) than just using the L1 value directly.
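
The consumer that matters here is the internode alignment macro; a
sketch along the lines of the generic fallback in
include/linux/cache.h (assuming INTERNODE_CACHE_SHIFT defaults to
L1_CACHE_SHIFT unless the architecture overrides it):

	/* fall back to the L1 shift unless the arch overrides it */
	#ifndef INTERNODE_CACHE_SHIFT
	#define INTERNODE_CACHE_SHIFT	L1_CACHE_SHIFT
	#endif
	#define INTERNODE_CACHE_BYTES	(1 << INTERNODE_CACHE_SHIFT)

	/* pad/align to an internode cache line only on SMP builds */
	#if defined(CONFIG_SMP)
	#define ____cacheline_internodealigned_in_smp \
		__attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))
	#else
	#define ____cacheline_internodealigned_in_smp
	#endif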

Jan