Subject: Re: Update cacheline size on X86_GENERIC
On Sunday 12 October 2008 01:01, Andi Kleen wrote:
> > No immediate ideas. Jens probably is a good person to cc. With direct IO
> > workloads, hd_struct should mostly only be touched in partition remapping
> > and IO accounting.
>
> I found it doubtful whether grouping the rw and ro members together was a
> good idea.

If all the members touched in the fast path fit into one cacheline, then it
definitely is. If you put the rw members in a separate cacheline, that line
is still going to bounce just the same, but now you're also taking up one
more cacheline for the ro members.


> > At this point, you would want to cacheline align hd_struct. So if you
>
> The problem is probably not false sharing, but simply cache misses because
> it's so big. I think.

If you line up all the commonly touched members along cacheline boundaries,
then presumably you have to assume the start of the struct has e.g. cacheline
alignment. There are situations where you can cut the cacheline footprint of
a given data structure almost in half by aligning the items (e.g. if we align
struct page to 64 bytes, then a random access into the mem_map array touches
almost half as many cachelines, because an element no longer straddles a line
boundary). I've always been interested in whether "oltp" benefits from that
(e.g. define WANT_PAGE_VIRTUAL for x86-64, which pads struct page out to 64
bytes).

