    Subject: Re: [patch] hashtable sizes for icache and dcache
    On Thu, 23 Sep 1999, Ingo Molnar wrote:
    > Historically, when the dcache/inode/buffer cache/page cache/TCP/etc. hashes
    > were developed, only one hash size was picked, which was (hopefully) the
    > optimum for the memory size the original developer used. This worked
    > pretty well for the 4M-32M RAM range. In the last two years or so RAM
    > prices finally started to converge to their production costs, and bigger
    > RAM boxes started to appear. The symptom (which I think Leonard Zubkoff
    > or Doug Ledford noticed more than a year ago) is that on big-memory boxes
    > (512M, 1G, 2G RAM) we waste many cycles on cache misses and list walking.
    >
    > The wrong solution: change HASH_BITS ad hoc for one specific (big) memory
    > size. This slows down the 4M-32M RAM range (both by wasting memory and by
    > having less cache locality in that huge hashtable). Those small-memory
    > boxes are very important to Linux as well.
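
    (For context, the fixed-size pattern described above looks roughly like the
    sketch below. The names and constants are illustrative only, not the exact
    identifiers from the 2.2 dcache or icache code.)

    #include <stddef.h>

    struct entry {
            struct entry *next;
            unsigned long key;
    };

    #define HASH_BITS  10                  /* one compile-time size for every box */
    #define HASH_SIZE  (1UL << HASH_BITS)
    #define HASH_MASK  (HASH_SIZE - 1)

    static struct entry *hash_table[HASH_SIZE];

    /* With a fixed 1024-bucket table, a 2G box caching hundreds of thousands
     * of entries gets chains hundreds of entries long: each lookup walks a
     * long list and takes a cache miss on nearly every node it touches. */
    static struct entry *hash_lookup(unsigned long key)
    {
            struct entry *e;

            for (e = hash_table[key & HASH_MASK]; e != NULL; e = e->next)
                    if (e->key == key)
                            return e;
            return NULL;
    }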

    I agree that a single fixed size was never right in any of these
    hashtables, for the reasons you mention. However, at the risk of having my
    face stepped on, I'd like to point out that in early 2.3 kernels the page
    cache hashtable size was increased from 11 to 15 bits before it was made
    dynamically sized in a later kernel version. I believe that change in fact
    came from Ingo's write-through page cache patch?

    My only interest in pointing this out is that I feel some of us are being
    unfair. To quote: "do I have to stop this car?"

    > This is David's (Chuck's) patch. Actually it turns out that
    > hashsize-heuristics are very subsystem-dependent, so David's patch does
    > hash sizing in every place differently, but we still needed some central
    > patch that does it in the same style in every important place, and the
    > few remaining places can now tune up to this methodology. Actually,
    > providing such a complete patch is much harder than it looks and needs a
    > broad understanding of all subsystems and careful tuning; that is
    > probably the reason it took a year or so for someone to take up the
    > issue :)

    I'm genuinely curious: can you give some examples of subsystem-specific
    tuning issues? The big difference I can see between these different
    hash-sizing algorithms is the factor by which to multiply physical memory
    size. Isn't that a reasonable place to start, and wouldn't it be a
    significant improvement over the current fixed-size hash tables? That is, I
    think it would be a step in the right direction to include a perhaps
    simple-minded dynamic hash-size patch, to get an idea of what more needs to
    be done in each case and what can be generalized.
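
    To make the "simple-minded" approach concrete, here is a rough sketch of
    the kind of sizing I have in mind: scale the bucket count with physical
    memory, round down to a power of two, and clamp the result. The names
    (simple_hash_order, pages_of_ram, pages_per_bucket) and the clamp values
    are made up for illustration; the per-subsystem ratio is exactly the part
    that would still need tuning.

    /* Rough sketch, not the actual patch: pick a hash table order that is
     * proportional to physical memory.  pages_of_ram would come from the
     * architecture code (e.g. num_physpages); pages_per_bucket (assumed
     * nonzero) is the subsystem-specific factor discussed above. */
    static unsigned int simple_hash_order(unsigned long pages_of_ram,
                                          unsigned long pages_per_bucket)
    {
            unsigned long buckets = pages_of_ram / pages_per_bucket;
            unsigned int order = 0;

            /* Round down to a power of two so lookups can mask, not modulo. */
            while ((2UL << order) <= buckets)
                    order++;

            /* Clamp: small boxes keep a modest table, big boxes don't burn
             * megabytes on buckets. */
            if (order < 10)
                    order = 10;
            if (order > 20)
                    order = 20;
            return order;
    }

    The caller would then allocate 1UL << order bucket pointers (on a 1G box
    with one bucket per four pages that works out to order 16, i.e. 64K
    buckets) and index with hash & ((1UL << order) - 1). Each subsystem could
    still override pages_per_bucket, which seems to be where the real tuning
    disagreements lie.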

    - Chuck Lever
    --
    corporate: <chuckl@netscape.com>
    personal: <chucklever@netscape.net> or <cel@monkey.org>

    The Linux Scalability project:
    http://www.citi.umich.edu/projects/linux-scalability/


