    Subject: Re: objrmap and vmtruncate
    >> Using nonlinear mappings adds the overhead of pte chains, which roughly
    >> doubles the pagetable overhead (or companion pagetables, which triple the
    >> pagetable overhead). Purely RAM-wise the break-even point is at around
    >> 8 pages: 8 pte chain entries make up for 64 bytes of vma overhead.
    >> The biggest problem I can see is that we (well, the kernel) have to make a
    >> judgement of RAM footprint vs. algorithmic overhead, which is apples to
    >> oranges. Nonlinear vmas [or just linear vmas with pte chains installed],
    >> while being only O(N), double/triple the pagetable overhead. objrmap
    >> linear vmas, while having only the pagetable overhead, are O(N^2). [well,
    >> it's O(N*M)]
    >> RAM-footprint-wise the boundary is clear: above 8 pages of granularity,
    >> vmas with objrmap cost less RAM than nonlinear mappings.
    >> CPU-time-wise the nonlinear mappings with pte chains always beat objrmap.
    >
    > There's definitely an argument brewing here. Large 32-bit is very space
    > conscious; the rest of the world is largely oblivious to these specific
    > forms of space consumption aside from those tight on space in general.
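
    To put numbers on the quoted break-even point, a back-of-the-envelope
    calculation (using the sizes assumed above - roughly 8 bytes per pte chain
    entry and 64 bytes per vma; exact struct sizes vary with kernel version
    and config):

        pte chain overhead:  ~8 bytes per mapped page
        vma overhead:       ~64 bytes per mapping

        break-even:  64 bytes / 8 bytes per page = 8 pages

    Below ~8 pages per mapping the per-page pte chain cost stays under the
    one-off vma cost; above it, a linear vma with objrmap is the cheaper of
    the two RAM-wise.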

    However, the time consumption affects everybody. The overhead of pte-chains
    is very significant ... people seem to be conveniently forgetting that for
    some reason. Ingo's rmap_pages thing solves the lowmem space problem, but
    the time problem is still there, if not worse.

    Please don't create the impression that rmap methodologies are only an
    issue for large 32-bit machines - that's not true at all.

    People seem to be focused on one corner case of performance for objrmap ...
    If you want a countercase for pte-chain based rmap, try creating 1000
    processes on a machine with a decent amount of RAM. Make them share
    libraries (libc, etc.), and then fork and exit in a FIFO rolling fashion.
    Just forking off a bunch of stuff (regular programs / shell scripts) that
    does similar amounts of work will presumably approximate this. Kernel
    compiles see large benefits here, for instance. Things that are less
    dominated by userspace calculations would see even bigger changes.
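
    Something along these lines would be a rough way to drive that workload -
    a userspace sketch only, not an existing benchmark, and the NCHILD/NFORKS
    values and the child's bit of work are made-up parameters:

    /*
     * Keep a rolling FIFO of NCHILD live children: reap the oldest, fork a
     * new one, so fork/exit churn stays constant while libc stays shared.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NCHILD  1000      /* size of the rolling window of live children */
    #define NFORKS  100000    /* total forks over the run                    */

    int main(void)
    {
            pid_t fifo[NCHILD];
            int head = 0, live = 0;

            for (long i = 0; i < NFORKS; i++) {
                    if (live == NCHILD) {
                            /* FIFO: reap the oldest child first */
                            waitpid(fifo[head], NULL, 0);
                            head = (head + 1) % NCHILD;
                            live--;
                    }

                    pid_t pid = fork();
                    if (pid < 0) {
                            perror("fork");
                            exit(1);
                    }
                    if (pid == 0) {
                            /* child: a token amount of libc work, then exit */
                            char buf[64];
                            snprintf(buf, sizeof(buf), "child %ld", i);
                            usleep(1000);
                            _exit(0);
                    }
                    fifo[(head + live) % NCHILD] = pid;
                    live++;
            }

            while (live--) {
                    waitpid(fifo[head], NULL, 0);
                    head = (head + 1) % NCHILD;
            }
            return 0;
    }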

    I've not yet seen anything but a focused microbenchmark, deliberately
    written for the job, do better on pte-chain based rmap than on partial
    objrmap. If we had something more realistic, it would become rather more
    interesting.

    M.
