    Subject: Re: Swap Compression

    *********** REPLY SEPARATOR ***********

    On 4/28/2003 at 6:25 PM Timothy Miller wrote:

    >rmoser wrote:
    >>So what's the best way to do this? I was originally thinking like this:
    >>Grab some swap data
    >>Stuff it into fcomp_push()
    >>When you have 100k of data, seal it up
    >>Write that 100k block
    >>But does swap compress in 64k blocks? 80x86 has 64k segments.
    >>The calculation for compression performance loss, 256/size_of_input,
    >>gives a loss of 0.003906 (0.3906%) for a dataset 65536 bytes long.
    >>So would it be better to just compress segments, whatever size they
    >>may be, and index those? This would, of course, be much more efficient
    >>in terms of finding the data to uncompress. (And system dependent)
    >We're not using DOS here. x86 has arbitrarily-sized segments. It is
    >the PAGES you want to swap, and they are typically 4k.
    >BTW, I see swap compression as being more valuable for overloaded
    >servers than for PDAs. Yes, I see the advantage for a PDA, but you
    >only get any significant benefit out of this if you have the horsepower
    >and RAM required for doing heavy-weight compression. If you can't
    >compress a page very much, you don't benefit much from swapping it to RAM.
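
    (For reference, a minimal sketch of the push/seal/write flow quoted
    above. fcomp_push() and the other helpers are placeholders -- their
    real signatures aren't in this thread -- and the numbers follow the
    mail: 4k pages, ~100k output blocks, 256 bytes of per-block overhead.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SWAP_PAGE_SIZE   4096u          /* pages are swapped, typically 4k */
    #define FCOMP_BLOCK_SIZE (100u * 1024u) /* seal and write once ~100k fills */
    #define FCOMP_OVERHEAD   256u           /* fixed header bytes per block    */

    struct fcomp_block {
        uint8_t data[FCOMP_BLOCK_SIZE];
        size_t  used;
    };

    /* Stand-in for the real compressor: it just copies the page, so the
     * "compressed" size equals the input size here. */
    static size_t fcomp_push(struct fcomp_block *b, const uint8_t *page, size_t len)
    {
        memcpy(b->data + b->used, page, len);
        b->used += len;
        return len;
    }

    static void swap_write_block(const struct fcomp_block *b)
    {
        /* in the real thing this would go to the swap device */
        printf("writing sealed block, %zu bytes used\n", b->used);
    }

    /* relative overhead of the fixed header: 256 / 65536 = 0.0039 (0.39%)
     * for a 64k input, matching the figure quoted above */
    static double fcomp_overhead(size_t input_size)
    {
        return (double)FCOMP_OVERHEAD / (double)input_size;
    }

    static void swap_out_page(struct fcomp_block *b, const uint8_t *page)
    {
        fcomp_push(b, page, SWAP_PAGE_SIZE);

        /* once another page would no longer fit, seal and write the block */
        if (b->used + SWAP_PAGE_SIZE > FCOMP_BLOCK_SIZE) {
            swap_write_block(b);
            b->used = 0;
        }
    }

    int main(void)
    {
        static struct fcomp_block blk;
        static uint8_t page[SWAP_PAGE_SIZE];

        for (int i = 0; i < 64; i++)
            swap_out_page(&blk, page);

        printf("overhead for 64k input: %.4f%%\n", 100.0 * fcomp_overhead(65536));
        return 0;
    }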

    Feh. Selectable compression is the eventual goal. If you want serious
    heavy-weight compression, go with fcomp2. Ahh heck, I'll attach the
    .plan file for fcomp2; it is ridiculously RAM- and CPU-intensive in a
    good implementation, so don't even think for a second that you'll want
    to use this on your CPU-overloaded boxen.
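
    (To make "selectable compression" concrete, a hedged sketch of what the
    selection could look like: a table of compressor back-ends, with the
    light one as the default and fcomp2 only picked when the box has the
    CPU and RAM to spare. The compressor bodies are placeholders; the real
    fcomp/fcomp2 code isn't shown here.)

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef size_t (*compress_fn)(uint8_t *dst, const uint8_t *src, size_t len);

    /* placeholder "fcomp": cheap, modest ratio */
    static size_t fcomp_compress(uint8_t *dst, const uint8_t *src, size_t len)
    {
        memcpy(dst, src, len);   /* stand-in for the real light algorithm */
        return len;
    }

    /* placeholder "fcomp2": RAM- and CPU-hungry, better ratio */
    static size_t fcomp2_compress(uint8_t *dst, const uint8_t *src, size_t len)
    {
        memcpy(dst, src, len);   /* stand-in for the real heavy algorithm */
        return len;
    }

    struct swap_compressor {
        const char *name;
        compress_fn compress;
    };

    static const struct swap_compressor compressors[] = {
        { "fcomp",  fcomp_compress  },   /* default: PDAs, loaded servers */
        { "fcomp2", fcomp2_compress },   /* opt-in: plenty of CPU and RAM */
    };

    /* pick a compressor by name, e.g. from a swapon option or sysctl */
    static const struct swap_compressor *select_compressor(const char *name)
    {
        size_t i;
        for (i = 0; i < sizeof(compressors) / sizeof(compressors[0]); i++)
            if (strcmp(compressors[i].name, name) == 0)
                return &compressors[i];
        return &compressors[0];          /* fall back to the light one */
    }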

    And no implementing this right away; it's too much to code!

    --Bluefox Icy
    [attachment: application/octet-stream]