Subject: Re: [ANNOUNCE] hotplug-ng 001 release
On Mon, 14 Feb 2005 18:04:00 -0500,
Lee Revell <rlrevell@joe-job.com> wrote:

> Last I heard Gentoo does not even do it by default.
>
> I don't see why so much effort goes into improving boot time on the
> kernel side when the most obvious user space problem is ignored.


There's stuff that could be done in the kernel to help improve those
numbers, IMHO.

XP logs all the I/O done during the first two minutes after booting. The
next time it boots it reads all those files in one go, so programs find
their data in memory instead of having to do lots of small seeks. Some
people in the Linux field have gathered a list of the files used at startup
and fed it to a "readahead" script, which seems to help, but IMHO it's
somewhat hacky compared with XP's trick. XP also does the same thing when
you start a program: it saves a log of all the I/O done and preloads it
efficiently the next time, which improves cold-cache loading times a _lot_.
I haven't seen any alternative for that in the Linux world, and being able
to track all the I/O done by a given process would fix that (some people
have used printk's for it, but I think it can be done better).
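
To make that concrete, here is a minimal sketch of such a preloader, in the
spirit of the Linux "readahead" scripts rather than XP's own mechanism. It
assumes a plain-text list of paths gathered at an earlier boot (the name
"boot-files.list" is made up) and uses posix_fadvise(POSIX_FADV_WILLNEED)
to ask the kernel to pull each file into the page cache ahead of time:

/*
 * Minimal sketch of a boot-time preloader, assuming a plain-text list
 * of paths (one per line) collected at an earlier boot; the file name
 * "boot-files.list" is made up.  It hints the kernel to pull each file
 * into the page cache so later opens find the data in memory instead
 * of doing lots of small seeks.
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *list = argc > 1 ? argv[1] : "boot-files.list";
	FILE *f = fopen(list, "r");
	char path[4096];

	if (!f) {
		perror(list);
		return 1;
	}

	while (fgets(path, sizeof(path), f)) {
		struct stat st;
		int fd;

		path[strcspn(path, "\n")] = '\0';	/* strip newline */
		fd = open(path, O_RDONLY);
		if (fd < 0)
			continue;			/* missing file, skip it */
		if (fstat(fd, &st) == 0)
			/* "we will need all of this soon" */
			posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);
		close(fd);
	}
	fclose(f);
	return 0;
}

You'd run it early in the boot scripts, once such a list has been collected
by whatever I/O-tracking method is available.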

It also analyzes all those I/O logs and defragments the disk according to
how the system is actually _used_ (in the background every three days, at
low load, without the user noticing). The Linux kernel can keep a single
file unfragmented, but currently there's no way Linux can make decisions
like "this system starts OpenOffice, so I'm going to move the binaries to a
part of the disk where they'll load faster" or "when program X uses
/lib/libfoo.so it also uses /lib/libbar.so, so I'm going to put those two
next to each other on disk to avoid seeks". The kernel can only keep each
individual file unfragmented; it knows nothing about how several files
should be laid out relative to one another. Being able to relocate files
like that seems to be the one fix we're missing (even Mac OS X does it).
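
As a rough way to see the layout problem from userspace, the sketch below
prints the physical block at which each file starts, using the FIBMAP
ioctl (needs root, and a filesystem that implements it); the library paths
above are just placeholders:

/*
 * Rough sketch, assumes root (FIBMAP needs raw I/O capability) and a
 * filesystem that implements the FIBMAP ioctl.  It prints the physical
 * block where each file starts, so you can see that two files which
 * always load together may still sit far apart on the disk even though
 * each one is unfragmented on its own.
 */
#include <fcntl.h>
#include <linux/fs.h>		/* FIBMAP */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static long first_block(const char *path)
{
	int block = 0;		/* logical block 0 in, physical block out */
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	if (ioctl(fd, FIBMAP, &block) < 0)
		block = -1;
	close(fd);
	return block;
}

int main(int argc, char **argv)
{
	int i;

	for (i = 1; i < argc; i++)
		printf("%s starts at physical block %ld\n",
		       argv[i], first_block(argv[i]));
	return 0;
}

Run as root, e.g. on /lib/libfoo.so and /lib/libbar.so, it shows how far
apart two files that always load together can end up, which is where the
extra seeks come from.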

Userspace is where the problem is, but it's not going to be fixed. Ever.
If anything, it's going to get worse; that's how software works. And even
if you made OpenOffice "fast", you could still _improve_ things with the
tricks described above. Disks are too slow, things like demand-loading
executables generate too many small seeks, and programs can't control
demand loading, so I don't think userspace is the only place with work to
do.
