Subject: Re: atomicity
> 	open target file for writing
> 	while target file not fully written
> 		write until error
> 		delete one of the small files at random
> 	close target file
> 	delete all of the small random files that remain
>
> Are there any file systems around that will manage to resist fragmentation
> if subjected to that?

ext2fs will quite happily handle that situation (in fact it's not an atypical
pattern of I/O on a big multiuser box - consider someone doing a download
as another user does an rm -r).
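
For concreteness, the quoted workload could be reproduced in userspace with
something like the sketch below. All of the names, counts, and sizes are
invented for illustration; they are not from the original post.

/*
 * Rough reproduction of the quoted workload: fill the disk with small
 * files, then stream out one big file, deleting a small file at random
 * whenever a write fails.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NSMALL		1000		/* small filler files		*/
#define CHUNK		(64 * 1024)	/* write size per iteration	*/
#define TARGET_CHUNKS	4096		/* ~256MB target file		*/

int main(void)
{
	static char buf[CHUNK];
	char name[64];
	int i, fd;

	memset(buf, 0xAA, sizeof(buf));

	/* create the small filler files, one block's worth each */
	for (i = 0; i < NSMALL; i++) {
		snprintf(name, sizeof(name), "small.%d", i);
		fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
		if (fd >= 0) {
			write(fd, buf, 4096);
			close(fd);
		}
	}

	/* stream the big target file, freeing space as writes fail */
	fd = open("target", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	for (i = 0; fd >= 0 && i < TARGET_CHUNKS; i++) {
		if (write(fd, buf, sizeof(buf)) < 0) {
			/* delete one of the small files at random */
			snprintf(name, sizeof(name), "small.%d",
				 rand() % NSMALL);
			unlink(name);
		}
	}
	if (fd >= 0)
		close(fd);

	/* delete all of the small files that remain */
	for (i = 0; i < NSMALL; i++) {
		snprintf(name, sizeof(name), "small.%d", i);
		unlink(name);
	}
	return 0;
}

The interesting question is where the blocks freed by the unlinks end up
relative to the blocks of the growing target file.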

ext2fs tries to grab linear chunks of disk, and divides the disk into
cylinder groups, which also helps maintain locality. The BSD FFS papers
[McKusick et al] describe this sort of stuff well.
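
As a very rough illustration of the policy (a toy model only - the real
ext2 balloc code is more involved, and the group sizes and names below
are made up):

#include <stdio.h>

#define NGROUPS			16
#define BLOCKS_PER_GROUP	8192

/* 1 = block in use; the whole toy "disk" starts out free */
static unsigned char bitmap[NGROUPS][BLOCKS_PER_GROUP];

static long alloc_block(long goal)
{
	long grp = goal / BLOCKS_PER_GROUP;
	long off = goal % BLOCKS_PER_GROUP;
	long g, b;

	/* 1: the goal block itself - keeps a growing file linear */
	if (!bitmap[grp][off]) {
		bitmap[grp][off] = 1;
		return goal;
	}

	/* 2: any free block in the same group - preserves locality */
	for (b = 0; b < BLOCKS_PER_GROUP; b++) {
		if (!bitmap[grp][b]) {
			bitmap[grp][b] = 1;
			return grp * BLOCKS_PER_GROUP + b;
		}
	}

	/* 3: only then spill into the other groups */
	for (g = 0; g < NGROUPS; g++) {
		for (b = 0; b < BLOCKS_PER_GROUP; b++) {
			if (!bitmap[g][b]) {
				bitmap[g][b] = 1;
				return g * BLOCKS_PER_GROUP + b;
			}
		}
	}
	return -1;	/* disk full */
}

int main(void)
{
	long block = 0;
	int i;

	/* a file that grows block by block comes out contiguous */
	for (i = 0; i < 4; i++) {
		block = alloc_block(block + 1);
		printf("allocated block %ld\n", block);
	}
	return 0;
}

The real allocator also tries to keep a file's data blocks in the same
group as its inode, which is what gives the download / rm -r mix above
its locality.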

