    Subject: o_sync in vfat driver

    *** Is that aim being achieved by the current policy? ***

    As I understand it, the old (<=2.6.11) sync model kept the data in sync
    without updating the FAT until later. This runs the risk of partial
    corruption of one or more files on pullout.

    The new model attempts to be more rigorous by updating the FAT every time
    a block of data is written. Hence the "hammering" of the physical memory
    hosting the FAT record.
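
    To make the numbers concrete, here is a toy userspace sketch in C. The
    helper names are made up and the 4 KB cluster size is an assumption;
    this is not the driver code, just an illustration of how often each
    policy touches the FAT while one file is written:

        /* Toy model of the two policies; hypothetical stand-ins,
         * not the actual kernel functions. */
        #include <stdio.h>

        #define CLUSTER_SIZE 4096L          /* assumed cluster size */

        /* old (<= 2.6.11) model: data goes out now, FAT is flushed later */
        static long old_model_fat_writes(long nclusters)
        {
            (void)nclusters;                /* FAT update is deferred */
            return 1;                       /* one flush at the end */
        }

        /* new model: the on-disk FAT is updated for every block written */
        static long new_model_fat_writes(long nclusters)
        {
            return nclusters;
        }

        int main(void)
        {
            long file_bytes = 100L * 1024 * 1024;   /* one 100 MB file */
            long nclusters  = file_bytes / CLUSTER_SIZE;

            printf("clusters written:      %ld\n", nclusters);
            printf("FAT writes, old model: %ld\n",
                   old_model_fat_writes(nclusters));
            printf("FAT writes, new model: %ld\n",
                   new_model_fat_writes(nclusters));
            return 0;
        }

    Every one of those new-model writes lands on the same handful of
    physical sectors holding the FAT, which is the "hammering" in question.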

    In view of the nature of flash memory this may actually be drastically
    increasing the chance that the whole FAT gets erased: flash is erased a
    whole block at a time, so a FAT update interrupted mid-erase can blank
    every FAT entry sharing that erase block.

    If a pullout occurs during a write, there is now a near 50% chance that
    it takes out the entire FAT: since every data block written is now
    paired with a FAT update, roughly half of the writes in flight at any
    moment are hitting FAT sectors.

    It would seem that the main advantage of this scheme is that it is so slow
    that it encourages users to turn it off. Presumably, in the process of
    coming to that conclusion, they will become aware of the need to run umount
    or the sync command before removing the device.

    = Danger of destroying hardware =

    It seems that there are well documented cases of this abusive rewriting of
    the FAT causing rapid and total premature failure of what Alan Cox refers
    to as "ultra-crap devices".

    There may be valid reasons of cost or miniaturisation that preclude the
    additional hardware found in more complex devices.

    Even if better quality devices have some sort of paging mechanism
    which makes them more resistant to this sort of abuse, it does not seem
    good engineering practice to dismiss those that fail as "shite".

    There is nothing in the spec of VFAT that suggests the FAT will be written
    10,000 times during the writing of one large file. Indeed, it is hard to
    imagine that any other implementation on any other OS, or any previous
    Linux kernel, behaves like that.
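
    As a sanity check on that figure (my arithmetic, not anyone's
    measurements): assuming a 4 KB cluster and one FAT update per cluster
    allocated, a single 40 MB file already costs about 10,000 FAT writes,
    and with 2-byte FAT16 entries, 256 consecutive allocations land in the
    same 512-byte sector:

        #include <stdio.h>

        int main(void)
        {
            long file_bytes   = 40L * 1024 * 1024; /* hypothetical 40 MB file  */
            long cluster_size = 4096;              /* assumed cluster size     */
            long entry_size   = 2;                 /* FAT16: 2 bytes per entry */
            long sector_size  = 512;

            long fat_writes = file_bytes / cluster_size;  /* one per cluster */
            long entries_per_sector = sector_size / entry_size;

            printf("FAT writes for one file:        %ld\n", fat_writes);
            printf("consecutive hits on one sector: %ld\n",
                   entries_per_sector);
            return 0;
        }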

    So should the hardware manufacturers have anticipated this particular
    driver implementation, or should the kernel be more aware of the existing
    hardware that it purports to support?

    = The way forward =

    It would seem that the first step could be to revert to the 2.6.11
    behaviour, which was more appropriate and probably safer even from the
    data integrity point of view.

    I lack the knowledge and experience to produce reliable kernel code, so I
    won't try. However, I have already seen a number of suggestions as to how
    the old model could be improved. This post could be the starting point for
    a discussion of more robust techniques. In any case, the coding is unlikely
    to be very complex given the existing, tested code base that is in place
    in 2.6.11.

    Any new technique should probably aim to be applicable to larger devices
    as well. The 2G limit is artificial and is a tacit recognition of the
    precariousness of the current code. USB hard disks are just as prone to
    accidental cable pullout. Some periodic or per-file sync should probably
    be envisaged for the VFAT sync mount option, as sketched below.
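
    As a userspace illustration of the per-file variant (the path is made
    up, and a real implementation would live in the driver, not in an
    application): write all the data blocks first, then flush data and
    metadata, FAT included, once with fsync(2):

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/mnt/usbkey/example.dat"; /* hypothetical */
            char buf[4096];
            int fd, i;

            memset(buf, 0xAA, sizeof(buf));

            fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
            }

            /* write 4 MB without forcing the FAT out after each block */
            for (i = 0; i < 1024; i++) {
                if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
                    perror("write");
                    close(fd);
                    return EXIT_FAILURE;
                }
            }

            /* one flush of data + metadata (the FAT) for the whole file */
            if (fsync(fd) < 0)
                perror("fsync");

            close(fd);
            return 0;
        }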

    PS: if anyone can tell me why I had to post this ten times and chop it
    into little bits, it would be appreciated; apologies for messing up the
    list in the process.

    I spent an hour reading the FAQ and I don't see anything taboo here.

