Subject: Re: Raw devices (Was: Re: NTFS, FAT32, etc.)
    On Thu, 8 May 1997, Martin von Loewis wrote:

    > > O_SYNC gives synchronous writes but using O_SYNC is not the same
    > > as having raw devices. Raw device writes skip the buffer cache
    > > but may or may not be completed before return depending whether
    > > the kernel uses an intermediate buffer or not.
    > Now it's getting weird. So you are saying that the kernel might have
    > an extra internal buffer, which is not counted as buffer cache for
    > raw devices (this sounds like a terminology issue: any internal buffer
    > of the kernel for writing could be considered a 'cache').
    > Anyway, so raw devices do *not* guarantee IO finalization, which means
    > that in case of a crash, data written to a raw device might get lost?
    > This does not sound like a very useful device...
    > Thanks,
    > Martin

    If I recall properly, this thread started because some database
    writers think that you need to have a physical write to the
    storage device in order to maintain "Database concurrency". This
    premise is flawed.

    A write to the physical hardware does not guarantee that the
    data is on a disk platter. Further, reading from the
    hardware does not imply that the data was actually read from
    the disk platter. This is because nearly all high-quality,
    high-speed disk drives buffer data internally. The extreme
    example is a RAID array, where the data may never be written
    to a physical disk until a power failure is detected.

    There is no essential difference between data buffered in
    the kernel and data buffered in the disk drive. Either way,
    there is always the possibility that data just written
    cannot be read back at a later date.
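
    As a minimal sketch of the point, in C (the file name and
    record contents are hypothetical): even write() followed by
    fsync() only guarantees that the kernel pushed the data to the
    drive; a drive with an internal write cache may acknowledge
    the transfer before the bits reach the platter.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char buf[] = "one database record\n";
        int fd = open("datafile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
            perror("open");
            exit(1);
        }
        if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf)) {
            perror("write");
            exit(1);
        }
        /*
         * fsync() forces the kernel's buffers out to the drive,
         * but a drive with an internal write cache may report
         * completion before the data is on the platter.
         */
        if (fsync(fd) < 0) {
            perror("fsync");
            exit(1);
        }
        close(fd);
        return 0;
    }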

    This is the reason for "Transaction Processing". Such
    processing is used by banking institutions and the like,
    where interest earned is tracked to the nearest mill at
    the nearest millisecond.

    There are books written about transaction processing. It is
    a very important tool for database management. It is a way
    of recovering all information should the hardware fail.

    Basically, it involves assembling all the information
    necessary to roll back a transaction should it fail.

    Then a specific time is chosen at which to "commit" the transaction.
    This information is written someplace (usually a file called
    a journal file) and the file is closed. The time at which
    the transaction is committed is not the time at which the
    file is written or closed. The transaction time is an
    element within that transaction record.

    No further transactions are allowed on any of the records in
    that file until the transaction is complete, although other
    transactions, involving other records, may be occurring.
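
    A hedged sketch of that per-record locking, using POSIX
    byte-range locks; the fixed record size and file layout here
    are assumptions for illustration, not anything from this
    thread.

    #include <fcntl.h>
    #include <unistd.h>

    #define RECSIZE 128   /* hypothetical fixed record size */

    /* Block until record 'recno' can be locked for writing.
     * Other records in the same file stay available. */
    static int lock_record(int fd, off_t recno)
    {
        struct flock fl = {0};

        fl.l_type   = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start  = recno * RECSIZE;
        fl.l_len    = RECSIZE;       /* lock only this record */
        return fcntl(fd, F_SETLKW, &fl);
    }

    static int unlock_record(int fd, off_t recno)
    {
        struct flock fl = {0};

        fl.l_type   = F_UNLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start  = recno * RECSIZE;
        fl.l_len    = RECSIZE;
        return fcntl(fd, F_SETLK, &fl);
    }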

    Eventually the actual results of the transaction are written
    to the database. The time at which this write occurred is
    then appended to the journal file. This time does not have
    to be accurate because it is used only for disaster
    recovery. The records contained within the journal file are
    then allowed to be used in other transactions.
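
    A hedged sketch of that journal-then-apply ordering in C; the
    record layout, file names, and sizes are all hypothetical,
    and a real system would add checksums and record framing.

    #include <fcntl.h>
    #include <time.h>
    #include <unistd.h>

    struct journal_rec {
        time_t commit_time;   /* the transaction's official time */
        time_t db_write_time; /* set later; disaster recovery only */
        char   undo[64];      /* enough state to roll the change back */
        char   redo[64];      /* the new contents of the record */
    };

    static int append_journal(const struct journal_rec *rec)
    {
        int fd = open("journal", O_WRONLY | O_CREAT | O_APPEND, 0644);

        if (fd < 0)
            return -1;
        if (write(fd, rec, sizeof(*rec)) != (ssize_t)sizeof(*rec) ||
            fsync(fd) < 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int commit_transaction(struct journal_rec *rec)
    {
        int dfd;

        /* The commit time is an element of the record itself, not
         * the time the journal file is written or closed. */
        rec->commit_time = time(NULL);
        rec->db_write_time = 0;

        /* Journal first: once this record is on stable storage,
         * the transaction can be redone after any later crash. */
        if (append_journal(rec) < 0)
            return -1;

        /* Only now write the actual result to the database. */
        dfd = open("database", O_WRONLY | O_CREAT, 0644);
        if (dfd < 0)
            return -1;
        if (write(dfd, rec->redo, sizeof(rec->redo)) !=
            (ssize_t)sizeof(rec->redo) || fsync(dfd) < 0) {
            close(dfd);
            return -1;
        }
        close(dfd);

        /* Append the database write time; it is used only for
         * disaster recovery, so it need not be precise. */
        rec->db_write_time = time(NULL);
        return append_journal(rec);
    }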

    Eventually, the disk(s) containing the database files are
    backed up and archived. As each record is backed up, the
    journal file regarding that record is deleted after it too
    is backed up.

    The result is that any transaction occurring at any time can
    be rolled back and redone in the correct order regardless of
    any system crashes. Further, the time that the transaction
    occurred will always be the time at which it was committed,
    regardless of any intervening hardware problems.

    Now, some "Johnny come lately" coders, not to be confused
    with Software Engineers, think that you can bypass all that
    by doing physical writes to hardware. They are not only
    uninformed but dumb, i.e., stupid.

    If you read some book that says you can guarantee "database
    concurrency" or some other US$50 word, by doing physical
    writes, throw it away. It is wrong. If you have a College
    Professor who insists that this is true, change classes. If
    you have a boss that insists that this is true, change jobs.

    Now you have solved the problem. You can now write a
    database program that works. The key to writing any database
    program is to presume that the data goes to paper tape. With
    this in mind, you'll write an efficient high-speed process
    in which you let the operating system worry about punching
    the holes.

    Now, it is possible to do a better job of implementing a
    "Database storage area" than a file-system. For this, you
    would use a "raw device". Such devices exist in all UNIX-type
    operating systems. Linux uses /dev/sda-z for SCSI devices.
    Sun uses /dev/rdsk/c0t0..... Again, worrying about how or when
    your records get written to the physical media will not be
    productive. You let the Operating System do it.
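
    A hedged sketch of writing one block straight to a device
    node, skipping the filesystem entirely; the device name
    /dev/sdb is a placeholder, and running this would overwrite
    real data, so it is illustration only.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char block[512];      /* one disk sector */
        long blockno = 0;     /* your layout, your choice */
        int fd;

        memset(block, 0, sizeof(block));
        strcpy(block, "record 0");

        fd = open("/dev/sdb", O_WRONLY | O_SYNC); /* whole-disk node */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* No filesystem here: you decide where every record lives. */
        if (lseek(fd, blockno * 512L, SEEK_SET) < 0 ||
            write(fd, block, sizeof(block)) != (ssize_t)sizeof(block)) {
            perror("write");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }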

    Dick Johnson
    Richard B. Johnson
    Project Engineer
    Analogic Corporation
    Voice : (508) 977-3000 ext. 3754
    Fax : (508) 532-6097
    Modem : (508) 977-6870
    Penguin : Linux version 2.1.35 on an i586 machine (66.15 BogoMips).
    Warning : I read unsolicited mail for $350.00 per hour. Supply billing address.
