 
    Date: 2008-05-13
    From: Evgeniy Polyakov
    Subject: Re: POHMELFS high performance network filesystem. Transactions, failover, performance.
    Hi.

    On Tue, May 13, 2008 at 03:09:06PM -0400, Jeff Garzik (jeff@garzik.org) wrote:
    > This continues to be a neat and interesting project :)

    Thanks :)

    > Where is the best place to look at client<->server protocol?

    Hmm, in the sources for now, I think; I need to kick myself to write a
    reasonably good spec for the next release.

    Basically the protocol consists of a fixed-size header (struct netfs_cmd)
    and attached data, whose size is embedded in that header. Simple commands
    end there (essentially everything except the write/create commands); you
    can check them in the appropriate address space/inode operations.
    Transactions follow the netlink protocol (which is very ugly but
    exceptionally extensible): there is a main header (the structure above)
    which holds the size of the embedded data, and that data can in turn be
    dereferenced as header/data parts, where each inner header corresponds to
    any command (except the transaction header itself). So one can pack the
    requested number of commands (up to 90 pages of data or different
    commands on x86, which is the limit of the page size devoted to headers)
    into a single 'frame' and submit it to the system, which takes care of
    the atomicity of that request, i.e. it is either fully processed by one
    of the servers or dropped.
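
    To make the framing concrete, here is a minimal user-space sketch of
    that netlink-style nesting. The field names, sizes and command numbers
    are my own assumptions for illustration, not the exact struct netfs_cmd
    layout from the POHMELFS sources:

    /*
     * Sketch of the framing described above: a fixed-size header followed
     * by attached data, and a transaction that wraps several such
     * header/data pairs behind one outer header.  All names and command
     * numbers here are illustrative assumptions.
     */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    struct netfs_cmd_sketch {
            uint16_t  cmd;     /* command number */
            uint16_t  ext;     /* extension flags */
            uint32_t  size;    /* size of the attached data */
            uint64_t  id;      /* object the command operates on */
            uint8_t   data[];  /* payload follows the header */
    } __attribute__((packed));

    /* Append one inner command (header + payload); return bytes consumed. */
    static size_t pack_cmd(uint8_t *buf, uint16_t cmd, uint64_t id,
                           const void *payload, uint32_t payload_size)
    {
            struct netfs_cmd_sketch *h = (struct netfs_cmd_sketch *)buf;

            h->cmd  = cmd;
            h->ext  = 0;
            h->id   = id;
            h->size = payload_size;
            memcpy(h->data, payload, payload_size);

            return sizeof(*h) + payload_size;
    }

    /*
     * Build a transaction: an outer header whose 'size' covers all inner
     * header/data pairs.  The resulting frame is submitted as one unit and
     * is either fully processed by one of the servers or dropped.
     */
    static size_t pack_transaction(uint8_t *frame)
    {
            struct netfs_cmd_sketch *trans = (struct netfs_cmd_sketch *)frame;
            size_t off = sizeof(*trans);

            /* Hypothetical command numbers: 1 = write page, 2 = create. */
            off += pack_cmd(frame + off, 1, 100, "page contents", 13);
            off += pack_cmd(frame + off, 2, 101, "new-file-name", 13);

            trans->cmd  = 0;                      /* transaction command */
            trans->ext  = 0;
            trans->id   = 0;
            trans->size = off - sizeof(*trans);   /* embedded data size  */

            return off;
    }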

    > Are you planning to support the case where the server filesystem dataset
    > does not fit entirely on one server?

    Sure. First by allowing whole objects to be placed on different servers
    (i.e. one subdir is on server1 and another on server2); probably in the
    future support will be added for the same object being distributed
    across different servers (i.e. half of a big file on server1 and the
    other half on server2).

    > What is your opinion of the Paxos algorithm?

    It is slow. But it does solve failure cases.
    So far POHMELFS does not work as a distributed filesystem, so it does
    not need to care about that at all; at most, in the very near future it
    will just have a number of acceptors (in Paxos terminology; metadata
    servers in other terms) without the need for active dynamic
    reconfiguration, so the protocol will be greatly reduced. With the
    addition of dynamic metadata cluster extension, the protocol will have
    to be extended.

    As practice shows, the smaller and simpler the initial steps are, the
    better the results eventually become :)

    --
    Evgeniy Polyakov

