Subject: [ANNOUNCE] Ceph distributed file system v0.2

Hello everyone,

Ceph is a distributed file system designed for performance, reliability,
and scalability. Basic features include:

* POSIX semantics
* Seamless scaling from 1 to many thousands of nodes
* No single point of failure
* N-way replication of data across storage nodes
* Fast recovery from node failures
* Automatic rebalancing of data on node addition/removal
* Easy deployment: most FS components are userspace daemons
* Linux kernel client, FUSE-based client, and user library
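
For a taste of the last item, the kernel client mounts like any other
filesystem. A minimal C sketch using mount(2) follows; the "ceph"
filesystem type and the "monitor-address:/path" device string are my
assumptions about this release, so check the docs at ceph.newdream.net
for the real syntax:

/*
 * Minimal sketch (not from the Ceph tree): mounting the kernel client
 * via mount(2). The "ceph" fstype and the monitor-address source
 * string are assumptions and may differ in v0.2.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* roughly: mount -t ceph 192.168.0.1:/ /mnt/ceph */
	if (mount("192.168.0.1:/", "/mnt/ceph", "ceph", 0, "") != 0) {
		perror("mount");
		return 1;
	}
	printf("ceph mounted at /mnt/ceph\n");
	return 0;
}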

Notable in this release is a reasonably stable Linux kernel client. It
can extract and compile a kernel, and passes all tests in the fstest POSIX
regression test suite posted a few weeks back. It has stabilized to the
point where it could use some broader testing and code review.
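
For a flavor of what such a suite exercises, here is an illustrative
check (not taken from fstest itself) of the kind of POSIX behavior the
client has to get right, run inside a mounted ceph directory:

/*
 * Illustrative POSIX-semantics check in the spirit of fstest;
 * this is not code from that suite.
 */
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	int fd = open("foo", O_CREAT | O_WRONLY, 0644);

	assert(fd >= 0);
	assert(write(fd, "hello", 5) == 5);
	assert(close(fd) == 0);

	assert(rename("foo", "bar") == 0);
	assert(stat("foo", &st) == -1);	/* old name is gone */
	assert(stat("bar", &st) == 0);	/* new name exists... */
	assert(st.st_size == 5);	/* ...with the right size */

	assert(unlink("bar") == 0);
	printf("rename/stat semantics ok\n");
	return 0;
}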

More info at

http://ceph.newdream.net

Source code at

git://ceph.newdream.net/ceph.git
http://ceph.newdream.net/git

Since v0.1:

* fully functional (and reasonably stable) kernel client
* NFS re-export of a ceph client mount
* client metadata leases to keep client cache coherent
* crushtool for managing storage cluster topology
* improved support for storage cluster expansion
* some new tools for mkfs and management
* lots and lots of bug fixes...

Planned for v0.3:

* xattrs (see the interface sketch after this list)
* hardening distributed failure recovery
* large directory support (in client)
* recursive mtime and file size accounting
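
On the xattrs item above: supporting them means wiring the standard
Linux xattr calls through the client. A minimal sketch of that
interface (the file path and attribute name here are made up):

/*
 * Sketch of the standard xattr interface the client would expose;
 * the path and attribute name are illustrative only.
 */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(void)
{
	const char *val = "42";
	char buf[64];
	ssize_t len;

	if (setxattr("/mnt/ceph/bar", "user.example", val,
		     strlen(val), 0) != 0) {
		perror("setxattr");
		return 1;
	}
	len = getxattr("/mnt/ceph/bar", "user.example", buf,
		       sizeof(buf) - 1);
	if (len < 0) {
		perror("getxattr");
		return 1;
	}
	buf[len] = '\0';
	printf("user.example = %s\n", buf);
	return 0;
}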

Some key things on the (medium- to long-term) roadmap:

* locks (see the fcntl sketch after this list)
* quotas
* btrfs for local object storage on storage nodes
* directory-granularity snapshots
* content-addressable storage
* strong security
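
On the locks item above: assuming that means POSIX advisory locking
(byte-range locks, which a distributed client has to grant
cluster-wide rather than per-kernel), the interface to support looks
like this:

/*
 * Sketch of POSIX advisory locking via fcntl(2). Assumes the roadmap
 * item means byte-range locks; in a distributed FS the grant must be
 * coordinated across all clients, not just within one kernel.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct flock fl = {
		.l_type   = F_WRLCK,	/* exclusive write lock */
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 0,		/* 0 = whole file */
	};
	int fd = open("/mnt/ceph/bar", O_RDWR | O_CREAT, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (fcntl(fd, F_SETLKW, &fl) != 0) {	/* block until granted */
		perror("fcntl");
		return 1;
	}
	/* ... exclusive access to the file across the cluster ... */
	fl.l_type = F_UNLCK;
	fcntl(fd, F_SETLK, &fl);
	close(fd);
	return 0;
}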

sage

