    Subject: [PATCH V3 1/8] Cleancache: Documentation

    Add cleancache documentation to Documentation/vm and
    sysfs ABI documentation to Documentation/ABI

    Signed-off-by: Dan Magenheimer <>

    ABI/testing/sysfs-kernel-mm-cleancache | 11 +
    vm/cleancache.txt | 194 +++++++++++++++++++++
    2 files changed, 205 insertions(+)
    --- linux-2.6.35-rc2/Documentation/ABI/testing/sysfs-kernel-mm-cleancache 1969-12-31 17:00:00.000000000 -0700
    +++ linux-2.6.35-rc2-cleancache/Documentation/ABI/testing/sysfs-kernel-mm-cleancache 2010-06-11 09:10:25.000000000 -0600
    @@ -0,0 +1,11 @@
    +What: /sys/kernel/mm/cleancache/
    +Date: June 2010
    +Contact: Dan Magenheimer <>
    +Description:
    + /sys/kernel/mm/cleancache/ contains a number of files which
    + record a count of various cleancache operations
    + (sum across all filesystems):
    + succ_gets
    + failed_gets
    + puts
    + flushes
    --- linux-2.6.35-rc2/Documentation/vm/cleancache.txt 1969-12-31 17:00:00.000000000 -0700
    +++ linux-2.6.35-rc2-cleancache/Documentation/vm/cleancache.txt 2010-06-21 16:51:54.000000000 -0600
    @@ -0,0 +1,194 @@
    +Cleancache can be thought of as a page-granularity victim cache for clean
    +pages that the kernel's pageframe replacement algorithm (PFRA) would like
    +to keep around, but can't since there isn't enough memory. So when the
    +PFRA "evicts" a page, it first attempts to put it into a synchronous
    +concurrency-safe page-oriented "pseudo-RAM" device (such as Xen's Transcendent
    +Memory, aka "tmem", or in-kernel compressed memory, aka "zmem", or other
    +RAM-like devices) which is not directly accessible or addressable by the
    +kernel and is of unknown and possibly time-varying size. And when a
    +cleancache-enabled filesystem wishes to access a page in a file on disk,
    +it first checks cleancache to see if it already contains it; if it does,
    +the page is copied into the kernel and a disk access is avoided.
    +
    +An FAQ is included below:
    +
    +A cleancache "backend" that interfaces to this pseudo-RAM links itself
    +to the kernel's cleancache "frontend" by setting the cleancache_ops
    +functions appropriately; the functions it provides must conform to
    +certain semantics, described below.
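    +
    +For illustration, the ops table a backend fills in looks roughly
    +like this (a sketch only; the authoritative definition lives in the
    +cleancache frontend patch, and the argument types shown here are
    +assumptions inferred from the operations named in this document):
    +
    +    /* sketch: function table a backend sets to claim cleancache */
    +    struct cleancache_ops {
    +            int (*init_fs)(size_t pagesize);
    +            int (*init_shared_fs)(char *uuid, size_t pagesize);
    +            int (*get_page)(int pool_id, ino_t inode, pgoff_t index,
    +                            struct page *page);
    +            void (*put_page)(int pool_id, ino_t inode, pgoff_t index,
    +                             struct page *page);
    +            void (*flush_page)(int pool_id, ino_t inode, pgoff_t index);
    +            void (*flush_inode)(int pool_id, ino_t inode);
    +            void (*flush_fs)(int pool_id);
    +    };
    +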
    +Most important, cleancache is "ephemeral". Pages which are copied into
    +cleancache have an indefinite lifetime which is completely unknowable
    +by the kernel and so may or may not still be in cleancache at any later time.
    +Thus, as its name implies, cleancache is not suitable for dirty pages.
    +Cleancache has complete discretion over what pages to preserve and what
    +pages to discard and when.
    +
    +When a cleancache-enabled filesystem is mounted, it should call
    +"init_fs" to obtain a
    +pool id which, if positive, must be saved in the filesystem's superblock;
    +a negative return value indicates failure. A "put_page" will copy a
    +(presumably about-to-be-evicted) page into cleancache and associate it with
    +the pool id, the file inode, and a page index into the file. (The combination
    +of a pool id, an inode, and an index is sometimes called a "handle".)
    +A "get_page" will copy the page, if found, from cleancache into kernel memory.
    +A "flush_page" will ensure the page no longer is present in cleancache;
    +a "flush_inode" will flush all pages associated with the specified inode;
    +and, when a filesystem is unmounted, a "flush_fs" will flush all pages in
    +all inodes specified by the given pool id and also surrender the pool id.
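    +
    +As a sketch of the calling convention (illustrative, not code from
    +this patch; the superblock field name and the zero-on-success return
    +of get_page are assumptions):
    +
    +    /* at mount: obtain a pool id and save it in the superblock */
    +    sb->cleancache_poolid = (*cleancache_ops.init_fs)(PAGE_SIZE);
    +
    +    /* when the PFRA evicts a clean page: put it, keyed by handle */
    +    (*cleancache_ops.put_page)(sb->cleancache_poolid,
    +                               inode->i_ino, page->index, page);
    +
    +    /* before reading a page from disk: try cleancache first */
    +    if ((*cleancache_ops.get_page)(sb->cleancache_poolid,
    +                                   inode->i_ino, page->index,
    +                                   page) == 0)
    +            return 0;   /* filled from cleancache; disk read avoided */
    +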
    +An "init_shared_fs", like "init_fs", obtains a pool id but tells cleancache
    +to treat the pool as shared using a 128-bit UUID as a key. On systems
    +that may run multiple kernels (such as hard partitioned or virtualized
    +systems) that may share a clustered filesystem, and where cleancache
    +may be shared among those kernels, calls to init_shared_fs that specify the
    +same UUID will receive the same pool id, thus allowing the pages to
    +be shared. Note that any security requirements must be imposed outside
    +of the kernel (e.g. by "tools" that control cleancache). Or a
    +cleancache implementation can simply disable init_shared_fs by always
    +returning a negative value.
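    +
    +For a clustered filesystem the mount-time call differs only in
    +passing the 128-bit UUID (a sketch; the form of the uuid argument is
    +an assumption):
    +
    +    /* every kernel sharing the volume passes the same UUID and
    +     * therefore receives the same (shared) pool id */
    +    sb->cleancache_poolid =
    +            (*cleancache_ops.init_shared_fs)(uuid, PAGE_SIZE);
    +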
    +If a get_page is successful on a non-shared pool, the page is flushed (thus
    +making cleancache an "exclusive" cache). On a shared pool, the page
    +is NOT flushed on a successful get_page so that it remains accessible to
    +other sharers. The kernel is responsible for ensuring coherency between
    +cleancache (shared or not), the page cache, and the filesystem, using
    +cleancache flush operations as required.
    +
    +Note that cleancache must enforce put-put-get coherency and get-get
    +coherency. For the former, if two puts are made to the same handle but
    +with different data, say AAA by the first put and BBB by the second, a
    +subsequent get can never return the stale data (AAA). For get-get coherency,
    +if a get for a given handle fails, subsequent gets for that handle will
    +never succeed unless preceded by a successful put with that handle.
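    +
    +For example, one way a backend can honor put-put-get coherency is to
    +discard any old copy whenever a new put cannot be stored (a sketch;
    +the helper names are hypothetical):
    +
    +    static void mybackend_put_page(int pool, ino_t ino, pgoff_t index,
    +                                   struct page *page)
    +    {
    +            /* if the new data (BBB) cannot be stored, discard the
    +             * old copy (AAA) so a later get fails rather than
    +             * returning stale data */
    +            if (mybackend_store(pool, ino, index, page) != 0)
    +                    mybackend_discard(pool, ino, index);
    +    }
    +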
    +Last, cleancache provides no SMP serialization guarantees; if two
    +different Linux threads are simultaneously putting and flushing a page
    +with the same handle, the results are indeterminate.
    +
    +Cleancache monitoring is done by sysfs files in the
    +/sys/kernel/mm/cleancache directory. The effectiveness of cleancache
    +can be measured (across all filesystems) with:
    +succ_gets - number of gets that were successful
    +failed_gets - number of gets that failed
    +puts - number of puts attempted (all "succeed")
    +flushes - number of flushes attempted
    +A backend implementation may provide additional metrics.
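    +
    +The hit rate can then be computed from these counters; a throwaway
    +userspace sketch (minimal error handling):
    +
    +    #include <stdio.h>
    +
    +    static long read_counter(const char *name)
    +    {
    +            char path[128];
    +            long val = 0;
    +            FILE *f;
    +
    +            snprintf(path, sizeof(path),
    +                     "/sys/kernel/mm/cleancache/%s", name);
    +            f = fopen(path, "r");
    +            if (f) {
    +                    if (fscanf(f, "%ld", &val) != 1)
    +                            val = 0;
    +                    fclose(f);
    +            }
    +            return val;
    +    }
    +
    +    int main(void)
    +    {
    +            long succ = read_counter("succ_gets");
    +            long fail = read_counter("failed_gets");
    +
    +            if (succ + fail)
    +                    printf("hit rate: %.1f%%\n",
    +                           100.0 * succ / (succ + fail));
    +            return 0;
    +    }
    +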
    +1) Where's the value? (Andrew Morton)
    +
    +Cleancache (and its sister code "frontswap") provide interfaces for
    +a new pseudo-RAM memory type that conceptually lies between fast
    +kernel-directly-addressable RAM and slower DMA/asynchronous devices.
    +Disallowing direct kernel or userland reads/writes to this pseudo-RAM
    +is ideal when data is transformed to a different form and size (such
    +as with compression) or secretly moved (as might be useful for write-
    +balancing for some RAM-like devices). Evicted page-cache pages (and
    +swap pages) are a great use for this kind of slower-than-RAM-but-much-
    +faster-than-disk pseudo-RAM and the cleancache (and frontswap)
    +"page-object-oriented" specification provides a nice way to read and
    +write -- and indirectly "name" -- the pages.
    +
    +In the virtual case, the whole point of virtualization is to statistically
    +multiplex physical resources across the varying demands of multiple
    +virtual machines. This is really hard to do with RAM and efforts to
    +do it well with no kernel change have essentially failed (except in some
    +well-publicized special-case workloads). Cleancache -- and frontswap --
    +with a fairly small impact on the kernel, provide a huge amount
    +of flexibility for more dynamic, flexible RAM multiplexing.
    +
    +Specifically, the Xen Transcendent Memory backend allows otherwise
    +"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
    +virtual machines, but the pages can be compressed and deduplicated to
    +optimize RAM utilization. And when guest OS's are induced to surrender
    +underutilized RAM (e.g. with "self-ballooning"), page cache pages
    +are the first to go, and cleancache allows those pages to be
    +saved and reclaimed if overall host system memory conditions allow.
    +
    +2) Why does cleancache have its sticky fingers so deep inside the
    + filesystems and VFS? (Andrew Morton and Christoph Hellwig)
    +
    +The core hooks for cleancache in VFS are in most cases a single line
    +and the minimum set are placed precisely where needed to maintain
    +coherency (via cleancache_flush operations) between cleancache,
    +the page cache, and disk. All hooks compile into nothingness if
    +cleancache is config'ed off and turn into a function-pointer-
    +compare-to-NULL if config'ed on but no backend claims the ops
    +functions, or to a compare-struct-element-to-negative if a
    +backend claims the ops functions but a filesystem doesn't enable
    +cleancache.
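    +
    +The pattern is roughly the following (a simplified sketch, not the
    +actual frontend code; "cleancache_poolid" is this series' superblock
    +field, the rest is illustrative):
    +
    +    #ifdef CONFIG_CLEANCACHE
    +    static inline void cleancache_put_page(struct page *page)
    +    {
    +            /* function-pointer-compare-to-NULL: no backend has
    +             * claimed the ops functions */
    +            if (cleancache_ops.put_page == NULL)
    +                    return;
    +            /* compare-struct-element-to-negative: the filesystem
    +             * did not opt in */
    +            if (page->mapping->host->i_sb->cleancache_poolid < 0)
    +                    return;
    +            __cleancache_put_page(page);    /* does the real work */
    +    }
    +    #else
    +    /* config'ed off: the hook compiles into nothingness */
    +    static inline void cleancache_put_page(struct page *page) { }
    +    #endif
    +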
    +Some filesystems are built entirely on top of VFS and the hooks
    +in VFS are sufficient, so don't require an "init_fs" hook; the
    +initial implementation of cleancache didn't provide this hook.
    +But for some filesystems (such as btrfs), the VFS hooks are
    +incomplete and one or more hooks in fs-specific code are required.
    +And for some other filesystems, such as tmpfs, cleancache may
    +be counterproductive. So it seemed prudent to require a filesystem
    +to "opt in" to use cleancache, which requires adding a hook in
    +each filesystem. Some filesystems are not yet supported by cleancache
    +simply because they haven't been tested. The existing set should
    +be sufficient to validate the concept, the opt-in approach means
    +that untested filesystems are not affected, and the hooks in the
    +existing filesystems should make it very easy to add more
    +filesystems in the future.
    +
    +3) Why not make cleancache asynchronous and batched so it can
    + more easily interface with real devices with DMA instead
    + of copying each individual page? (Minchan Kim)
    +
    +The one-page-at-a-time copy semantics simplifies the implementation
    +on both the frontend and backend and also allows the backend to
    +do fancy things on-the-fly like page compression and
    +page deduplication. And since the data is "gone" (copied into/out
    +of the pageframe) before the cleancache get/put call returns,
    +a great deal of race conditions and potential coherency issues
    +are avoided. While the interface seems odd for a "real device"
    +or for real kernel-addressable RAM, it makes perfect sense for
    +pseudo-RAM.
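    +
    +i.e. the copy completes before the call returns, so a backend get
    +might look roughly like this (a sketch; the helper name is
    +hypothetical, KM_USER0 per this kernel's kmap_atomic interface):
    +
    +    static int mybackend_get_page(int pool, ino_t ino, pgoff_t index,
    +                                  struct page *page)
    +    {
    +            void *dst = kmap_atomic(page, KM_USER0);
    +            int ret;
    +
    +            /* decompress/copy into the pageframe synchronously:
    +             * no DMA, no completion callback, no race window */
    +            ret = mybackend_load(pool, ino, index, dst);
    +            kunmap_atomic(dst, KM_USER0);
    +            return ret;     /* assumed: 0 on success */
    +    }
    +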
    +4) Why is non-shared cleancache "exclusive"? And where is the
    + page "flushed" after a "get"? (Minchan Kim)
    +
    +The main reason is to free up memory in pseudo-RAM and to avoid
    +unnecessary cleancache_flush calls. If you want inclusive,
    +the page can be "put" immediately following the "get". If
    +put-after-get for inclusive becomes common, the interface could
    +be easily extended to add a "get_no_flush" call.
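    +
    +i.e. a caller wanting inclusive behavior on a non-shared pool can
    +emulate it today (sketch):
    +
    +    /* re-put the page right after a successful get (which flushed
    +     * the cleancache copy) */
    +    if ((*cleancache_ops.get_page)(pool, ino, index, page) == 0)
    +            (*cleancache_ops.put_page)(pool, ino, index, page);
    +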
    +The flush is done by the cleancache backend implementation.
    +
    +5) What's the performance impact?
    +
    +Performance analysis has been presented at OLS'09 and LCA'10.
    +Briefly, performance gains can be significant on most workloads,
    +especially when memory pressure is high (e.g. when RAM is
    +overcommitted in a virtual workload); and because the hooks are
    +invoked primarily in place of or in addition to a disk read/write,
    +overhead is negligible even in worst-case workloads. Basically,
    +cleancache replaces I/O with memory-copy-CPU-overhead; on older
    +single-core systems with slow memory-copy speeds, cleancache
    +has little value, but in newer multicore machines, especially
    +consolidated/virtualized machines, it has great value.
    +
    +6) Does cleancache work with KVM?
    +
    +The memory model of KVM is sufficiently different that a cleancache
    +backend may have little value for KVM. This remains to be tested,
    +especially in an overcommitted system.
    +
    +7) Does cleancache work in userspace? It sounds useful for
    + memory hungry caches like web browsers. (Jamie Lokier)
    +
    +No plans yet, though we agree it sounds useful, at least for
    +apps that bypass the page cache (e.g. O_DIRECT).
    +
    +Last updated: Dan Magenheimer, June 21 2010
