Subject: Re: [PATCH]: 1/4 batch mark_page_accessed()

Marcelo Tosatti <> wrote:
> Because the ordering of LRU pages should be enhanced with respect to locality:
> with the mark_page_accessed() batching you group together a task's accessed
> pages and move them to the active list at once.
> You maintain better locality ordering, while decreasing the precision of
> aging / temporal locality.
> That should enhance disk writeout performance.

I'll buy that explanation, although I'm a bit sceptical that it is the whole story.

Was that particular workload actually performing significant amounts of
writeout in vmscan.c? (We should have separate direct-reclaim and kswapd
counters for that, but we don't; /proc/vmstat:pgrotated will give us an idea.)
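
For reference, pgrotated is exported in /proc/vmstat, so it can be sampled
before and after a run; a throwaway user-space reader (plain C, nothing to do
with the patch itself) could look like this:

/*
 * Throwaway sampler for /proc/vmstat:pgrotated (pages rotated to the tail
 * of the inactive list when writeback completes on them).  Read it before
 * and after a benchmark run; the difference gives a rough idea of how much
 * writeout vmscan was driving.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, "pgrotated", strlen("pgrotated")) == 0)
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

Comparing the two samples gives a rough count of pages rotated back to the
inactive tail at writeback completion, i.e. an indication of vmscan-driven
writeout.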

> On the other hand, without batching you mix the locality up in the LRU - the
> LRU becomes more precise in terms of "LRU aging", but less ordered in terms
> of sequential access pattern.
> The disk-IO-intensive reaim workload gains very significantly from the
> batching, probably due to the enhanced LRU ordering (as Nikita says).
> The slowdown is probably due to the additional atomic_inc done by page_cache_get().
> Is there no way to avoid that page_cache_get() there (and in lru_cache_add() also)?

Not really. The page may be present only in the pagevec at that time - without
the extra reference, a put_page() on it would free the page for real, and the
freed page would then be spilled onto the LRU. Messy.
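
To make the mechanics concrete, here is a rough sketch of the kind of per-cpu
pagevec batching under discussion - not the actual patch; the names
activate_pvecs and __pagevec_activate are invented for illustration. It shows
where the page_cache_get() Marcelo asks about sits and why it is hard to drop:
the pagevec's reference is what pins the page between queueing and the drain.

/*
 * Rough sketch only -- NOT the actual patch.  activate_pvecs and
 * __pagevec_activate are made-up names.  A full pagevec is drained onto
 * the active list under a single zone->lru_lock acquisition instead of
 * taking the lock once per page.
 */
#include <linux/mm.h>
#include <linux/mm_inline.h>
#include <linux/swap.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(struct pagevec, activate_pvecs) = { 0, };

static void __pagevec_activate(struct pagevec *pvec)
{
	struct zone *zone = NULL;
	int i;

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];
		struct zone *pagezone = page_zone(page);

		if (pagezone != zone) {
			if (zone)
				spin_unlock_irq(&zone->lru_lock);
			zone = pagezone;
			spin_lock_irq(&zone->lru_lock);
		}
		if (PageLRU(page) && !PageActive(page)) {
			del_page_from_inactive_list(zone, page);
			SetPageActive(page);
			add_page_to_active_list(zone, page);
		}
	}
	if (zone)
		spin_unlock_irq(&zone->lru_lock);
	pagevec_release(pvec);		/* drops the references taken below */
}

void fastcall mark_page_accessed(struct page *page)
{
	if (!PageActive(page) && PageReferenced(page) && PageLRU(page)) {
		struct pagevec *pvec = &get_cpu_var(activate_pvecs);

		/*
		 * This is the page_cache_get() in question: the page can sit
		 * in the per-cpu pagevec for a while, and this reference is
		 * what stops a concurrent put_page() from freeing it before
		 * the drain runs.
		 */
		page_cache_get(page);
		if (!pagevec_add(pvec, page))
			__pagevec_activate(pvec);
		put_cpu_var(activate_pvecs);
		ClearPageReferenced(page);
	} else if (!PageReferenced(page)) {
		SetPageReferenced(page);
	}
}

The win is that zone->lru_lock is taken roughly once per pagevec (up to
PAGEVEC_SIZE pages) instead of once per page; the cost is the extra atomic
reference-count manipulation per page, which is presumably the atomic_inc
overhead mentioned above.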
