    Date: 2005-12-04
    From: Wu Fengguang
    Subject: Re: [PATCH 01/16] mm: delayed page activation
    On Sun, Dec 04, 2005 at 03:11:28PM +0300, Nikita Danilov wrote:
    > Wu Fengguang writes:
    > > When a page is referenced the second time in inactive_list, mark it with
    > > PG_activate instead of moving it into active_list immediately. The actual
    > > moving work is delayed to vmscan time.
    > >
    > > This implies two essential changes:
    > > - keeps the adjacency of pages in the LRU;
    >
    > But this change destroys LRU ordering: at the time when shrink_list()
    > inspects PG_activate bit, information about order in which
    > mark_page_accessed() was called against pages is lost. E.g., suppose

    Thanks.
    But this ordering by re-access time may be pointless. In fact, the original
    mark_page_accessed() performs another inversion: an inversion of page lifetime.
    In the world of CLOCK-Pro, the page that is re-accessed first has the lower
    inter-reference distance, and therefore deserves better protection (ignoring
    possible read-ahead effects). If we move re-accessed pages into active_list
    immediately, we push them closer to the danger of eviction.
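
    For reference, on the mark_page_accessed() side the rule quoted at the top
    (mark with PG_activate on the second reference, move at vmscan time) would
    look roughly like the sketch below. This is not the exact hunk from the
    patch; SetPageActivate() is assumed as the setter for the new bit, mirroring
    the PageActivate()/ClearPageActivate() helpers used in shrink_list() below.

    	/* Sketch against 2.6.15-rc2-mm1 mm/swap.c; not the exact patch. */
    	void fastcall mark_page_accessed(struct page *page)
    	{
    		if (!PageActive(page) && PageReferenced(page) && PageLRU(page)) {
    			/*
    			 * Second reference while on the inactive list:
    			 * only tag the page here, the actual move to
    			 * active_list is delayed to shrink_list().
    			 */
    			SetPageActivate(page);	/* was: activate_page(page) */
    			ClearPageReferenced(page);
    		} else if (!PageReferenced(page)) {
    			SetPageReferenced(page);
    		}
    	}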

    Btw, the current vmscan code clears the PG_referenced flag when moving pages
    to active_list. I followed that convention in the patch:

    --- linux-2.6.15-rc2-mm1.orig/mm/vmscan.c
    +++ linux-2.6.15-rc2-mm1/mm/vmscan.c
    @@ -454,6 +454,12 @@ static int shrink_list(struct list_head
     		if (PageWriteback(page))
     			goto keep_locked;
     
    +		if (PageActivate(page)) {
    +			ClearPageActivate(page);
    +			ClearPageReferenced(page);
    +			goto activate_locked;
    +		}
    +
     		referenced = page_referenced(page, 1, sc->priority <= 0);
     		/* In active use or really unfreeable? Activate it. */
     		if (referenced && page_mapping_inuse(page))
    I have a strong feeling, though, that with the extra PG_activate bit the
    + ClearPageReferenced(page);
    line should be removed, i.e. the extra reference record should be allowed to
    live through the activation. The point is to smooth out the inter-reference
    distance. Imagine the following situation:

    - + - + + - - + -
    1   2     3 4   5
    +: reference time
    -: shrink_list time

    A page has an average inter-reference distance that is smaller than the
    inter-scan distance, but the individual distances vary a bit. Here we had
    better let the reference count accumulate, or the page will be evicted at the
    3rd shrink_list time. This does have the side effect of favoring non-mmapped
    files a bit more than before, and I am not quite sure about that.
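
    Concretely, dropping that ClearPageReferenced() would leave the new check in
    shrink_list() as just:

    		if (PageActivate(page)) {
    			ClearPageActivate(page);
    			/* PG_referenced is left alone: the reference
    			 * record lives on across the activation */
    			goto activate_locked;
    		}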

    > inactive list initially contained pages
    >
    > /* head */ (P1, P2, P3) /* tail */
    >
    > all of them referenced. Then mark_page_accessed() is called against P1,
    > P2, and P3 (in that order). With the old code active list would end up
    >
    > /* head */ (P3, P2, P1) /* tail */
    >
    > which corresponds to LRU. With delayed page activation, pages are moved
    > to head of the active list in the order they are analyzed by
    > shrink_list(), which gives
    >
    > /* head */ (P1, P2, P3) /* tail */
    >
    > on the active list, that is _inverse_ LRU order.

    Thanks,
    Wu
