Date: 20 Jul 1998
Subject: wild imbalance in kswapd state
I'm trying to figure out how to restore performance for repeated
page-cache-intensive operations, so I instrumented the state
transitions in do_try_to_free_page.
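The instrumentation is nothing fancy -- just a per-state success
counter and a printk when do_try_to_free_page falls through to the
next state. A rough sketch, not the exact code I ran:

	/* Count successful reclaims in the current state, and report
	 * the total whenever we give up and move on to the next state.
	 */
	static int state = 0, success = 0;
	...
	case 0:
		if (shrink_mmap(i, gfp_mask)) {
			success++;
			return 1;
		}
		printk("kswapd: leaving state %d, success=%d\n",
		       state, success);
		success = 0;
		state = 1;
	/* ... and likewise for states 1 and 2. */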

The results were pretty interesting -- once swapping starts, kswapd
stays parked in the shrink_mmap state (state 0) until almost all of
the page cache is gone. Some example results:

    Jul 20 10:27:30 acer kernel: kswapd: leaving state 0, success=4865
    Jul 20 10:27:30 acer kernel: kswapd: leaving state 1, success=0
    Jul 20 10:27:30 acer kernel: kswapd: leaving state 2, success=0
    Jul 20 10:27:30 acer kernel: kswapd: leaving state 0, success=59
    Jul 20 10:27:30 acer kernel: kswapd: leaving state 1, success=0
    Jul 20 10:27:30 acer kernel: kswapd: leaving state 2, success=6 <- swap
    Jul 20 10:27:31 acer kernel: kswapd: leaving state 0, success=84
    Jul 20 10:27:31 acer kernel: kswapd: leaving state 1, success=0
    Jul 20 10:27:31 acer kernel: kswapd: leaving state 2, success=2 <- swap
    Jul 20 10:27:31 acer kernel: kswapd: leaving state 0, success=3
    Jul 20 10:27:31 acer kernel: kswapd: leaving state 1, success=0
    Jul 20 10:27:31 acer kernel: kswapd: leaving state 2, success=0

In this case there isn't a lot of memory to be reclaimed elsewhere,
but there is some, and it doesn't seem very helpful to strip all of
the page cache before trying the other states. I'd rather see at
least a periodic attempt to swap out.

I'm currently experimenting with a patch that sets a maximum number
of successes (e.g. 100) in each state and then forces a transition
to the next. Has anyone experimented with something like this, and
are there pros or cons to mention?

I've appended a copy of the current patch below.

    Regards,
Bill

--- linux-2.1.109/mm/vmscan.c.old	Fri Jul 17 09:10:45 1998
+++ linux-2.1.109/mm/vmscan.c	Mon Jul 20 15:38:42 1998
@@ -446,7 +446,7 @@
  */
 static int do_try_to_free_page(int gfp_mask)
 {
-	static int state = 0;
+	static int state = 0, success = 0;
 	int i=6;
 	int stop;
 
@@ -457,24 +457,38 @@
 	stop = 3;
 	if (gfp_mask & __GFP_WAIT)
 		stop = 0;
-
-	if (((buffermem >> PAGE_SHIFT) * 100 > buffer_mem.borrow_percent * num_physpages)
-	    || (page_cache_size * 100 > page_cache.borrow_percent * num_physpages))
+	/*
+	 * If we're not in the shrink_mmap() state, check
+	 * whether to borrow page or buffer cache.
+	 */
+	if (state != 0 &&
+	    (((buffermem >> PAGE_SHIFT) * 100 > buffer_mem.borrow_percent * num_physpages)
+	    || (page_cache_size * 100 > page_cache.borrow_percent * num_physpages)))
 		shrink_mmap(i, gfp_mask);
 
 	switch (state) {
 		do {
 		case 0:
-			if (shrink_mmap(i, gfp_mask))
+			if (success < 100 && shrink_mmap(i, gfp_mask)) {
+				success++;
 				return 1;
+			}
+			success = 0;
 			state = 1;
 		case 1:
-			if ((gfp_mask & __GFP_IO) && shm_swap(i, gfp_mask))
+			if (success < 100 && (gfp_mask & __GFP_IO) &&
+			    shm_swap(i, gfp_mask)) {
+				success++;
 				return 1;
+			}
+			success = 0;
 			state = 2;
 		case 2:
-			if (swap_out(i, gfp_mask))
+			if (success < 100 && swap_out(i, gfp_mask)) {
+				success++;
 				return 1;
+			}
+			success = 0;
 			state = 3;
 		case 3:
 			shrink_dcache_memory(i, gfp_mask);