From: Pankaj Suryawanshi <>
Date: Sat, 6 Apr 2019 11:29:27 +0530
Subject: vmscan.c: Reclaim unevictable pages.
Hello,
shrink_page_list() returns the number of pages reclaimed. When a page is unevictable, it instead trips VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);

We can add the unevictable pages to the return list in shrink_page_list(), return the total number of reclaimed pages, and let the caller handle the unevictable pages.

I think the problem is that shrink_page_list() is awkward. If a page is unevictable, it goes activate_locked -> keep_locked -> keep; the keep label list_add()s the unevictable page and then throws the VM_BUG_ON instead of passing it to the caller, even though it relies on the caller for putback of non-reclaimed, non-unevictable pages. I think we can make this consistent, so that shrink_page_list() returns non-reclaimed pages via page_list and the caller can handle them. As a further step, it could try to migrate mlocked pages without retrying.
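For reference, the tail of the per-page loop in 4.14's shrink_page_list() looks roughly like this (paraphrased from mm/vmscan.c, so details may differ slightly); the VM_BUG_ON_PAGE under the keep label is the one at mm/vmscan.c:1350 in the oops below:

activate_locked:
		/* Not a candidate for swapping, so reclaim swap space. */
		if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
						PageMlocked(page)))
			try_to_free_swap(page);
		VM_BUG_ON_PAGE(PageActive(page), page);
		if (!PageMlocked(page)) {
			SetPageActive(page);
			pgactivate++;
		}
keep_locked:
		unlock_page(page);
keep:
		list_add(&page->lru, &ret_pages);
		/* An unevictable page sent to activate_locked falls
		 * through to here and trips this check: */
		VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);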
Below is the issue I observed with cma_alloc() of a large buffer (kernel version 4.14.65, with Android Pie):
[ 24.718792] page dumped because: VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page))
[ 24.726949] page->mem_cgroup:bd008c00
[ 24.730693] ------------[ cut here ]------------
[ 24.735304] kernel BUG at mm/vmscan.c:1350!
[ 24.739478] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
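For context, the allocation path that hits this is cma_alloc() -> alloc_contig_range() -> __alloc_contig_migrate_range() -> reclaim_clean_pages_from_list(). In 4.14 that last helper looks roughly like this (paraphrased, so details may differ slightly):

unsigned long reclaim_clean_pages_from_list(struct zone *zone,
					    struct list_head *page_list)
{
	struct scan_control sc = {
		.gfp_mask = GFP_KERNEL,
		.priority = DEF_PRIORITY,
		.may_unmap = 1,
	};
	unsigned long ret;
	struct page *page, *next;
	LIST_HEAD(clean_pages);

	/* Pull the clean file pages off the isolated list ... */
	list_for_each_entry_safe(page, next, page_list, lru) {
		if (page_is_file_cache(page) && !PageDirty(page) &&
		    !__PageMovable(page)) {
			ClearPageActive(page);
			list_move(&page->lru, &clean_pages);
		}
	}

	/* ... try to reclaim them ... */
	ret = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
			       TTU_IGNORE_ACCESS, NULL, true);
	/* ... and splice whatever was not reclaimed back for the
	 * caller, which will try to migrate it. */
	list_splice(&clean_pages, page_list);
	mod_node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE, -ret);
	return ret;
}

So shrink_page_list() already has a contract of handing non-reclaimed pages back on the list; the unevictable case is the only one that BUGs instead.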
Below is the patch which solved this issue:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index be56e2e..12ac353 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -998,7 +998,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		sc->nr_scanned++;
 
 		if (unlikely(!page_evictable(page)))
-			goto activate_locked;
+			goto cull_mlocked;
 
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
@@ -1331,7 +1331,12 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		} else
 			list_add(&page->lru, &free_pages);
 		continue;
-
+cull_mlocked:
+		if (PageSwapCache(page))
+			try_to_free_swap(page);
+		unlock_page(page);
+		list_add(&page->lru, &ret_pages);
+		continue;
 activate_locked:
 		/* Not a candidate for swapping, so reclaim swap space. */
 		if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
It fixes the below issue:

1. Large buffer allocation using cma_alloc() succeeds even when the range contains unevictable pages; cma_alloc() in the current kernel fails due to unevictable pages (see the sketch after this list).
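To illustrate why the caller can cope with getting unevictable pages back: in 4.14, __alloc_contig_migrate_range() in mm/page_alloc.c loops roughly as below (heavily trimmed paraphrase, so names such as MR_CMA and the exact error handling may differ in other kernel versions). Everything left on cc->migratepages after reclaim is handed to migrate_pages(), which can move mlocked pages:

/* Heavily trimmed paraphrase of 4.14 mm/page_alloc.c. */
static int __alloc_contig_migrate_range(struct compact_control *cc,
					unsigned long start, unsigned long end)
{
	/* ... isolate pages in [start, end) onto cc->migratepages ... */
	while (pfn < end || !list_empty(&cc->migratepages)) {
		/* ... */
		nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
							&cc->migratepages);
		cc->nr_migratepages -= nr_reclaimed;

		/* Pages that were not reclaimed, including the
		 * unevictable ones the patch now returns, are
		 * migrated out of the range here. */
		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
				    NULL, 0, cc->mode, MR_CMA);
	}
	/* ... */
}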
Please let me know if there is anything I am missing.
Regards,
Pankaj