From: Ira Weiny <ira.weiny@intel.com>
Subject: Re: [PATCHv3] mm/gup: speed up check_and_migrate_cma_pages() on huge page
On Tue, Jun 25, 2019 at 10:13:19PM +0800, Pingfan Liu wrote:
> Both hugetlb and THP pages lie in pageblocks of a single migration type,
> since each compound page is allocated from one free_list[]. Based on this
> fact, it is enough to check a single subpage to decide the migration type
> of the whole huge page. This saves 2M/4K - 1 = 511 loop iterations per
> pmd-level huge page on x86, with similar savings on other archs.
>
> Furthermore, when executing isolate_huge_page(), it avoids taking the
> global hugetlb_lock many times, and avoids pointlessly removing the huge
> page from and re-adding it to the local list cma_page_list.
>
> Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Ira Weiny <ira.weiny@intel.com>
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Keith Busch <keith.busch@intel.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Linux-kernel@vger.kernel.org
> ---
> v2 -> v3: fix page order to size conversion
>
> mm/gup.c | 19 ++++++++++++-------
> 1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index ddde097..03cc1f4 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1342,19 +1342,22 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> LIST_HEAD(cma_page_list);
>
> check_again:
> - for (i = 0; i < nr_pages; i++) {
> + for (i = 0; i < nr_pages;) {
> +
> + struct page *head = compound_head(pages[i]);
> + long step = 1;
> +
> + if (PageCompound(head))
> + step = 1 << compound_order(head) - (pages[i] - head);

Check your precedence here: in C, binary '-' binds tighter than '<<', so
the line above computes 1 << (compound_order(head) - (pages[i] - head)).
You want:

step = (1 << compound_order(head)) - (pages[i] - head);
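
To make the difference concrete, here is a minimal standalone sketch (not
from the patch; the order and offset values are invented for illustration)
comparing the two parses for a 2MB THP:

#include <stdio.h>

int main(void)
{
        unsigned int order = 9; /* compound_order(head) of a 2MB page on x86 */
        long offset = 3;        /* pages[i] - head: 3 subpages into the THP */

        /* As written in the patch: '-' binds tighter than '<<' */
        long buggy = 1 << order - offset;   /* 1 << (9 - 3) == 64 */

        /* As intended */
        long fixed = (1 << order) - offset; /* 512 - 3 == 509 */

        printf("buggy=%ld fixed=%ld\n", buggy, fixed);
        return 0;
}

Note the buggy parse also turns into a negative shift count (undefined
behavior) as soon as pages[i] - head exceeds compound_order(head).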

Ira

> /*
> * If we get a page from the CMA zone, since we are going to
> * be pinning these entries, we might as well move them out
> * of the CMA zone if possible.
> */
> - if (is_migrate_cma_page(pages[i])) {
> -
> - struct page *head = compound_head(pages[i]);
> -
> - if (PageHuge(head)) {
> + if (is_migrate_cma_page(head)) {
> + if (PageHuge(head))
> isolate_huge_page(head, &cma_page_list);
> - } else {
> + else {
> if (!PageLRU(head) && drain_allow) {
> lru_add_drain_all();
> drain_allow = false;
> @@ -1369,6 +1372,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
> }
> }
> }
> +
> + i += step;
> }
>
> if (!list_empty(&cma_page_list)) {
> --
> 2.7.5
>
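
A simplified user-space model of the new stride logic (not the kernel
code; it assumes the pinned range is exactly one order-9 compound page,
and fixed arithmetic stands in for compound_head()/compound_order())
shows where the 2M/4K - 1 = 511 saving in the commit message comes from:

#include <stdio.h>

int main(void)
{
        const long nr_pages = 512;  /* one 2MB THP = 512 4K subpages */
        long i, iterations;

        /* Old loop: one iteration per 4K subpage */
        iterations = 0;
        for (i = 0; i < nr_pages; i++)
                iterations++;
        printf("before: %ld iterations\n", iterations);

        /* New loop: one stride over the whole compound page */
        iterations = 0;
        for (i = 0; i < nr_pages;) {
                long offset = i % 512;          /* pages[i] - head */
                long step = (1 << 9) - offset;  /* (1 << compound_order(head)) - offset */
                i += step;
                iterations++;
        }
        printf("after:  %ld iterations\n", iterations);
        return 0;
}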
