From: Waiman Long <>
Subject: [PATCH v2 1/2] mm, thp: move invariant bug check out of loop in __split_huge_page_map
Date: Tue, 17 Jun 2014 18:37:58 -0400
In the __split_huge_page_map() function, the check for page_mapcount(page) is invariant within the for loop. Because page_mapcount() is implemented with atomic_read(), the compiler cannot optimize away the redundant check, which causes an unnecessary read of the page structure on every loop iteration.
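To make the hoisting argument concrete, below is a minimal userspace sketch (illustrative only, not the kernel's actual definitions) of why a check built on a volatile load, such as atomic_read(), has to be re-executed on every loop iteration, while the same check placed before the loop is performed exactly once:

/*
 * Simplified stand-ins: in the kernel, page_mapcount() is roughly
 * atomic_read(&page->_mapcount) + 1, and atomic_read() is a volatile
 * access, so each call forces a fresh load from the struct page.
 */
#include <assert.h>

struct page {
	volatile int _mapcount;		/* stand-in for atomic_t + atomic_read() */
};

static int page_mapcount(struct page *page)
{
	return page->_mapcount + 1;	/* volatile load: cannot be cached or hoisted */
}

#define HPAGE_PMD_NR 512		/* 2MB huge page / 4KB base pages on x86-64 */

static void split_check_in_loop(struct page *page, int writable)
{
	for (int i = 0; i < HPAGE_PMD_NR; i++) {
		if (writable)
			assert(page_mapcount(page) == 1);	/* 512 volatile loads */
		/* ... set up one PTE here ... */
	}
}

static void split_check_before_loop(struct page *page, int writable)
{
	if (writable)
		assert(page_mapcount(page) == 1);		/* one volatile load */
	for (int i = 0; i < HPAGE_PMD_NR; i++) {
		/* ... set up one PTE here ... */
	}
}

int main(void)
{
	struct page p = { ._mapcount = 0 };	/* a mapcount of 1 in kernel terms */

	split_check_in_loop(&p, 1);
	split_check_before_loop(&p, 1);
	return 0;
}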
This patch moves the invariant bug check out of the loop so that it is done only once. On a 3.16-rc1 based kernel, a microbenchmark that broke up 1000 transparent huge pages using munmap() ran in 38,245us with the patch and 38,548us without it. The performance gain is about 1%.
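For reference, a rough userspace sketch of such a microbenchmark (illustrative only, not the benchmark that produced the numbers above) could fault in THP-backed 2MB regions and then munmap() one base page inside each, which forces the kernel to split every huge page:

/*
 * Rough THP-splitting microbenchmark sketch.  Each 2MB region is
 * faulted in (as a transparent huge page, assuming THP is enabled),
 * then one 4KB base page inside it is unmapped, forcing the kernel
 * to split that huge page.
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)
#define NR_HPAGES	1000UL

int main(void)
{
	size_t len = NR_HPAGES * HPAGE_SIZE;
	struct timeval start, end;

	/* Over-allocate so the region can be aligned to a 2MB boundary. */
	char *raw = mmap(NULL, len + HPAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	char *buf = (char *)(((uintptr_t)raw + HPAGE_SIZE - 1) &
			     ~(HPAGE_SIZE - 1));

	madvise(buf, len, MADV_HUGEPAGE);

	/* Touch each 2MB region so it gets backed by a huge page. */
	for (size_t i = 0; i < NR_HPAGES; i++)
		memset(buf + i * HPAGE_SIZE, 0, HPAGE_SIZE);

	/* Unmapping one base page inside each huge page forces a split. */
	gettimeofday(&start, NULL);
	for (size_t i = 0; i < NR_HPAGES; i++)
		munmap(buf + i * HPAGE_SIZE, 4096);
	gettimeofday(&end, NULL);

	printf("split %lu huge pages in %ld us\n", NR_HPAGES,
	       (end.tv_sec - start.tv_sec) * 1000000L +
	       (end.tv_usec - start.tv_usec));
	return 0;
}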
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 mm/huge_memory.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e60837d..be84c71 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1744,6 +1744,8 @@ static int __split_huge_page_map(struct page *page,
 	if (pmd) {
 		pgtable = pgtable_trans_huge_withdraw(mm, pmd);
 		pmd_populate(mm, &_pmd, pgtable);
+		if (pmd_write(*pmd))
+			BUG_ON(page_mapcount(page) != 1);
 
 		haddr = address;
 		for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
@@ -1753,8 +1755,6 @@ static int __split_huge_page_map(struct page *page,
 			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 			if (!pmd_write(*pmd))
 				entry = pte_wrprotect(entry);
-			else
-				BUG_ON(page_mapcount(page) != 1);
 			if (!pmd_young(*pmd))
 				entry = pte_mkold(entry);
 			if (pmd_numa(*pmd))
-- 
1.7.1