Date: 2017-10-09
From: Aaron Lu <aaron.lu@intel.com>
Subject: [PATCH] page_alloc.c: inline __rmqueue()
__rmqueue() is called by rmqueue_bulk() and rmqueue() under zone->lock,
and that lock can be heavily contended by memory-intensive applications.
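
For context, the call pattern looks roughly like this (an abridged sketch
of rmqueue_bulk() from this era of mm/page_alloc.c; list handling and
statistics are elided, so treat it as illustrative rather than the exact
source):

	static int rmqueue_bulk(struct zone *zone, unsigned int order,
				unsigned long count, struct list_head *list,
				int migratetype, bool cold)
	{
		int i, alloced = 0;

		spin_lock(&zone->lock);
		for (i = 0; i < count; ++i) {
			/* one call per page, all under the contended lock */
			struct page *page = __rmqueue(zone, order, migratetype);

			if (unlikely(page == NULL))
				break;
			/* ... move page onto list and bump alloced ... */
		}
		spin_unlock(&zone->lock);
		return alloced;
	}

Every cycle saved inside that loop shortens the zone->lock hold time,
which is why removing the per-page call overhead is visible under
contention.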

Since __rmqueue() is a small function, inlining it can save us some time.
With the will-it-scale/page_fault1/process benchmark, when using nr_cpu
processes to stress the buddy allocator:

On a 2-socket Intel Skylake machine:

   base    %change     head
  77342     +6.3%     82203   will-it-scale.per_process_ops

On a 4-socket Intel Skylake machine:

   base    %change     head
  75746     +4.6%     79248   will-it-scale.per_process_ops

This patch adds the inline keyword to __rmqueue().

Signed-off-by: Aaron Lu <aaron.lu@intel.com>
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e309ce4a44a..c9605c7ebaf6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2291,7 +2291,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
  */
-static struct page *__rmqueue(struct zone *zone, unsigned int order,
+static inline struct page *__rmqueue(struct zone *zone, unsigned int order,
 				int migratetype)
 {
 	struct page *page;
--
2.13.6