Subject: Re: [RFC][mmotm][PATCH] percpu mm struct counter cache
On Fri, 4 Dec 2009 09:49:17 +0900
Minchan Kim <minchan.kim@gmail.com> wrote:

> On Fri, Dec 4, 2009 at 9:18 AM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> > Making the read side of this counter slower means making ps or top slower.
> > IMO, ps and top are too slow already, and making them even slower is very bad.
>
> Also, we don't want to introduce a regression on no-split-ptl-lock systems.
> Right now, the tick update cost is zero on no-split-ptl-lock systems.
yes.
> but the task-switching cost is increased a little because of the compare instruction.
Ah,

+#if USE_SPLIT_PTLOCKS
+extern void prepare_mm_switch(struct task_struct *prev,
+			      struct task_struct *next);
+#else
+static inline void prepare_mm_switch(struct task_struct *prev,
+				      struct task_struct *next)
+{
+}
+#endif

makes the cost zero in that case.
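
For reference, a rough sketch of what the USE_SPLIT_PTLOCKS side could look
like. This is only an illustration, not the actual patch: the structure name
pcp_mm_cache, its fields, and the fold-back logic are assumptions about how
a per-cpu counter cache would be flushed on a real mm switch.

struct pcp_mm_cache {
	struct mm_struct *mm;	/* mm the cached deltas belong to */
	long anon_rss;		/* cached delta, folded back on switch */
	long file_rss;
};

static DEFINE_PER_CPU(struct pcp_mm_cache, curr_mmc);

void prepare_mm_switch(struct task_struct *prev, struct task_struct *next)
{
	struct pcp_mm_cache *cache = &__get_cpu_var(curr_mmc);

	if (prev->mm == next->mm)	/* the compare Minchan mentions */
		return;

	if (cache->mm) {
		/* fold the cached deltas back into the mm they belong to */
		add_mm_counter(cache->mm, anon_rss, cache->anon_rss);
		add_mm_counter(cache->mm, file_rss, cache->file_rss);
		cache->anon_rss = 0;
		cache->file_rss = 0;
	}
	cache->mm = next->mm;
}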

> As you know, task switching is a rather costly function.
yes.

> I'm concerned about the additional overhead on no-split-ptl-lock systems.
yes, that is what the conditional above addresses.
> I think we can remove the overhead completely.
>

I have another version of this patch, which switches curr_mmc.mm
lazily at page fault time, but it requires some complicated rules.
I'll try that approach again rather than adding hooks to the context switch.
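
To make the lazy idea concrete, the fault path could retarget the cache on
demand, roughly like below (again just a sketch on top of the pcp_mm_cache
above; the "complicated rules" are exactly what this glosses over, e.g.
guaranteeing that the old cache->mm is still alive when its deltas are
folded back).

/* e.g. called early in the fault path, before the rss counters are touched */
static void curr_mmc_lazy_switch(struct mm_struct *mm)
{
	struct pcp_mm_cache *cache = &get_cpu_var(curr_mmc);

	if (cache->mm != mm) {
		if (cache->mm) {
			/* fold deltas cached for whatever mm came before */
			add_mm_counter(cache->mm, anon_rss, cache->anon_rss);
			add_mm_counter(cache->mm, file_rss, cache->file_rss);
			cache->anon_rss = 0;
			cache->file_rss = 0;
		}
		cache->mm = mm;	/* retarget the cache to the faulting mm */
	}
	put_cpu_var(curr_mmc);
}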

BTW, I'm wondering whether to export "curr_mmc" to other files. There may
be more information that would be nice to cache per cpu+mm.
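
If it were exported, the header side would probably amount to no more than
this (placement and the final name are of course open):

/* e.g. in include/linux/mm.h, once curr_mmc stops being file-local */
#if USE_SPLIT_PTLOCKS
DECLARE_PER_CPU(struct pcp_mm_cache, curr_mmc);
#endif

and the DEFINE_PER_CPU in the sketch above would lose its "static", so that
other files could hang additional per cpu+mm data off the same structure.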

Thanks,
-Kame


