From: Andy Lutomirski
Date: Wed, 6 Jun 2018
Subject: Re: [PATCH] x86,switch_mm: skip atomic operations for init_mm
On Wed, Jun 6, 2018 at 12:00 PM Rik van Riel <riel@surriel.com> wrote:
>
> On Wed, 2018-06-06 at 11:17 -0700, Andy Lutomirski wrote:
> > On Sat, Jun 2, 2018 at 6:38 PM Rik van Riel <riel@surriel.com> wrote:
> > >
> > > On Sun, 2018-06-03 at 00:51 +0000, Song Liu wrote:
> > >
> > > > > Just to check: in the workload where you're seeing this
> > > > > problem,
> > > > > are
> > > > > you using an mm with many threads? I would imagine that, if
> > > > > you
> > > > > only
> > > > > have one or two threads, the bit operations aren't so bad.
> > > >
> > > > Yes, we are running netperf/netserver with 300 threads. We don't
> > > > see this much overhead with real workloads.
> > >
> > > We may not, but there are some crazy workloads out
> > > there in the world. Think of some Java programs with
> > > thousands of threads, causing a million context
> > > switches a second on a large system.
> > >
> > > I like Andy's idea of having one cache line with
> > > a cpumask per node. That seems like it will have
> > > fewer downsides for tasks with fewer threads running
> > > on giant systems.
> > >
> > > I'll throw out the code I was working on, and look
> > > into implementing that :)
> > >
> >
> > I'm not sure you should throw your patch out. It's a decent idea,
> > too.
>
> Oh, I still have it saved, but the cpumask per
> NUMA node looks like it could have a big impact,
> with less guesswork and fewer side effects.
>

Also, even with your other patch, we'd still have a win from the
improved data structure -- switching back and forth between init_mm
and something else is definitely not the only time we hammer the
cpumask cache lines.
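For anyone following along at home, here is a minimal sketch of what a
per-node mm cpumask could look like. This is only an illustration of the
idea under discussion, not the actual patch; CPUS_PER_NODE and
node_first_cpu() are made-up placeholders, and real code would have to
cope with CPU numbers that aren't contiguous within a node.

#include <linux/bitops.h>	/* set_bit(), BITS_TO_LONGS() */
#include <linux/cache.h>	/* ____cacheline_aligned_in_smp */
#include <linux/numa.h>		/* MAX_NUMNODES */
#include <linux/topology.h>	/* cpu_to_node() */

/*
 * One bitmap chunk per NUMA node, each padded out to its own
 * cache line so stores from different nodes never collide.
 */
struct mm_cpumask_node {
	/* CPUS_PER_NODE is a hypothetical constant */
	unsigned long bits[BITS_TO_LONGS(CPUS_PER_NODE)]
		____cacheline_aligned_in_smp;
};

struct mm_cpumask_split {
	struct mm_cpumask_node node[MAX_NUMNODES];
};

static inline void mm_cpumask_split_set_cpu(struct mm_cpumask_split *m,
					    int cpu)
{
	int nid = cpu_to_node(cpu);
	int bit = cpu - node_first_cpu(nid);	/* hypothetical helper */

	/*
	 * Still an atomic RMW, but the cache line only bounces
	 * between CPUs on the same node.
	 */
	set_bit(bit, m->node[nid].bits);
}

The point is that on every context switch a CPU only dirties the line
belonging to its own node, which is where the win should come from on
the big NUMA systems Rik is worried about.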
