Subject: Re: [PATCH] memcg: remove unneeded preempt_disable
On Thu, Aug 18, 2011 at 10:26:58AM -0400, Valdis.Kletnieks@vt.edu wrote:
> On Thu, 18 Aug 2011 11:38:00 +0200, Johannes Weiner said:
>
> > Note that on non-x86, these operations themselves actually disable and
> > reenable preemption each time, so you trade a pair of add and sub on
> > x86
> >
> > - preempt_disable()
> > __this_cpu_xxx()
> > __this_cpu_yyy()
> > - preempt_enable()
> >
> > with
> >
> > preempt_disable()
> > __this_cpu_xxx()
> > + preempt_enable()
> > + preempt_disable()
> > __this_cpu_yyy()
> > preempt_enable()
> >
> > everywhere else.
>
> That would be an unexpected race condition on non-x86, if you expected _xxx and
> _yyy to be done together without a preempt between them. Would take mere
> mortals forever to figure that one out. :)

That should be fine; we don't require the two counters to be perfectly
coherent with respect to each other, which is the justification for
this optimization in the first place.

But on non-x86, incrementing a single per-cpu counter is itself a
read-modify-write sequence, which is made atomic by disabling
preemption around it.
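
For reference, the generic (non-x86) fallbacks spell this out
directly. Below is a simplified sketch modeled on the
include/linux/percpu.h wrappers; names and details are approximate,
not the verbatim kernel source:

	/*
	 * this_cpu_add() must make the read-modify-write safe
	 * against preemption, so the generic fallback brackets
	 * the raw access (sketch, not verbatim):
	 */
	#define this_cpu_add(pcp, val)				\
	do {							\
		preempt_disable();				\
		*__this_cpu_ptr(&(pcp)) += (val);		\
		preempt_enable();				\
	} while (0)

	/*
	 * __this_cpu_add() assumes the caller already holds off
	 * preemption (or does not care), so it is just the raw
	 * read-modify-write:
	 */
	#define __this_cpu_add(pcp, val)			\
	do {							\
		*__this_cpu_ptr(&(pcp)) += (val);		\
	} while (0)

On x86, this_cpu_add() instead compiles to a single gs-prefixed
instruction, so no preemption bracket is needed at all; the generic
version above is what everyone else gets.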

