 
Date: 2006-05-19
From: Dave Chinner
Subject: Re: [PATCH 01/03] Cpuset: might sleep checking zones allowed fix
On Thu, May 18, 2006 at 08:12:07PM -0700, Paul Jackson wrote:
> David wrote:
> > Basically, Case B falls back to case A when the cpuset is
> > full. So my question really is whether we need to attempt
> > to allocate within the cpuset for GFP_ATOMIC because
> > most of the time the local node will be within the cpuset
> > anyway....
> >
> > So that's what led me to ask this - is there really a
> > noticeable distinction between A and B, or is it just
> > cluttering up the code with needless complex logic?
>
> Perhaps I'm missing something, but that's what I thought you were
> asking. I thought I had already recognized and responded to your
> question, with an answer sympathetic to your concerns, and a
> possible patch to address them.

Perhaps we both are :/

email:
a communication medium where two people can agree
with each other but be unaware of this fact.

;)

FWIW, I don't play with cpusets much; I mostly come across them when
a cpuset goes OOM and the I/O subsystem hangs somewhere in the
filesystem, block or driver layers. Those hangs are typically due to
problems with GFP_ATOMIC allocations, so my POV is probably
different to yours.
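To make the A/B distinction concrete, the shape of the code I have
in mind is something like this - an illustrative sketch only, not
the real __alloc_pages(), and try_zone() is a made-up stand-in for
the watermark checks and rmqueue:

/*
 * Sketch of the two behaviours being compared.  Case A: a
 * GFP_ATOMIC allocation ignores the cpuset and takes the first
 * zone with free pages.  Case B: it first tries zones inside the
 * cpuset, then falls back to any zone - i.e. back to Case A
 * whenever the cpuset is full.
 */
static struct page *atomic_alloc_sketch(struct zonelist *zonelist,
					gfp_t gfp_mask)
{
	struct zone **z;
	struct page *page;

	/* Case B, first pass: stay inside the cpuset */
	for (z = zonelist->zones; *z; z++) {
		if (!cpuset_zone_allowed(*z, gfp_mask))
			continue;
		page = try_zone(*z, gfp_mask);
		if (page)
			return page;
	}

	/* Case A, and Case B's fallback: any zone at all */
	for (z = zonelist->zones; *z; z++) {
		page = try_zone(*z, gfp_mask);
		if (page)
			return page;
	}
	return NULL;
}

If the local node is almost always inside the cpuset, the first
loop nearly always succeeds on exactly the zone the second loop
would have picked, which is why I question whether the distinction
buys us anything.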

> David wrote:
> > Why not simply check this in __cpuset_zone_allowed() and return
> > true? We shouldn't put the burden of getting this right on the
> > callers when it is something internal to the cpuset workings....
>
> The callers are already conscious of whether or not they can wait.
> For all of the callers of cpuset_zone_allowed() except __alloc_pages,
> they can very well wait, and such a check is noise.

5 of the 6 other callers use __GFP_HARDWALL so won't ever sleep.
Given that 4 of the 5 are in memory reclaim paths, that's probably
a good thing. To an outsider, it looks like __GFP_HARDWALL is
being used to ensure we don't sleep (i.e. GFP_ATOMIC semantics),
and this is one of the angles my reasoning comes from.
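Concretely, what I was suggesting is something like the sketch
below - not a tested patch, and walk_up_cpusets() is a hypothetical
stand-in for the existing callback_mutex walk up the cpuset
hierarchy - where the atomic case is answered before anything that
can sleep:

/*
 * Sketch of the suggestion: answer atomic allocations before we
 * can reach anything that sleeps, so callers never need to know
 * __cpuset_zone_allowed() can block.
 */
int __cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
{
	int node = z->zone_pgdat->node_id;

	if (in_interrupt())
		return 1;
	if (node_isset(node, current->mems_allowed))
		return 1;
	if (!(gfp_mask & __GFP_WAIT))	/* GFP_ATOMIC: allow anywhere */
		return 1;
	if (gfp_mask & __GFP_HARDWALL)	/* hardwall request: stop here */
		return 0;
	/* May sleep from here on: take callback_mutex and scan up */
	return walk_up_cpusets(z, gfp_mask);
}

That keeps the GFP_ATOMIC special case internal to the cpuset
code, which was my point about not putting the burden on the
callers.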

> In some programming contexts, I add redundancy for robustness, and
> in some contexts I minimize redundancy for lean and mean code. The
> kernel tends to be the latter, especially on important code paths.

I often wish for better robustness while deep in the guts of
a crash dump from a machine that's gone OOM and hung.

Cheers,

Dave.
--
Dave Chinner
R&D Software Engineer
SGI Australian Software Group