Subject: Re: 2.6.14-rc2-mm1
>> I must be being particularly dense today ... but:
>>
>> pcp->high = batch / 2;
>>
>> Looks like half the batch size to me, not the same?
>
> pcp->batch = max(1UL, batch/2); is the line of code that sets the
> batch value for the cold pcp list. batch is just a number that we
> computed earlier based on some parameters.

Ah, OK, so I am being dense. Fair enough. But if there's a reason to do
that max, perhaps:

pcp->batch = max(1UL, batch/2);
pcp->high = pcp->batch;

would be more appropriate? The tradeoff is more frequent dump / fill against
less fragmentation, I suppose (at least if we don't refill using higher-order
allocs ;-)), which seems fair enough.
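
For concreteness, a sketch of the cold-list half of setup_pageset() with that
change (field names as I read them in mm/page_alloc.c - illustrative only, not
a tested patch):

pcp = &p->pcp[1];               /* cold list */
pcp->count = 0;
pcp->low = 0;                   /* never actively refilled */
pcp->batch = max(1UL, batch / 2);
pcp->high = pcp->batch;         /* drain as soon as we exceed one batch,
                                 * so cold pages go back to the buddy
                                 * lists quickly and can coalesce */
INIT_LIST_HEAD(&pcp->list);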

>> > In general, I think if a specific higher order ( > 0) request fails that
>> > has GFP_KERNEL set then at least we should drain the pcps.
>>
>> Mmmm. So every time we fork a process with 8K stacks, or allocate a frame
>> for jumbo ethernet, or NFS, you want to drain the lists? That seems to
>> wholly defeat the purpose.
>
> Not every time there is a request for higher-order pages. That surely
> would defeat the purpose of pcps. My suggestion is only to drain
> when the global pool is not able to service the request. In the
> pathological case where higher-order and zero-order requests are
> alternating, you could get thrashing, with pages moving onto the pcp
> lists only to move straight back to the global list.

OK, that seems fair enough. But there are multiple "harder and harder" attempts
within __alloc_pages to do that ... which one are you going for? Just before we
OOM / fail the alloc? That'd be hard to argue with, though I'm unsure what the
locking is to dump out other CPUs' queues - are you going to send a global IPI
and ask them to do it? That would seem to race against the lists refilling (as
you mention).
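
Something like the below is what I'm picturing - purely a sketch, loosely
modelled on the current __alloc_pages flow, and drain_all_pcps() is a made-up
name for whatever IPI-driven drain this would need (the existing
__drain_pages() only handles the local CPU with interrupts off):

/* late in __alloc_pages(), when a sleeping, order > 0 request is
 * about to fail / trigger the OOM killer:
 */
if (order > 0 && (gfp_mask & __GFP_WAIT)) {
	/* hypothetical: ask every CPU to push its pcp pages back to
	 * the buddy lists so they can coalesce into higher orders */
	drain_all_pcps();
	goto rebalance;         /* one more pass over the freelists */
}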

>> Could you elaborate on what the benefits were from this change in the
>> first place? Some page colouring thing on ia64? It seems to have way more
>> downside than upside to me.
>
> The original change was to try to allocate a higher-order page to
> service a batch-size bulk request, in the hope that better physical
> contiguity would spread the data better across big caches.

OK ... but it has an impact on fragmentation. How much benefit are you
getting?
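
Just so we're talking about the same thing, my reading of that change -
reconstructed from your description, not the actual diff - is roughly this:

/*
 * Refill the pcp list with one order-N block instead of 2^N separate
 * order-0 pages, so the batch is physically contiguous.  Tail-page
 * setup (page counts etc.) is glossed over; the real code would need
 * the usual per-page prep before handing each page out.
 */
static int rmqueue_bulk(struct zone *zone, unsigned long count,
			struct list_head *list)
{
	unsigned int order = fls(count) - 1;    /* largest 2^order <= count */
	unsigned long flags;
	struct page *page;
	int i, allocated = 0;

	spin_lock_irqsave(&zone->lock, flags);
	page = __rmqueue(zone, order);          /* try one contiguous block */
	if (page) {
		for (i = 0; i < (1 << order); i++) {
			list_add_tail(&page[i].lru, list);
			allocated++;
		}
	} else {
		for (i = 0; i < count; i++) {   /* fall back to singles */
			page = __rmqueue(zone, 0);
			if (!page)
				break;
			list_add_tail(&page->lru, list);
			allocated++;
		}
	}
	spin_unlock_irqrestore(&zone->lock, flags);
	return allocated;
}

If that's the shape of it, the fragmentation cost comes from pulling an
order-N block out of the buddy lists on every refill even when order-0 pages
would have done.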

M.