Subject: Re: pte_chain_mempool-2.5.27-1

William Lee Irwin III wrote:
>
> This patch, in order to achieve more reliable and efficient allocation,
> converts the pte_chain freelist to use mempool, which in turn uses the
> slab allocator as a front-end.

Using slab seems like a good idea to me. It gives us the per-cpu
freelists and GC for free.

mempool? Guess so.

mempool is really designed for things like IO request structures,
BIOs, etc: objects which are guaranteed to have short lifecycles,
and which therefore make the "wait for some objects to be freed"
loop in mempool_alloc() reliable.
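
The canonical pattern looks something like this (just a sketch - the
struct and function names are invented, and the callbacks use the
2.5-era int-gfp_mask signatures): the object is taken from the pool on
the submission side and handed back from the completion side, so anyone
sleeping in mempool_alloc() gets woken within one request's lifetime.

/*
 * Sketch only: "io_req" and friends are made-up names, not anything
 * in the tree.  Allocation happens at submission time and the object
 * goes back into the pool from the completion path, so the wait in
 * mempool_alloc() is bounded by the life of one in-flight request.
 */
struct io_req {
	int dummy;		/* whatever the request needs */
};

static kmem_cache_t *io_req_cache;
static mempool_t *io_req_pool;

static void *io_req_pool_alloc(int gfp_mask, void *pool_data)
{
	return kmem_cache_alloc(io_req_cache, gfp_mask);
}

static void io_req_pool_free(void *element, void *pool_data)
{
	kmem_cache_free(io_req_cache, element);
}

/* submission path: may sleep briefly in mempool_alloc(), never forever */
static struct io_req *get_io_req(void)
{
	return mempool_alloc(io_req_pool, GFP_NOIO);
}

/* completion path: runs soon after submission, waking any waiters */
static void put_io_req(struct io_req *req)
{
	mempool_free(req, io_req_pool);
}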

However, when mempool went in, a bunch of developers (including
myself) went "oh goody" and reused mempool to add some buffering
to things like radix tree nodes, buffer_heads, pte_chains, etc.

This is inappropriate, because those objects have a very different
lifecycle.

For example, back when swap was using buffer_heads, I was getting
tasks locked up in mempool_alloc(GFP_NOIO), waiting for buffer_heads
to come free. But no buffer_heads were being freed, because there was
no memory pressure any more - somebody had just done a truncate() or
an exit(), there was plenty of free memory, nobody was calling
try_to_free_buffers(), and the mempool_alloc() caller was left in an
indefinite sleep, waiting for someone to free up a buffer_head.

We could fix this problem by changing the schedule() in mempool_alloc()
into a schedule_timeout(not much), but Ingo didn't seem to like that.
Perhaps because we're using mempool in ways for which it was not
designed.
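
For concreteness, the change would be roughly this (a hand-waved sketch
of the tail of mempool_alloc()'s retry loop, not the actual 2.5 code;
"wait" is the usual DECLARE_WAITQUEUE(wait, current), and the timeout
value is arbitrary):

	/* no reserved elements left and pool->alloc() just failed... */
	add_wait_queue_exclusive(&pool->wait, &wait);
	set_current_state(TASK_UNINTERRUPTIBLE);

	/*
	 * was: schedule();  - i.e. sleep until mempool_free() wakes us,
	 * which may never happen if memory was freed by some other path.
	 */
	schedule_timeout(HZ / 10);	/* "not much" */

	set_current_state(TASK_RUNNING);
	remove_wait_queue(&pool->wait, &wait);
	goto repeat_alloc;		/* retry pool->alloc() and the reserve */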


> +	pte_chain_pool = mempool_create(16*1024,
> +				pte_chain_pool_alloc,
> +				pte_chain_pool_free,
> +				NULL);
> +

Be aware that mempool kmallocs a contiguous chunk of element
pointers. This statement is asking for a
kmalloc(16384 * sizeof(void *)) - 64k with 4-byte pointers, 128k
with 8-byte ones. It will work, but only just.

How did you engineer the size of this pool, btw? In the
radix_tree code, we made the pool enormous. It was effectively
halved in size when the ratnodes went to 64 slots, but I still
have the fun task of working out what the pool size should really
be. In retrospect it would have been smarter to make it really
small and then increase it later in response to tester feedback.
Suggest you do that here.
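
Something like this, say (the 128 is plucked from the air, and the
runtime grow via mempool_resize() is only a suggestion - check the
exact signature, I'm quoting it from memory):

	pte_chain_pool = mempool_create(128,	/* small to start with */
				pte_chain_pool_alloc,
				pte_chain_pool_free,
				NULL);
	if (!pte_chain_pool)
		panic("pte_chain: cannot create mempool");

	/*
	 * Later, if testers report stalls in mempool_alloc() under rmap
	 * load, the reserve can be bumped at runtime:
	 *
	 *	mempool_resize(pte_chain_pool, some_bigger_number, GFP_KERNEL);
	 */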

