Subject: Re: [PATCHSET] block, mempool, percpu: implement percpu mempool and fix blkcg percpu alloc deadlock
On Thu, Dec 22, 2011 at 05:14:32PM -0800, Andrew Morton wrote:

[..]
> Separately...
>
> Mixing mempools and percpu_alloc() in the proposed fashion seems a
> pretty poor fit. mempools are for high-frequency low-level allocations
> which have key characteristics: there are typically a finite number of
> elements in flight and we *know* that elements are being freed in a
> timely manner.
>
> This doesn't fit with percpu_alloc(), which is a very heavyweight
> operation requiring GFP_KERNEL and it doesn't fit with
> blkio_group_stats_cpu because blkio_group_stats_cpu does not have the
> "freed in a timely manner" behaviour.
>
> To resolve these things you've added the workqueue to keep the pool
> populated, which turns percpu_mempool into a quite different concept
> which happens to borrow some mempool code (not necessarily a bad thing).
> This will result in some memory wastage, keeping that pool full.

Tejun seems to have created one global pool for all the devices, with a
minimum of 8 objects. Keeping just 8 objects aside for the whole system
is not a significant memory wastage, IMHO. This is much better than
pre-allocating per-cpu stats for every possible (cgroup, device) pair
irrespective of whether IO is actually happening on that pair.

>
> More significantly, it's pretty unreliable: if the allocations outpace
> the kernel thread's ability to refill the pool, all we can do is to
> wait for the kernel thread to do some work. But we're holding
> low-level locks while doing that wait, which will block the kernel
> thread. Deadlock.

Sorry, I did not understand this point. In the IO path we don't wait
for the kernel thread to do any work. If there are no elements in the pool,
we simply return without creating the group, and the IO is accounted to the
root group. When new IO comes in, we try again, and if by that time the
worker thread has succeeded in refilling the pool, we allocate the group.
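
Something like the sketch below is what I mean. It is illustrative only:
the pool, work item and helper names (blkg_stats_pool,
blkg_stats_refill_work, percpu_mempool_alloc(), blkcg_root_group(),
blkg_alloc()) are placeholders and not the actual blk-cgroup or patchset
interfaces. The point is only that nothing in this path sleeps or waits
for the refill worker.

	/*
	 * Illustrative sketch only -- names are made up and do not match
	 * the real code.  Nothing here sleeps or waits on the worker.
	 */
	static struct blkio_group *blkg_lookup_create(struct blkio_cgroup *blkcg,
						      struct request_queue *q)
	{
		struct blkio_group *blkg;
		void *stats_cpu;

		blkg = blkg_lookup(blkcg, q);
		if (blkg)
			return blkg;			/* already exists */

		/*
		 * The queue lock is held, so the per-cpu stats area must
		 * come from the pre-filled pool via a non-sleeping alloc.
		 */
		stats_cpu = percpu_mempool_alloc(blkg_stats_pool, GFP_NOWAIT);
		if (!stats_cpu) {
			/*
			 * Pool is empty: kick the worker to refill it and
			 * charge this IO to the root group.  A later request
			 * will retry once the pool has been replenished.
			 */
			schedule_work(&blkg_stats_refill_work);
			return blkcg_root_group(q);
		}

		/* group itself comes from an atomic allocation */
		blkg = blkg_alloc(blkcg, q, stats_cpu);
		return blkg;
	}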

So in a tight memory situation we will not account the IO to the actual
cgroup but to the root cgroup. Once the memory situation eases, IO will be
accounted to the right cgroup.

I can't see any deadlock happening. Did I miss something?

Thanks
Vivek

