Subject: Re: Block I/O tracking (was Re: [PATCH 3/9] bio-cgroup controller)
On Fri, Apr 17, 2009 at 08:27:25PM +0900, Fernando Luis Vázquez Cao wrote:
> Ryo Tsuruta wrote:
>> Hi,
>>
>> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>> Date: Fri, 17 Apr 2009 11:24:33 +0900
>>
>>> On Fri, 17 Apr 2009 10:49:43 +0900
>>> Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> wrote:
>>>
>>>> Hi,
>>>>
>>>> I have a few questions.
>>>> - I have not yet fully understood how your controller uses
>>>> bio_cgroup. If my view is wrong, please tell me.
>>>>
>>>> o In my view, bio_cgroup's implementation strongly depends on
>>>> page_cgroup's. Could you explain why this functionality
>>>> needs to be implemented as a cgroup subsystem? Isn't using
>>>> page_cgroup and implementing tracking APIs enough?
>>> I'll definitely "Nack" adding full bio-cgroup members to page_cgroup.
>>> page_cgroup is currently 40 bytes (on 64-bit arches), and all of them are
>>> allocated at boot time as memmap. (And adding members to struct page is
>>> much harder ;)
>>>
>>> IIUC, the "tracking bio" feature is only necessary for pages under I/O,
>>> so I think it's much better to add misc. information to struct bio, not
>>> to the page. But if people want to add a "small hint" to struct page or
>>> struct page_cgroup for tracking buffered I/O, I'll help as much as I can.
>>> Using "unused bits" in page_cgroup->flags may be a choice with no overhead.
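
Just to make the "small hint" idea concrete, such a bit in
page_cgroup->flags might look roughly like this; the bit name, value and
helpers are hypothetical, not from any posted patch:

	/* Hypothetical flag bit, appended to the existing PCG_* bits;
	 * the value assumes the low bits are already taken. */
	enum {
		PCG_BLKIO_TRACKED = 8,
	};

	static inline void set_page_cgroup_blkio_tracked(struct page_cgroup *pc)
	{
		set_bit(PCG_BLKIO_TRACKED, &pc->flags);
	}

	static inline int page_cgroup_blkio_tracked(struct page_cgroup *pc)
	{
		return test_bit(PCG_BLKIO_TRACKED, &pc->flags);
	}

Since the bit lives in the existing flags word, there is no extra per-page
memory cost, which is the "no overhead" property mentioned above.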
>>
>> In the case where the bio-cgroup data is allocated dynamically:
>> - Sometimes quite a large amount of memory gets marked dirty,
>> in which case it requires more kernel memory than the
>> current implementation does.
>> - The operation is expensive due to memory allocations and exclusive
>> control such as spinlocks.
>>
>> In the case where the bio-cgroup data is allocated by delayed
>> allocation:
>> - It makes the operation complicated and expensive, because
>> sometimes a bio has to be created in the context of another
>> process, such as during aio and swap-out operations.
>>
>> I'd prefer a simple and lightweight implementation. bio-cgroup only
>> needs 4 bytes per page, unlike the memory controller. The reason
>> bio-cgroup chose this approach is to minimize the overhead.
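
To make the size argument concrete: the 4 bytes would amount to a 32-bit
cgroup id recorded per page, e.g. appended to page_cgroup roughly like this
(the field name is illustrative, not taken from the actual patches):

	struct page_cgroup {
		unsigned long flags;
		struct mem_cgroup *mem_cgroup;
		struct page *page;
		struct list_head lru;		/* per-cgroup LRU list */
		u32 blkio_cgroup_id;		/* the extra 4 bytes bio-cgroup needs */
	};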
>
> Elaborating on Yoshikawa-san's comment, I would like to propose a
> generic I/O tracking mechanism that is not tied to all the cgroup
> paraphernalia. This approach has several advantages:
>
> - By using this functionality, existing I/O schedulers (well, some
> relatively minor changes would be needed) would be able to schedule
> buffered I/O properly.
>
> - The amount of memory consumed to do the tracking could be
> optimized according to the kernel configuration (do we really
> need struct page_cgroup when the cgroup memory controller or all
> of the cgroup infrastructure has been configured out?).
>
> The I/O tracking functionality would look something like the following:
>
> - Create an API to acquire the I/O context of a certain page, which is
> cgroup independent. For discussion purposes, I will assume that the
> I/O context of a page is the io_context of the task that dirtied the
> page (this can be changed if deemed necessary, though).
>
> - When cgroups are not being used, pages would be tracked using a
> pfn-indexed array of struct io_context (à la memcg's array of
> struct page_cgroup).

mmh... this is thinking in terms of io_context instead of task or cgroup.
It is not suitable for memcg anyway, which will still require the
page_cgroup infrastructure, at least for the per-cgroup LRU list I think.
In any case, as suggested by Kamezawa, we should do our best to reduce the
size of page_cgroup, or of any equivalent structure associated with every
page descriptor.
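
For discussion purposes, the page-tracking API being proposed might look
roughly like this (names are illustrative, not from the actual patch set;
reference counting on the io_context is omitted for brevity):

	/* Hypothetical: one io_context pointer per pfn when the cgroup
	 * infrastructure is configured out. */
	extern struct io_context **page_ioc_map;

	static inline struct io_context *get_page_io_context(struct page *page)
	{
		return page_ioc_map[page_to_pfn(page)];
	}

	/* Called when a task dirties a page. */
	static inline void set_page_io_context(struct page *page,
					       struct io_context *ioc)
	{
		page_ioc_map[page_to_pfn(page)] = ioc;
	}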

>
> - When cgroups are activated but the memory controller is not, we
> would have a pfn-indexed array of struct blkio_cgroup, which would
> have both a pointer to the corresponding io_context of the page and a
> reference to the cgroup it belongs to (most likely using css_id). The
> API offered by the I/O tracking mechanism would be extended so that
> the kernel can easily obtain not only the per-task io_context but also
> the cgroup a certain page belongs to. Please notice that by doing this
> we have all the information we need to schedule buffered I/O both at
> the cgroup-level and the task-level. From the memory usage point of
> view, memory controller-specific bits would be gone and to top it all
> we save one indirection level (since struct page_cgroup would be out
> of the picture).
>
> - When the memory controller is active we would have the
> pfn-indexed array of struct page_cgroup we have now, plus a
> reference to the corresponding cgroup and io_context (yes, I
> still want to do proper scheduling of buffered I/O within a
> cgroup).

Have you considered what happens if multiple cgroup subsystems (io-throttle,
memcg, etc.) want to use this feature at the same time? How would we store
references to many different cgroup subsystems?
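
To make the cgroups-without-memcg case concrete, the per-pfn record being
described might look something like this sketch (field types are assumptions;
a css id fits in 16 bits given CSS_ID_MAX):

	/* One record per pfn: enough to resolve both the owning cgroup
	 * and the io_context of the task that dirtied the page. */
	struct blkio_cgroup {
		unsigned short css_id;		/* cgroup of the dirtier, via css_id() */
		struct io_context *ioc;		/* io_context of the dirtying task */
	};

If several subsystems all need a reference, storing one css id per interested
subsystem rather than pointers would keep the per-page footprint down, but
that is exactly the open question above.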

>
> - Finally, since a bio entering the block layer can generate additional
> bios, it is necessary to pass the I/O context information of the original
> bio down to the new bios. For that, stacking devices such as dm and
> others of that ilk will have to be modified. To improve performance, I/O
> context information would be cached in bios (to achieve this we have
> to ensure that all bios that enter the block layer have the right I/O
> context information attached to them).

This is a very interesting feature IMHO. AFAIK, at the moment only
dm-ioband, due to its dm nature, is able to define rules for logical
devices (LVM, software RAID, etc.).
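
A sketch of the propagation step for stacking drivers (the bio field and
helper are hypothetical, not an existing kernel API):

	/* Hypothetical: carry cached tracking info from the original bio
	 * to a clone created by a stacking driver (dm, md, ...). */
	static inline void bio_copy_io_context(struct bio *dst,
					       const struct bio *src)
	{
		dst->bi_io_context = src->bi_io_context;	/* hypothetical field */
	}

Every place a stacking driver clones or splits a bio would call something
like this, so the context attached at submission time survives all the way
down to the physical device.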

>
> Yoshikawa-san and myself have been working on a patch set that
> implements just this, and we have reached the point where the kernel
> does not panic right after booting :), so we will be sending patches soon
> (hopefully this weekend).

Good! Curious to see this patchset ;).

Thanks,
-Andrea
