Subject: Re: [PATCH] aio: Add command to wait completion of all requests
On 13.06.2017 18:26, Benjamin LaHaise wrote:
> On Tue, Jun 13, 2017 at 06:11:03PM +0300, Kirill Tkhai wrote:
> ...
>> The functionality I implemented grew from a real need and real experience.
>> We try to avoid kernel modifications where possible, but in-flight aio
>> requests are not a case where that works.
>
> What you've done only works for *your* use-case, but not in general. Like
> in other subsystems, you need to provide hooks on a per file descriptor
> basis for quiescing different kinds of file descriptors.

Which hooks do you suggest? It's possible that no file descriptor is open any
more after a request has been submitted. Do you suggest an interface to reopen
a struct file?
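
If you mean something like new per-descriptor operations, a purely hypothetical
sketch might look like the following; neither member exists in the kernel, and
the names are invented only to make the question concrete:

/*
 * Hypothetical sketch only: aio_quiesce/aio_resume do not exist; they
 * illustrate what "hooks on a per file descriptor basis" could mean.
 */
struct file_operations {
	/* ... existing members (read, write, ...) ... */

	/* Stop accepting new AIO on this file and bring in-flight
	 * requests to a state that can be checkpointed. */
	int (*aio_quiesce)(struct file *filp);

	/* Undo aio_quiesce() after the snapshot has been taken. */
	int (*aio_resume)(struct file *filp);
};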

> Your current
> patch set completely ignores things like usb gadget. You need to create
> infrastructure for restarting i/os after your checkpointing occurs, which
> you haven't put any thought into in this patchset. If you want to discuss
> how to do that, fine, but the approach in this patchset simply does not
> work in general. What happens when an aio doesn't complete or takes hours
> to complete?

The patch currently uses wait_event_interruptible(), but it's possible to
convert it to wait_event_interruptible_hrtimeout(), the way read_events()
does. That is not a fatal problem with the patch. read_events() simply waits
with a timeout; can't we do the same?
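
As a rough sketch, assuming a hypothetical helper all_requests_done() standing
in for the completion condition checked by the patch, the conversion would
follow the same pattern read_events() uses in fs/aio.c:

	/*
	 * Sketch only: all_requests_done() is a made-up stand-in for the
	 * patch's completion check.
	 */
	ktime_t until = KTIME_MAX;	/* or a timeout supplied by the caller */
	long ret;

	/* current form: interruptible, but may sleep forever */
	ret = wait_event_interruptible(ctx->wait, all_requests_done(ctx));

	/* timed form, following read_events() in fs/aio.c */
	ret = wait_event_interruptible_hrtimeout(ctx->wait,
						 all_requests_done(ctx),
						 until);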

>> Checkpointing a live system is not as easy as it seems. The approach
>> you suggested makes a whole class of live-system snapshots impossible.
>> You can't just kill a process, wait for the zombie, and then restart
>> the process: processes are connected in complex topologies of kernel
>> objects. You need to restore the pgid and sid of the process, the
>> namespaces, and shared files (CLONE_FILES and CLONE_FS). All of this
>> has to be recreated in a certain order, and there are many rules and
>> limitations. You can't just create the same process in the same place:
>> it's not merely hard, it's simply impossible. Your suggestion kills a
>> big class of use cases and is not suitable at all. You may refer to
>> the CRIU project site if you are interested (criu.org).
>
> Point.
>
>> Benjamin, could you please look at this once again? We really need
>> this functionality; it's not an idle wish. Let's discuss how we should
>> implement it, if you don't like the patch.
>>
>> There is a lot of functionality in the kernel to support the concept
>> I described. Check out the MSG_PEEK flag for receiving from a socket
>> (see unix_dgram_recvmsg()), for example. AIO is now one of the last
>> barriers to full snapshot support in CRIU.
> ...
>
> Then please start looking at the big picture and think about things other
> than short lived disk i/o. Without some design in the infrastructure to
> handle those cases, your solution is incomplete and will potentially leave
> us with complex and unsupportable semantics that don't actually solve the
> problem you're trying to solve.
>
> Some of the things to think about: you need infrastructure to restart an
> aio, which means you need some way of dumping aios that remain in flight,
> as otherwise your application will see aios cancelled during checkpointing
> that should not have been. You need to actually cancel aios. These
> details need to be addressed if checkpointing is going to be a robust
> feature that works for other than toy use-cases.

Could you please describe how cancelling aio requests will help to wait for
their completion? Is there a guarantee they can easily be queued back? I
suppose not, because there may be a memory limit or some low-level driver
limitations, depending on internal conditions.
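
To illustrate the concern in userspace terms, here is a small libaio sketch
(error handling trimmed; cancel_and_resubmit() is a made-up name, and the
checkpoint step is only a comment). A cancelled request cannot always be
resubmitted: io_cancel() fails for most drivers, and io_submit() itself can
fail, e.g. with -EAGAIN, under memory or resource limits.

#include <libaio.h>
#include <stdio.h>

static int cancel_and_resubmit(io_context_t ctx, struct iocb *iocb)
{
	struct io_event ev;
	struct iocb *list[1] = { iocb };
	int ret;

	ret = io_cancel(ctx, iocb, &ev);  /* often fails: most drivers cannot cancel */
	if (ret < 0)
		return ret;

	/* ... the snapshot would be taken here ... */

	ret = io_submit(ctx, 1, list);    /* may return -EAGAIN and lose the request */
	if (ret < 0)
		fprintf(stderr, "resubmit failed: %d\n", ret);
	return ret;
}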

Also, it does not seem good to overload aio with functionality for obtaining
the closed file descriptors of submitted requests.

Do you mean something like this, or have I misunderstood you? Could you please
make your idea more concrete?

In my view, cancelling requests does not make it possible to implement what I
described. If we can't queue a request back, it breaks snapshotting and the
user application in general.

Thanks,
Kirill
