 
    Subject: Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

    Thanks for this work; please see below.

    On Wed, Feb 19, 2020 at 06:04:24PM +0000, Anchal Agarwal wrote:
    > On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
    > > On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
    > > > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
    > > > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
    > > > > > From: Munehisa Kamata <kamatam@amazon.com>
    > > > > >
    > > > > > Add freeze, thaw and restore callbacks for PM suspend and hibernation
    > > > > > support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
    > > > > > events need to implement these xenbus_driver callbacks.
    > > > > > The freeze handler stops a block-layer queue and disconnects the
    > > > > > frontend from the backend while freeing ring_info and associated resources.
    > > > > > The restore handler re-allocates ring_info and re-connects to the
    > > > > > backend, so the rest of the kernel can continue to use the block device
    > > > > > transparently. Also, the handlers are used for both PM suspend and
    > > > > > hibernation so that we can keep the existing suspend/resume callbacks for
    > > > > > Xen suspend without modification. Before disconnecting from the backend,
    > > > > > we need to prevent any new IO from being queued and wait for existing
    > > > > > IO to complete.
    > > > >
    > > > > This is different from Xen (xenstore) initiated suspension, as in that
    > > > > case Linux doesn't flush the rings or disconnect from the backend.
    > > > Yes, AFAIK in Xen-initiated suspension the backend takes care of it.
    > >
    > > No, in Xen-initiated suspension the backend doesn't take care of flushing
    > > the rings; the frontend has a shadow copy of the ring contents and it
    > > re-issues the requests on resume.
    > >
    > Yes, I meant suspension in general, where both xenstore and the backend know
    > the system is going into suspension; I was not referring to the flushing of rings.

    The backend has no idea the guest is going to be suspended. Backend code
    is completely agnostic to suspension/resume.
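
    To illustrate the re-issue approach mentioned above, here is a rough
    sketch.  The names (struct ring_shadow, reissue_shadowed_requests) are
    made up for illustration and are not the actual blkfront structures or
    code; the blk_mq_* calls are the real block-layer API.

    #include <linux/blk-mq.h>

    /*
     * Hypothetical sketch of the "shadow copy" idea used by the existing
     * Xen-initiated resume path: the frontend remembers every request it
     * placed on the old ring and hands those requests back to blk-mq after
     * reconnecting, so they are dispatched again on the new ring.
     */
    struct ring_shadow {
            struct request *request;        /* NULL if the ring slot was free */
    };

    static void reissue_shadowed_requests(struct request_queue *q,
                                          struct ring_shadow *shadow,
                                          unsigned int nr_slots)
    {
            unsigned int i;

            for (i = 0; i < nr_slots; i++) {
                    if (!shadow[i].request)
                            continue;
                    /* Put the request back on the blk-mq requeue list... */
                    blk_mq_requeue_request(shadow[i].request, false);
            }
            /* ...and kick blk-mq so the requests are dispatched again. */
            blk_mq_kick_requeue_list(q);
    }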

    > That happens
    > in the frontend when the backend indicates that its state is Closing, and so on.
    > I may have written it in the wrong context.

    I'm afraid I'm not sure I fully understand this last sentence.

    > > > > > +static int blkfront_freeze(struct xenbus_device *dev)
    > > > > > +{
    > > > > > +	unsigned int i;
    > > > > > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
    > > > > > +	struct blkfront_ring_info *rinfo;
    > > > > > +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
    > > > > > +	unsigned int timeout = 5 * HZ;
    > > > > > +	int err = 0;
    > > > > > +
    > > > > > +	info->connected = BLKIF_STATE_FREEZING;
    > > > > > +
    > > > > > +	blk_mq_freeze_queue(info->rq);
    > > > > > +	blk_mq_quiesce_queue(info->rq);
    > > > > > +
    > > > > > +	for (i = 0; i < info->nr_rings; i++) {
    > > > > > +		rinfo = &info->rinfo[i];
    > > > > > +
    > > > > > +		gnttab_cancel_free_callback(&rinfo->callback);
    > > > > > +		flush_work(&rinfo->work);
    > > > > > +	}
    > > > > > +
    > > > > > +	/* Kick the backend to disconnect */
    > > > > > +	xenbus_switch_state(dev, XenbusStateClosing);
    > > > >
    > > > > Are you sure this is safe?
    > > > >
    > > > In my testing, running multiple fio jobs and other test scenarios running
    > > > a memory loader work fine. I did not come across a scenario that would
    > > > have failed resume due to blkfront issues, unless you can suggest some?
    > >
    > > AFAICT you don't wait for the in-flight requests to be finished, and
    > > just rely on blkback to finish processing those. I'm not sure all
    > > blkback implementations out there can guarantee that.
    > >
    > > The approach used by Xen initiated suspension is to re-issue the
    > > in-flight requests when resuming. I have to admit I don't think this
    > > is the best approach, but I would like to keep both the Xen and the PM
    > > initiated suspension using the same logic, and hence I would request
    > > that you try to re-use the existing resume logic (blkfront_resume).
    > >
    > > > > I don't think you wait for all requests pending on the ring to be
    > > > > finished by the backend, and hence you might lose requests as the
    > > > > ones on the ring would not be re-issued by blkfront_restore AFAICT.
    > > > >
    > > > AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should ensure there are no
    > > > in-use requests left on the shared ring. Also, I want to pause the queue and
    > > > flush all the pending requests in the shared ring before disconnecting from
    > > > the backend.
    > >
    > > Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
    > > finished. I guess it's fine then.
    > >
    > Ok.
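
    For reference, the usual blk-mq pairing is sketched below.  The two wrapper
    function names are made up for illustration; the blk_mq_* calls are the real
    API.  blk_mq_freeze_queue() blocks new I/O from entering the queue and waits
    for in-flight requests to complete, while blk_mq_quiesce_queue() additionally
    waits for any dispatches that are still running.

    #include <linux/blk-mq.h>

    /* Illustrative wrapper: stop the queue before tearing down the transport. */
    static void quiesce_for_teardown(struct request_queue *q)
    {
            blk_mq_freeze_queue(q);         /* block new I/O, drain in-flight */
            blk_mq_quiesce_queue(q);        /* wait for running dispatches */
    }

    /* Illustrative wrapper: resume I/O once the transport is back. */
    static void resume_after_restore(struct request_queue *q)
    {
            blk_mq_unquiesce_queue(q);      /* allow dispatch again */
            blk_mq_unfreeze_queue(q);       /* let new I/O enter the queue */
    }
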
    > > > Quiescing the queue seemed a better option here, as we want to make sure ongoing
    > > > request dispatches are totally drained.
    > > > I should admit that some of this notion is borrowed from how nvme freeze/unfreeze
    > > > is done, although it's not an apples-to-apples comparison.
    > >
    > > That's fine, but I would still like to request that you use the same
    > > logic (as much as possible) for both the Xen and the PM initiated
    > > suspension.
    > >
    > > So you either apply this freeze/unfreeze to the Xen suspension (and
    > > drop the re-issuing of requests on resume) or adapt the same approach
    > > as the Xen initiated suspension. Keeping two completely different
    > > approaches to suspension / resume on blkfront is not suitable long
    > > term.
    > >
    > I agree with you that an overhaul of Xen suspend/resume wrt blkfront is a good
    > idea; however, IMO that is work for the future and this patch series should
    > not be blocked on it. What do you think?

    It's not so much that I think an overhaul of suspend/resume in
    blkfront is needed, it's just that I don't want to have two completely
    different suspend/resume paths inside blkfront.

    So from my PoV I think the right solution is to either use the same
    code (as much as possible) as it's currently used by Xen initiated
    suspend/resume, or to also switch Xen initiated suspension to use the
    newly introduced code.

    Having two different approaches to suspend/resume in the same driver
    is a recipe for disaster IMO: it adds complexity by forcing developers
    to take into account two different suspend/resume approaches when
    there's no need for it.

    Thanks, Roger.
