Subject: Re: [PATCH V3 4/7] ublk_drv: requeue rqs with recovery feature enabled
Date: 19 Sep 2022

    On 2022/9/19 11:55, Ming Lei wrote:
    > On Tue, Sep 13, 2022 at 12:17:04PM +0800, ZiyangZhang wrote:
    >> With the recovery feature enabled, in ublk_queue_rq or task work
    >> (in exit_task_work or fallback wq), we requeue rqs instead of
    >> ending (aborting) them. Besides, no matter whether the recovery feature
    >> is enabled or disabled, we schedule monitor_work immediately.
    >>
    >> Signed-off-by: ZiyangZhang <ZiyangZhang@linux.alibaba.com>
    >> ---
    >> drivers/block/ublk_drv.c | 34 ++++++++++++++++++++++++++++++++--
    >> 1 file changed, 32 insertions(+), 2 deletions(-)
    >>
    >> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
    >> index 23337bd7c105..b067f33a1913 100644
    >> --- a/drivers/block/ublk_drv.c
    >> +++ b/drivers/block/ublk_drv.c
    >> @@ -682,6 +682,21 @@ static void ubq_complete_io_cmd(struct ublk_io *io, int res)
    >>
    >> #define UBLK_REQUEUE_DELAY_MS 3
    >>
    >> +static inline void __ublk_abort_rq_in_task_work(struct ublk_queue *ubq,
    >> + struct request *rq)
    >> +{
    >> + pr_devel("%s: %s q_id %d tag %d io_flags %x.\n", __func__,
    >> + (ublk_queue_can_use_recovery(ubq)) ? "requeue" : "abort",
    >> + ubq->q_id, rq->tag, ubq->ios[rq->tag].flags);
    >> + /* We cannot process this rq so just requeue it. */
    >> + if (ublk_queue_can_use_recovery(ubq)) {
    >> + blk_mq_requeue_request(rq, false);
    >> + blk_mq_delay_kick_requeue_list(rq->q, UBLK_REQUEUE_DELAY_MS);
    >
    > Here you needn't kick the requeue list since we know it can't make
    > progress. And you can do that once, before deleting the gendisk
    > or once the queue is recovered.

    No, kicking the requeue list here is necessary.

    Consider the case where USER_RECOVERY is enabled and everything goes well.
    The user sends STOP_DEV, and we have kicked the requeue list in
    ublk_stop_dev() and are about to call del_gendisk().
    However, a crash happens now. Then rqs may still be requeued
    by ublk_queue_rq() because ublk_queue_rq() sees a dying
    ubq_daemon. So del_gendisk() will hang because there are
    rqs left in the requeue list and no one kicks them.

    BTW, kicking the requeue list after requeuing rqs is really harmless,
    since we schedule quiesce_work immediately after finding a
    dying ubq_daemon. So few rqs have a chance to be re-dispatched.

    >
    >> + } else {
    >> + blk_mq_end_request(rq, BLK_STS_IOERR);
    >> + }
    >> +}
    >> +
    >> static inline void __ublk_rq_task_work(struct request *req)
    >> {
    >> struct ublk_queue *ubq = req->mq_hctx->driver_data;
    >> @@ -704,7 +719,7 @@ static inline void __ublk_rq_task_work(struct request *req)
    >> * (2) current->flags & PF_EXITING.
    >> */
    >> if (unlikely(current != ubq->ubq_daemon || current->flags & PF_EXITING)) {
    >> - blk_mq_end_request(req, BLK_STS_IOERR);
    >> + __ublk_abort_rq_in_task_work(ubq, req);
    >> mod_delayed_work(system_wq, &ub->monitor_work, 0);
    >> return;
    >> }
    >> @@ -779,6 +794,21 @@ static void ublk_rq_task_work_fn(struct callback_head *work)
    >> __ublk_rq_task_work(req);
    >> }
    >>
    >> +static inline blk_status_t __ublk_abort_rq(struct ublk_queue *ubq,
    >> + struct request *rq)
    >> +{
    >> + pr_devel("%s: %s q_id %d tag %d io_flags %x.\n", __func__,
    >> + (ublk_queue_can_use_recovery(ubq)) ? "requeue" : "abort",
    >> + ubq->q_id, rq->tag, ubq->ios[rq->tag].flags);
    >> + /* We cannot process this rq so just requeue it. */
    >> + if (ublk_queue_can_use_recovery(ubq)) {
    >> + blk_mq_requeue_request(rq, false);
    >> + blk_mq_delay_kick_requeue_list(rq->q, UBLK_REQUEUE_DELAY_MS);
    >
    > Same with above.
    >
    >
    > Thanks,
    > Ming
