Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)
On Tue, Jun 25, 2019 at 02:14:46AM +0000, wenbinzeng(曾文斌) wrote:
> Hi Ming,
>
> > -----Original Message-----
> > From: Ming Lei <ming.lei@redhat.com>
> > Sent: Tuesday, June 25, 2019 9:55 AM
> > To: Wenbin Zeng <wenbin.zeng@gmail.com>
> > Cc: axboe@kernel.dk; keith.busch@intel.com; hare@suse.com; osandov@fb.com;
> > sagi@grimberg.me; bvanassche@acm.org; linux-block@vger.kernel.org;
> > linux-kernel@vger.kernel.org; wenbinzeng(曾文斌) <wenbinzeng@tencent.com>
> > Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)
> >
> > On Mon, Jun 24, 2019 at 11:24:07PM +0800, Wenbin Zeng wrote:
> > > Currently hctx->cpumask is not updated when hot-plugging new cpus.
> > > Since kblockd_mod_delayed_work_on() is frequently called with
> > > WORK_CPU_UNBOUND, the workqueue handler blk_mq_run_work_fn may run
> >
> > There are only two cases in which WORK_CPU_UNBOUND is applied:
> >
> > 1) single hw queue
> >
> > 2) multiple hw queue, and all CPUs in this hctx become offline
> >
> > For 1), all CPUs can be found in hctx->cpumask.
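
For reference, both cases come from blk_mq_hctx_next_cpu() in
block/blk-mq.c. A simplified sketch, not the exact code (the real
function also batches CPU selection and retries once before falling
back to unbound):

	static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
	{
		int next_cpu;

		/* case 1: single hw queue -> always schedule unbound */
		if (hctx->queue->nr_hw_queues == 1)
			return WORK_CPU_UNBOUND;

		/* otherwise pick the next online CPU from hctx->cpumask */
		next_cpu = cpumask_next_and(hctx->next_cpu, hctx->cpumask,
					    cpu_online_mask);

		/* case 2: no online CPU left in this hctx's mask */
		if (next_cpu >= nr_cpu_ids)
			return WORK_CPU_UNBOUND;

		hctx->next_cpu = next_cpu;
		return next_cpu;
	}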
> >
> > > on the newly-plugged cpus; consequently, __blk_mq_run_hw_queue()
> > > reports excessive "run queue from wrong CPU" messages because
> > > cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) returns false.
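
That warning comes from a check at the top of __blk_mq_run_hw_queue(),
roughly this (paraphrased from block/blk-mq.c of this era):

	if (!cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) &&
	    cpu_online(hctx->next_cpu)) {
		printk(KERN_WARNING "run queue from wrong CPU %d, hctx %s\n",
		       raw_smp_processor_id(),
		       cpumask_empty(hctx->cpumask) ? "inactive" : "active");
		dump_stack();
	}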
> >
> > The message means a CPU hotplug race was triggered.
> >
> > Yeah, there is a big problem in blk_mq_hctx_notify_dead(): it is called
> > after one CPU is dead, but it still runs this hw queue to dispatch
> > requests, even though all CPUs in this hctx might have become offline.
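
The handler looks roughly like this (abbreviated from block/blk-mq.c;
note the final blk_mq_run_hw_queue() call, made after the CPU is
already dead):

	static int blk_mq_hctx_notify_dead(unsigned int cpu,
					   struct hlist_node *node)
	{
		struct blk_mq_hw_ctx *hctx;
		struct blk_mq_ctx *ctx;
		LIST_HEAD(tmp);

		hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);
		ctx = __blk_mq_get_ctx(hctx->queue, cpu);

		/* splice the dead CPU's pending requests over to the hctx */
		spin_lock(&ctx->lock);
		if (!list_empty(&ctx->rq_list)) {
			list_splice_init(&ctx->rq_list, &tmp);
			blk_mq_hctx_clear_pending(hctx, ctx);
		}
		spin_unlock(&ctx->lock);

		if (list_empty(&tmp))
			return 0;

		spin_lock(&hctx->lock);
		list_splice_tail_init(&tmp, &hctx->dispatch);
		spin_unlock(&hctx->lock);

		/* every CPU in hctx->cpumask may be offline at this point */
		blk_mq_run_hw_queue(hctx, true);
		return 0;
	}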
> >
> > We had some discussion on this issue before:
> >
> > https://lore.kernel.org/linux-block/CACVXFVN729SgFQGUgmu1iN7P6Mv5+puE78STz8hj9J5bS828Ng@mail.gmail.com/
> >
>
> There is another scenario. You can reproduce it by hot-plugging cpus into kvm guests via the qemu monitor (I believe virsh setvcpus --live can do the same thing), for example:
> (qemu) cpu-add 1
> (qemu) cpu-add 2
> (qemu) cpu-add 3
>
> In such a scenario, cpus 1, 2 and 3 are not visible at boot, and hctx->cpumask doesn't get synced when these cpus are added.

That is CPU cold-plug, which we are supposed to support.

The newly added CPUs should be visible to the hctx, since we spread
queues among all possible CPUs; please see blk_mq_map_queues() and
irq_build_affinity_masks(). This works like static allocation of CPU
resources.
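
A simplified sketch of why hot-added CPUs are already covered (the
real blk_mq_map_queues() additionally tries to keep hyperthread
siblings on the same queue; the modulo mapping below is the basic
idea, not the exact code):

	int blk_mq_map_queues(struct blk_mq_queue_map *qmap)
	{
		unsigned int *map = qmap->mq_map;
		unsigned int nr_queues = qmap->nr_queues;
		unsigned int cpu;

		/* iterate over possible CPUs, not merely online ones,
		 * so a CPU hot-added later already has a queue here */
		for_each_possible_cpu(cpu)
			map[cpu] = qmap->queue_offset + (cpu % nr_queues);

		return 0;
	}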

Otherwise, you might be using an old kernel, or there is a bug somewhere.

>
> > >
> > > This patch added a cpu-hotplug handler into blk-mq, updating
> > > hctx->cpumask at cpu-hotplug.
> >
> > This way isn't correct; hctx->cpumask should be kept in sync with
> > the queue mapping.
>
> Please advise what I should do to deal with the above situation. Thanks a lot.

As I shared in my last email, there is one approach that was discussed,
which seems doable.

Thanks,
Ming
