Subject: Re: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done
From: Sagi Grimberg <>
Date: Thu, 29 Oct 2020 13:03:26 -0700
>>> Well, usb-storage obviously seems to do it, and the block layer
>>> does not prohibit it.
>>
>> Also loop, nvme-tcp and then I stopped looking.
>> Any objections about adding local_bh_disable() around it?
>
> To me it seems like the whole IPI plus potentially softirq dance is
> a little pointless when completing from process context.
I agree.
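
To make the quoted suggestion concrete, a minimal sketch of what wrapping a
process-context completion would look like (hypothetical: blk_mq_complete_request()
is the real block-layer entry point, the bracketing is only the idea under
discussion, not an existing helper):

/*
 * Hypothetical sketch: bracket a process-context completion so that any
 * softirq raised by the completion path (e.g. BLOCK_SOFTIRQ) runs when
 * BHs are re-enabled, instead of being left pending.
 */
local_bh_disable();
blk_mq_complete_request(rq);	/* may raise a softirq internally */
local_bh_enable();		/* pending softirqs execute here */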
> Sagi, any opinion on that from the nvme-tcp POV?
nvme-tcp should (almost) always complete from the context that matches rq->mq_ctx->cpu, as the thread that processes incoming completions (per hctx) should be affinitized to match it (unless CPUs come and go).
So for nvme-tcp I don't expect blk_mq_complete_need_ipi to return true in normal operation. That leaves the teardowns+aborts, which aren't very interesting here.
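
For reference, the check in question looks roughly like this (simplified
sketch of blk_mq_complete_need_ipi() as it was in the block layer around this
time; exact details may differ):

static inline bool blk_mq_complete_need_ipi(struct request *rq)
{
	int cpu = raw_smp_processor_id();

	if (!IS_ENABLED(CONFIG_SMP) ||
	    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
		return false;

	/* same CPU or cache domain: complete locally, no IPI */
	if (cpu == rq->mq_ctx->cpu ||
	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
	     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
		return false;

	/* otherwise IPI, but never to an offline CPU */
	return cpu_online(rq->mq_ctx->cpu);
}

So as long as the completing CPU matches (or shares a cache with)
rq->mq_ctx->cpu, no IPI is needed.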
I would note that nvme-tcp does not go to sleep after completing every I/O the way Sebastian indicated usb-storage does.
Having said that, today the network stack calls nvme_tcp_data_ready in napi context (softirq), which in turn triggers the queue thread to handle network rx (and complete the I/O). It's been measured recently that running the rx context directly in softirq saves some latency (possible because the nvme-tcp rx context is non-blocking).
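
To illustrate, the data_ready path looks roughly like this (simplified sketch
based on drivers/nvme/host/tcp.c of that era; may not match the exact tree):
the socket callback runs in napi/softirq context and only kicks the queue's
io_work on its affinitized CPU, and rx plus completion happen from that work
context.

static void nvme_tcp_data_ready(struct sock *sk)
{
	struct nvme_tcp_queue *queue;

	read_lock_bh(&sk->sk_callback_lock);
	queue = sk->sk_user_data;
	if (likely(queue && queue->rd_enabled))
		/* rx and I/O completion run from io_work on queue->io_cpu */
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
	read_unlock_bh(&sk->sk_callback_lock);
}

Running the rx handling directly from this (softirq) context instead of
bouncing to io_work is the latency saving mentioned above.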
So I'd think that patch #2 is unnecessary and just adds overhead for nvme-tcp. Do note that the napi softirq CPU mapping depends on RSS steering, which is unlikely to match rq->mq_ctx->cpu, hence if completed from napi context, nvme-tcp will probably always go to the IPI path.