Subject: Re: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt
From: John Garry
Date: 2020-01-02
On 25/12/2019 00:48, Ming Lei wrote:
> On Tue, Dec 24, 2019 at 11:20:25AM +0000, Marc Zyngier wrote:
>> On 2019-12-24 01:59, Ming Lei wrote:
>>> On Mon, Dec 23, 2019 at 10:47:07AM +0000, Marc Zyngier wrote:
>>>> On 2019-12-23 10:26, John Garry wrote:
>>>>>>>>> I've also managed to trigger some of them now that I have
>>>>>>>>> access to a decent box with nvme storage.
>>>>>>>>
>>>>>>>> I only have 2x NVMe SSDs when this occurs - I should not be
>>>>>>>> hitting this...
>>>>>>>>
>>>>>>>>> Out of curiosity, have you tried with the SMMU disabled? I'm
>>>>>>>>> wondering whether we hit some livelock condition on unmapping
>>>>>>>>> buffers...
>>>>>>>>
>>>>>>>> No, but I can give it a try. Doing that should lower the CPU
>>>>>>>> usage, though, so maybe it masks the issue - probably not.
>>>>>>>
>>>>>>> Lots of CPU lockups can be a performance issue if there isn't
>>>>>>> an obvious bug.
>>>>>>>
>>>>>>> I am wondering if you could explain a bit why enabling the SMMU
>>>>>>> may save a bit of CPU?
>>>>>> The other way around: mapping/unmapping IOVAs doesn't come for
>>>>>> free. I'm trying to find out whether the NVMe map/unmap patterns
>>>>>> trigger something unexpected in the SMMU driver, but that's a
>>>>>> very long shot.
>>>>>
>>>>> So I tested v5.5-rc3 with and without the SMMU enabled, and
>>>>> without the SMMU enabled I don't get the lockup.
>>>>
>>>> OK, so my hunch wasn't completely off... At least we have something
>>>> to look into.
>>>>
>>>> [...]
>>>>
>>>>> Obviously this is not conclusive, especially with such limited
>>>>> testing - 5 minute runs each. The CPU load goes up when disabling
>>>>> the SMMU, but that could be attributed to the extra throughput
>>>>> (1183K -> 1539K) loading.
>>>>>
>>>>> I do notice that since we complete the NVMe request in irq
>>>>> context, we also do the DMA unmap, i.e. talk to the SMMU, in the
>>>>> same context, which is less than ideal.
>>>>
>>>> It depends on how much overhead invalidating the TLB adds to the
>>>> equation, but we should be able to do some tracing and find out.
>>>>
>>>>> I need to finish for the Christmas break today, so can't check
>>>>> this much further ATM.
>>>>
>>>> No worries. May I suggest creating a new thread in the new year,
>>>> maybe involving Robin and Will as well?
>>>
>>> Zhang Yi has observed the CPU lockup issue once when running heavy IO
>>> on a single nvme drive, so please CC him if you have a new patch to try.
>>
>> On which architecture? John was indicating that this also happens on x86.
>
> ARM64.
>
> To be honest, I have never seen such a CPU lockup issue on x86 when
> running heavy IO on a single NVMe drive.
>
>>
>>> Then it looks like the DMA unmap cost is too big on aarch64 if the
>>> SMMU is involved.
>>
>> So far, we don't have any data suggesting that this is actually the case.
>> Also, other workloads (such as networking) do not exhibit this behaviour,
>> while being at least as unmap-heavy as NVMe is.
>
> Maybe it is because networking workloads usually complete IO in softirq
> context, instead of in hard interrupt context.
>
>>
>> If the cross-architecture aspect is confirmed, this points more in the
>> direction of an interaction between the NVMe subsystem and the DMA API
>> than an architecture-specific problem.
>>
>> Given that we have so far very little data, I'd hold off any conclusion.
>
> We can start to collect latency data of dma unmapping vs nvme_irq()
> on both x86 and arm64.
>
> I will see if I can get such a box for collecting the latency data.

To reiterate what I mentioned before about IOMMU DMA unmap on x86, a key
difference is that x86 uses the non-strict (lazy) unmap mode by default,
i.e. IOTLB invalidations are batched and deferred. ARM64 uses the general
default, which is strict mode, i.e. every unmap results in an IOTLB flush.

In my setup, if I switch to lazy unmap (set iommu.strict=0 on the
cmdline), then I see no lockup.
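
For reference, the relevant generic knob (per
Documentation/admin-guide/kernel-parameters.txt; some IOMMU drivers also
have their own equivalents) is:

    iommu.strict=0    # lazy mode: batch and defer IOTLB invalidation
    iommu.strict=1    # strict mode: invalidate the IOTLB on every unmap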

Is any special IOMMU setup being used on x86, like enabling strict
mode? I don't know...
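
For collecting the latency data, one rough starting point could be the
function_graph tracer on nvme_irq() and the IOMMU unmap path - just a
sketch, since the symbol names below are from my 5.5-rc tree and either
function may be inlined or unavailable on other configs:

    # per-call durations for nvme_irq() and iommu_dma_unmap_page()
    cd /sys/kernel/debug/tracing
    echo nvme_irq iommu_dma_unmap_page > set_graph_function
    echo function_graph > current_tracer
    cat trace_pipe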

Thanks,
John
