Subject: Re: 10% regression in qperf tcp latency after introducing commit "4a61bf7f9b18 random: defer fast pool mixing to worker"
Hi Tejun,

On Wed, Sep 21, 2022 at 01:54:43PM -1000, Tejun Heo wrote:
> Hello,
>
> On Thu, Sep 22, 2022 at 12:32:49AM +0200, Jason A. Donenfeld wrote:
> > What are our options? Investigate queue_work_on() bottlenecks? Move back
> > to the original pattern, but use raw spinlocks? Something else?
>
> I doubt it's queue_work_on() itself: even when called at a very high
> frequency, the duplicate calls would just fail to claim the PENDING bit
> and return. But at that frequency it'd also be waking up a kthread over
> and over again, which can get pretty expensive. Maybe that ends up
> competing with ksoftirqd, which is handling net rx or something?

Huh, yeah, interesting theory. Or, the one time that it _does_ win the
test_and_set_bit check, the extra overhead of queueing the work and waking
the kthread is enough to screw up the latency? Both theories sound at least
plausible.

> So, yeah, I'd try something that doesn't always involve scheduling and a
> context switch, whether that's a softirq, a tasklet, or irq_work.

Alright, I'll do that. I posted a diff for Sherry to try, and I'll make
that into a real patch and wait for her test.
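
For illustration, the irq_work variant of that idea looks roughly like
this; it's a sketch of the mechanism only, not the actual diff posted for
Sherry, and the names are made up:

#include <linux/irq_work.h>
#include <linux/percpu.h>

struct fast_pool_irqwork_sketch {
	unsigned long pool[4];
	struct irq_work mix;
};

static DEFINE_PER_CPU(struct fast_pool_irqwork_sketch, fp_iw);

/* Runs from a self-IPI shortly after being queued; no kthread wakeup. */
static void mix_irq_work(struct irq_work *work)
{
	/* ... deferred mixing, still not allowed to sleep ... */
}

/* Called from hard IRQ context. */
static void irq_hot_path(void)
{
	struct fast_pool_irqwork_sketch *fp = this_cpu_ptr(&fp_iw);

	if (unlikely(!fp->mix.func))
		init_irq_work(&fp->mix, mix_irq_work);
	/* No-op if already pending, so the hot path stays cheap. */
	irq_work_queue(&fp->mix);
}

Caveat: if I remember right, on PREEMPT_RT plain irq_work is itself
deferred to a per-CPU kthread unless it's flagged as hard-IRQ work, so it
doesn't automatically avoid the wakeup there.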

> I'm probably mistaken, but I thought the RT kernel pushes irq handling to
> threads so that these things can be handled sanely. Is this some special
> case?

It does, mostly. But there's still a hard IRQ handler somewhere, because
IRQs gotta IRQ, and the RNG benefits from taking a timestamp at exactly the
moment that happens. So here we are.
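
Concretely, the only thing the RNG really needs from the hard IRQ path is
the timestamp capture, something like this simplified sketch (mixing body
elided, function name made up):

#include <linux/timex.h>	/* random_get_entropy() */
#include <linux/jiffies.h>

/*
 * What add_interrupt_randomness() fundamentally needs from the hard IRQ
 * path: a cycle-counter sample taken the instant the interrupt fires,
 * before any threaded handler gets to run.
 */
static void interrupt_entropy_sketch(int irq)
{
	unsigned long cycles = random_get_entropy();
	unsigned long now = jiffies;

	/*
	 * Mix cycles, now and irq into the per-CPU fast pool; the expensive
	 * part is what gets deferred, not this capture.
	 */
	(void)cycles; (void)now; (void)irq;
}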

Jason
