Subject: RE: [PATCH v2] random: use immediate per-cpu timer rather than workqueue for mixing fast pool
> From: Jason A. Donenfeld
> Sent: 26 September 2022 23:05
>
> Previously, the fast pool was dumped into the main pool periodically in
> the fast pool's hard IRQ handler. This worked fine and there weren't
> problems with it, until RT came around. Since RT converts spinlocks into
> sleeping locks, problems cropped up. Rather than switching to raw
> spinlocks, the RT developers preferred we make the transformation from
> originally doing:
>
> do_some_stuff()
> spin_lock()
> do_some_other_stuff()
> spin_unlock()
>
> to doing:
>
> do_some_stuff()
> queue_work_on(some_other_stuff_worker)
>
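
For concreteness, a minimal sketch of that before/after transformation might look like the following; the lock, worker, and function names are illustrative placeholders, not the actual drivers/char/random.c code:

#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

static DEFINE_SPINLOCK(pool_lock);

static void mix_pool_work(struct work_struct *work);
static DECLARE_WORK(mix_pool_worker, mix_pool_work);

/* Before: everything, including the locked section, runs in the
 * hard-IRQ handler.  On RT the spinlock becomes a sleeping lock,
 * which must not be taken from hard-IRQ context. */
static void handle_irq_before(void)
{
	/* do_some_stuff() */
	spin_lock(&pool_lock);
	/* do_some_other_stuff() */
	spin_unlock(&pool_lock);
}

/* After: the hard-IRQ handler only queues work on the local CPU;
 * the locked section runs later in a kworker, in process context. */
static void handle_irq_after(void)
{
	/* do_some_stuff() */
	queue_work_on(raw_smp_processor_id(), system_highpri_wq,
		      &mix_pool_worker);
}

static void mix_pool_work(struct work_struct *work)
{
	spin_lock(&pool_lock);
	/* do_some_other_stuff() */
	spin_unlock(&pool_lock);
}
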
> This is an ordinary pattern done all over the kernel. However, Sherry
> noticed a 10% performance regression in qperf TCP over a 40 Gbps
> InfiniBand card. Quoting her message:
>
> > MT27500 Family [ConnectX-3] cards:
> > Infiniband device 'mlx4_0' port 1 status:
> > default gid: fe80:0000:0000:0000:0010:e000:0178:9eb1
> > base lid: 0x6
> > sm lid: 0x1
> > state: 4: ACTIVE
> > phys state: 5: LinkUp
> > rate: 40 Gb/sec (4X QDR)
> > link_layer: InfiniBand
> >
> > Cards are configured with IP addresses on a private subnet for IPoIB
> > performance testing.
> > The regression identified in this bug is in TCP latency in this stack, as reported
> > by the qperf tcp_lat metric:
> >
> > We have one system listening as a qperf server:
> > [root@yourQperfServer ~]# qperf
> >
> > Have the other system connect to the qperf server as a client (in this
> > case, it's an X7 server with a Mellanox card):
> > [root@yourQperfClient ~]# numactl -m0 -N0 qperf 20.20.20.101 -v -uu -ub --time 60 --wait_server 20 -oo msg_size:4K:1024K:*2 tcp_lat
>
> Rather than incur the scheduling latency from queue_work_on, we can
> instead switch to running on the next timer tick, on the same core,
> deferrably so. This also batches things a bit more -- once per jiffy --
> which is probably okay now that mix_interrupt_randomness() can credit
> multiple bits at once. It still puts a bit of pressure on fast_mix(),
> but hopefully that's acceptable.
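
For concreteness, a minimal sketch of the per-cpu deferrable-timer approach described above might look like the following; the struct, function names, and the event threshold are illustrative placeholders rather than the actual patch:

#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/timer.h>

struct fast_pool_sketch {
	struct timer_list mix;		/* deferrable per-cpu timer */
	unsigned long count;		/* samples accumulated so far */
};

static DEFINE_PER_CPU(struct fast_pool_sketch, fast_pool_sketch);

/* Timer callback: runs on the next tick, on the CPU that armed it.
 * This is where the input pool lock would be taken and several bits
 * credited in one batch (cf. mix_interrupt_randomness()). */
static void mix_pool_on_tick(struct timer_list *t)
{
	struct fast_pool_sketch *fp = from_timer(fp, t, mix);

	fp->count = 0;
	/* ... dump the per-cpu fast pool into the input pool here ... */
}

/* Hard-IRQ fast path: fast_mix() the new sample, then arm the timer
 * for the local CPU instead of calling queue_work_on(). */
static void fast_pool_add_sample(void)
{
	struct fast_pool_sketch *fp = this_cpu_ptr(&fast_pool_sketch);

	/* ... fast_mix() the cycle counter / IRQ data into fp ... */

	if (++fp->count >= 64 && !timer_pending(&fp->mix)) {
		fp->mix.expires = jiffies;	/* i.e. the next tick */
		add_timer_on(&fp->mix, raw_smp_processor_id());
	}
}

/* One-time setup: mark each per-cpu timer deferrable so an idle,
 * tickless CPU is not woken up just to run the mixing. */
static int __init fast_pool_sketch_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		timer_setup(&per_cpu(fast_pool_sketch, cpu).mix,
			    mix_pool_on_tick, TIMER_DEFERRABLE);
	return 0;
}
core_initcall(fast_pool_sketch_init);

The TIMER_DEFERRABLE flag is also what makes the NOHZ question below relevant: a deferrable timer does not force an idle, tickless CPU out of its sleep state, so the mixing only runs once something else brings that CPU back out of idle.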

I thought NOHZ systems didn't take a timer interrupt every 'jiffy'.
If that is true, what actually happens?

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)