Subject: Re: [PATCH RFC v1] random: do not take spinlocks in irq handler

On Fri, Feb 4, 2022 at 4:32 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:

FWIW, the biggest issue with this is the following:
> +static void mix_interrupt_randomness(struct work_struct *work)
> +{
[...]
> +	if (unlikely(crng_init == 0)) {
> +		if (crng_fast_load((u8 *)&fast_pool->pool, sizeof(fast_pool->pool)) > 0)
> +			atomic_set(&fast_pool->count, 0);
> +		else
> +			atomic_and(~FAST_POOL_MIX_INFLIGHT, &fast_pool->count);
> +		return;
> +	}
[...]
>  void add_interrupt_randomness(int irq)
> -	if (unlikely(crng_init == 0)) {
> -		if ((fast_pool->count >= 64) &&
> -		    crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
> -			fast_pool->count = 0;
> -			fast_pool->last = now;
> -		}
> -		return;

The point of crng_fast_load is to shuffle bytes into the crng as fast
as possible for very early boot usage. Deferring that to a workqueue
seems problematic. So I think at the very least _that_ part will have
to stay in the IRQ handler. That means we've still got a spinlock. But
at least it's a less problematic one than the input pool spinlock, and
perhaps we can deal with that some other way than this patch's
approach.

In other words: keep this patch's deferral approach for the calls to
mix_pool_bytes, and find a different approach for that call to
crng_fast_load.
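
To make that concrete, here's a rough sketch of that split -- not a
tested patch, just reusing crng_fast_load and the FAST_POOL_MIX_INFLIGHT
flag from the quoted hunks; the ->mix work item name and the exact flag
bookkeeping are assumptions:

void add_interrupt_randomness(int irq)
{
	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
	unsigned long now = jiffies;

	/* ... mix cycle counter / irq / return address into the pool ... */

	if (unlikely(crng_init == 0)) {
		/* Very early boot: feed the crng immediately, here in
		 * the IRQ handler, accepting crng_fast_load()'s
		 * spinlock for this brief window. */
		if (atomic_read(&fast_pool->count) >= 64 &&
		    crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
			atomic_set(&fast_pool->count, 0);
			fast_pool->last = now;
		}
		return;
	}

	if (atomic_read(&fast_pool->count) < 64 ||
	    time_before(now, fast_pool->last + HZ))
		return;

	/* Initialized: defer mix_pool_bytes(), and with it the input
	 * pool spinlock, to process context via the workqueue. */
	if (!(atomic_fetch_or(FAST_POOL_MIX_INFLIGHT, &fast_pool->count) &
	      FAST_POOL_MIX_INFLIGHT))
		queue_work(system_highpri_wq, &fast_pool->mix);
}

That way the spinlock only ever gets taken from an IRQ while crng_init
is still 0; once the crng is initialized, the handler touches nothing
but the per-cpu pool and queue_work().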

Jason
