Subject: Re: [PATCH 3/4] dma-debug: Dynamically expand the dma_debug_entry pool
From: Robin Murphy
Date: 2018-12-04
On 04/12/2018 14:29, Christoph Hellwig wrote:
>> +	for (retry_count = 0; ; retry_count++) {
>> +		spin_lock_irqsave(&free_entries_lock, flags);
>> +
>> +		if (num_free_entries > 0)
>> +			break;
>>
>> 		spin_unlock_irqrestore(&free_entries_lock, flags);
>
> Taking a spinlock just to read a single integer value doesn't really
> help anything.

If the freelist is non-empty, we break out with the lock still held in
order to actually allocate our entry; only if there are no free entries
left do we drop the lock in order to handle the failure. That much is
just the original logic shuffled around a bit (with the tweak that
testing num_free_entries seemed a justifiable simplification of the
original list_empty() check).
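
For context, the whole loop in the patch is shaped roughly like this
(an untested paraphrase rather than the exact diff; the failure path
and the return-value convention of prealloc_memory() are inferred from
the quoted condition):

	for (retry_count = 0; ; retry_count++) {
		spin_lock_irqsave(&free_entries_lock, flags);

		if (num_free_entries > 0)
			break;	/* exit with the lock still held */

		spin_unlock_irqrestore(&free_entries_lock, flags);

		/* Try to grow the pool and go round again (the quoted
		 * condition reads as though prealloc_memory() returns
		 * 0 on success) */
		if (retry_count < DMA_DEBUG_DYNAMIC_RETRIES &&
		    !prealloc_memory(DMA_DEBUG_DYNAMIC_ENTRIES))
			continue;

		/* Out of retries, or the allocation itself failed */
		global_disable = true;
		pr_err("debugging out of memory - disabling\n");
		return NULL;
	}

	/* Lock is held here; take an entry off the freelist, then unlock */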

>> +
>> +		if (retry_count < DMA_DEBUG_DYNAMIC_RETRIES &&
>> +		    !prealloc_memory(DMA_DEBUG_DYNAMIC_ENTRIES))
>
> Don't we need GFP_ATOMIC here? Also why do we need the retries?

Ah, right, we may be outside our own spinlock, but of course the whole
DMA API call which got us here might be under someone else's and/or in a
non-sleeping context - I'll fix that.

The number of retries is just to bound the loop, given its inherent
raciness: since we drop the lock to create more entries, under
pathological conditions other callers could have drained them all again
by the time we get back in to grab one. 2 retries (well, strictly it's
1 try and 1 retry) was an entirely arbitrary choice, just to
accommodate that happening very occasionally by chance.
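
Concretely, the pathological interleaving that the retry is meant to
paper over would look something like this (illustrative only; CPU B
stands for any other concurrent dma_entry_alloc() callers):

	CPU A                               CPU B
	-----                               -----
	spin_lock_irqsave()
	num_free_entries == 0
	spin_unlock_irqrestore()
	prealloc_memory()  /* pool grows */
	                                    dma_entry_alloc() x N
	                                    /* pool drained again */
	spin_lock_irqsave()
	num_free_entries == 0  ->  retry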

However, if the dynamic allocations need GFP_ATOMIC for external reasons
anyway, then I don't need the lock-juggling that invites that race in
the first place, and the whole loop disappears again. Neat!
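
i.e. v2 could collapse to something like the below (a hypothetical
sketch, not the actual respin; passing the gfp down to
prealloc_memory() is an assumed interface change, but GFP_ATOMIC is
exactly what makes allocating under the spinlock legal):

	spin_lock_irqsave(&free_entries_lock, flags);

	if (num_free_entries == 0 &&
	    prealloc_memory(DMA_DEBUG_DYNAMIC_ENTRIES, GFP_ATOMIC)) {
		/* Couldn't grow the pool: same terminal path as before */
		global_disable = true;
		spin_unlock_irqrestore(&free_entries_lock, flags);
		pr_err("debugging out of memory - disabling\n");
		return NULL;
	}

	/* Freelist now guaranteed non-empty; take an entry, then unlock */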

Robin.
