Subject: Re: [PATCH] percpu_ida: Use _irqsave() instead of local_irq_save() + spin_lock
On Sat, May 05, 2018 at 08:52:02AM -0700, Matthew Wilcox wrote:
> init and destroy seem to map to sbitmap_queue_init_node and
> sbitmap_queue_free. percpu_ida_free maps to sbitmap_queue_clear.

Hmm.

void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
                         unsigned int cpu)
{
        sbitmap_clear_bit_unlock(&sbq->sb, nr);
        sbq_wake_up(sbq);
        if (likely(!sbq->round_robin && nr < sbq->sb.depth))
                *per_cpu_ptr(sbq->alloc_hint, cpu) = nr;
}
EXPORT_SYMBOL_GPL(sbitmap_queue_clear);

If we free a tag on a CPU other than the one it was allocated on, that's
going to guarantee a cacheline ping-pong. Is the alloc_hint really that
valuable? I'd be tempted to maintain the alloc_hint (if it's at all
valuable) as being just a hint for which word to look at first, and only
update it on allocation, rather than updating it on free. Then we can
drop the 'cpu' argument to sbitmap_queue_clear(), which would help this
conversion because the percpu_ida users don't know what CPU their tag
was allocated on.
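
Concretely, something like this untested sketch is what I have in mind --
the free path stops touching alloc_hint entirely (the allocation path
would keep updating the hint on its own), so the 'cpu' argument goes away:

void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr)
{
        /* Just clear the bit and wake any waiters; no cross-CPU store
         * of the alloc hint on the free path. */
        sbitmap_clear_bit_unlock(&sbq->sb, nr);
        sbq_wake_up(sbq);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_clear);

Callers would then reduce to sbitmap_queue_clear(sbq, tag), which is
exactly the shape the percpu_ida conversion needs.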
