Subject: Re: bpf: use-after-free in array_map_alloc

Hello,

On Tue, May 24, 2016 at 10:40:54AM +0200, Vlastimil Babka wrote:
> [+CC Marco who reported the CVE, forgot that earlier]
>
> On 05/23/2016 11:35 PM, Tejun Heo wrote:
> > Hello,
> >
> > Can you please test whether this patch resolves the issue? While
> > adding support for atomic allocations, I reduced alloc_mutex covered
> > region too much.
> >
> > Thanks.
>
> Ugh, this makes the code even more head-spinning than it was.

Locking-wise, it isn't complicated. It used to be a single mutex
protecting everything. Adding support for atomic allocations required
putting the core allocation parts under a spinlock. It's messy
because the two paths are mixed in the same function. If we break the
core part out into a separate function and let the sleepable path
call into it, it should look okay, but that's for another patch.
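
A minimal sketch of the split I mean; the helper and function names
here are made up, not the actual mm/percpu.c symbols:

#include <linux/gfp.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

static DEFINE_MUTEX(pcpu_alloc_mutex);	/* sleepable path only */
static DEFINE_SPINLOCK(pcpu_lock);	/* protects the core allocator */

/* the core area search, safe to call from atomic context */
static void __percpu *pcpu_alloc_core(size_t size, size_t align)
{
	lockdep_assert_held(&pcpu_lock);
	/* ... scan chunks and claim an area; no sleeping in here ... */
	return NULL;
}

static void __percpu *pcpu_alloc_sketch(size_t size, size_t align, gfp_t gfp)
{
	bool is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;
	void __percpu *ptr;
	unsigned long flags;

	/* only the sleepable path serializes on the mutex */
	if (!is_atomic)
		mutex_lock(&pcpu_alloc_mutex);

	spin_lock_irqsave(&pcpu_lock, flags);
	ptr = pcpu_alloc_core(size, align);
	spin_unlock_irqrestore(&pcpu_lock, flags);

	if (!is_atomic)
		mutex_unlock(&pcpu_alloc_mutex);

	return ptr;
}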

Also, I think protecting a chunk's lifetime with alloc_mutex is what
makes it a bit nasty. Maybe we should add a per-chunk "extending"
completion and let pcpu_alloc_mutex protect only the populating of
chunks.
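
Roughly along these lines; the struct, field and helper names are
hypothetical, just to show the shape:

#include <linux/completion.h>
#include <linux/workqueue.h>

struct pcpu_chunk_sketch {
	struct work_struct	map_extend_work;
	struct completion	extend_done;	/* signaled when extension finishes */
	/* ... the real chunk fields ... */
};

/*
 * Sleepable allocators needing a bigger map wait on the chunk itself
 * instead of pinning the chunk's lifetime with pcpu_alloc_mutex.
 */
static void pcpu_wait_map_extend(struct pcpu_chunk_sketch *chunk)
{
	wait_for_completion(&chunk->extend_done);
}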

> > @@ -435,6 +435,8 @@ static int pcpu_extend_area_map(struct pcpu_chunk *chunk, int new_alloc)
> > size_t old_size = 0, new_size = new_alloc * sizeof(new[0]);
> > unsigned long flags;
> >
> > + lockdep_assert_held(&pcpu_alloc_mutex);
>
> I don't see where the mutex gets locked when called via
> pcpu_map_extend_workfn? (except via the new cancel_work_sync() call below?)

Ah, right.

> Also what protects chunks with scheduled work items from being removed?

cancel_work_sync(), which now obviously should be called outside
alloc_mutex.
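
I.e. chunk teardown would flush the work item first, outside the
mutex, along the lines of the sketch above:

/*
 * Must be called without pcpu_alloc_mutex held: the workfn itself
 * takes the mutex, so flushing under it would deadlock.
 */
static void pcpu_destroy_chunk_sketch(struct pcpu_chunk_sketch *chunk)
{
	cancel_work_sync(&chunk->map_extend_work);
	/* no extension work can be running now; safe to free the chunk */
}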

> > @@ -895,6 +897,9 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
> > return NULL;
> > }
> >
> > + if (!is_atomic)
> > + mutex_lock(&pcpu_alloc_mutex);
>
> BTW I noticed that
> bool is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL;
>
> this is too pessimistic IMHO. Reclaim is possible even without __GFP_FS and
> __GFP_IO. Could you just use gfpflags_allow_blocking(gfp) here?

vmalloc hardcodes GFP_KERNEL, so relaxing the check here wouldn't buy
us much.
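
For comparison, the two predicates side by side;
gfpflags_allow_blocking() is the real helper from
include/linux/gfp.h, the wrapper names are made up:

#include <linux/gfp.h>

/* current check: anything short of full GFP_KERNEL counts as atomic */
static bool pcpu_gfp_is_atomic_strict(gfp_t gfp)
{
	return (gfp & GFP_KERNEL) != GFP_KERNEL;
}

/* suggested check: atomic only if __GFP_DIRECT_RECLAIM is absent */
static bool pcpu_gfp_is_atomic_relaxed(gfp_t gfp)
{
	return !gfpflags_allow_blocking(gfp);
}

E.g. GFP_NOFS can block (it allows direct reclaim) but fails the
strict test because __GFP_FS is missing.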

Thanks.

--
tejun
