From: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Date: Tue, 23 Feb 2021
Subject: Re: [PATCH v4 bpf-next 2/6] bpf: prevent deadlock from recursive bpf_task_storage_[get|delete]
On Mon, Feb 22, 2021 at 11:16 PM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Feb 22, 2021, at 10:21 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> >
> > On Mon, Feb 22, 2021 at 5:23 PM Song Liu <songliubraving@fb.com> wrote:
> >>
> >> BPF helpers bpf_task_storage_[get|delete] could hold two locks:
> >> bpf_local_storage_map_bucket->lock and bpf_local_storage->lock. Calling
> >> these helpers from fentry/fexit programs on functions in bpf_*_storage.c
> >> may cause deadlock on either lock.
> >>
> >> Prevent such deadlocks with a per-cpu counter, bpf_task_storage_busy, which
> >> is similar to bpf_prog_active. We need this counter to be global, because
> >> the two locks here belong to two different objects: bpf_local_storage_map
> >> and bpf_local_storage. If we pick one of them as the owner of the counter,
> >> it is still possible to trigger deadlock on the other lock. For example,
> >> if bpf_local_storage_map owns the counter, it cannot prevent deadlock
> >> on bpf_local_storage->lock when two maps are used.
> >>
> >> Signed-off-by: Song Liu <songliubraving@fb.com>
> >> ---
> >> kernel/bpf/bpf_task_storage.c | 57 ++++++++++++++++++++++++++++++-----
> >> 1 file changed, 50 insertions(+), 7 deletions(-)
> >>
> >
> > [...]
> >
> >> @@ -109,7 +136,9 @@ static void *bpf_pid_task_storage_lookup_elem(struct bpf_map *map, void *key)
> >> goto out;
> >> }
> >>
> >> + bpf_task_storage_lock();
> >> sdata = task_storage_lookup(task, map, true);
> >> + bpf_task_storage_unlock();
> >> put_pid(pid);
> >> return sdata ? sdata->data : NULL;
> >> out:
> >> @@ -141,8 +170,10 @@ static int bpf_pid_task_storage_update_elem(struct bpf_map *map, void *key,
> >> goto out;
> >> }
> >>
> >> + bpf_task_storage_lock();
> >> sdata = bpf_local_storage_update(
> >> task, (struct bpf_local_storage_map *)map, value, map_flags);
> >
> > this should probably be container_of() instead of casting
>
> bpf_task_storage.c uses casting in multiple places. How about we fix it in a
> separate patch?
>

Sure, let's fix all the uses in a separate patch. My point is that this
makes an assumption (that the struct bpf_map map field is always the very
first member), which is neither enforced nor documented in struct
bpf_local_storage_map.
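
To illustrate the difference (a minimal sketch, assuming the current layout
where struct bpf_local_storage_map embeds its struct bpf_map in a member
named "map"; smap is just an illustrative local):

	/* The existing code relies on 'map' being the very first member,
	 * so a raw cast happens to land on the containing object:
	 */
	smap = (struct bpf_local_storage_map *)map;

	/* container_of() derives the containing object from the member's
	 * actual offset, so it stays correct even if the layout changes,
	 * and it documents the relationship at the call site:
	 */
	smap = container_of(map, struct bpf_local_storage_map, map);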

> Thanks,
> Song
>
> >
> >> + bpf_task_storage_unlock();
> >>
> >> err = PTR_ERR_OR_ZERO(sdata);
> >> out:
> >> @@ -185,7 +216,9 @@ static int bpf_pid_task_storage_delete_elem(struct bpf_map *map, void *key)
> >> goto out;
> >> }
> >>
> >> + bpf_task_storage_lock();
> >> err = task_storage_delete(task, map);
> >> + bpf_task_storage_unlock();
> >> out:
> >> put_pid(pid);
> >> return err;
> >
> > [...]
>
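
For context on the hunks quoted above: per the commit message, the new
bpf_task_storage_lock()/bpf_task_storage_unlock() helpers are backed by a
per-CPU counter, bpf_task_storage_busy, in the spirit of bpf_prog_active.
A rough sketch of that pattern (not necessarily the exact patch code; the
trylock variant used by the fentry/fexit-callable helpers is assumed from
the commit message's intent, not taken from the quoted diff):

	/* needs <linux/percpu.h>, <linux/preempt.h>, <linux/types.h> */
	static DEFINE_PER_CPU(int, bpf_task_storage_busy);

	static void bpf_task_storage_lock(void)
	{
		/* stay on this CPU so the matching unlock hits the same counter */
		migrate_disable();
		this_cpu_inc(bpf_task_storage_busy);
	}

	static void bpf_task_storage_unlock(void)
	{
		this_cpu_dec(bpf_task_storage_busy);
		migrate_enable();
	}

	/* Assumed trylock for the BPF helpers: if a program on this CPU is
	 * already inside the task-storage code (counter != 1), fail instead
	 * of recursing into the bpf_local_storage(_map) locks and deadlocking.
	 */
	static bool bpf_task_storage_trylock(void)
	{
		migrate_disable();
		if (unlikely(this_cpu_inc_return(bpf_task_storage_busy) != 1)) {
			this_cpu_dec(bpf_task_storage_busy);
			migrate_enable();
			return false;
		}
		return true;
	}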
