Subject: Re: [PATCH v4 bpf-next 7/9] libbpf: add libbpf support to batch ops


On 1/14/20 11:13 AM, Andrii Nakryiko wrote:
> On Tue, Jan 14, 2020 at 10:54 AM Brian Vazquez <brianvv@google.com> wrote:
>>
>> On Tue, Jan 14, 2020 at 10:36 AM Andrii Nakryiko
>> <andrii.nakryiko@gmail.com> wrote:
>>>
>>> On Tue, Jan 14, 2020 at 8:46 AM Brian Vazquez <brianvv@google.com> wrote:
>>>>
>>>> From: Yonghong Song <yhs@fb.com>
>>>>
>>>> Added four libbpf API functions to support map batch operations:
>>>> . int bpf_map_delete_batch( ... )
>>>> . int bpf_map_lookup_batch( ... )
>>>> . int bpf_map_lookup_and_delete_batch( ... )
>>>> . int bpf_map_update_batch( ... )
>>>>
>>>> Signed-off-by: Yonghong Song <yhs@fb.com>
>>>> ---
>>>> tools/lib/bpf/bpf.c | 60 ++++++++++++++++++++++++++++++++++++++++
>>>> tools/lib/bpf/bpf.h | 22 +++++++++++++++
>>>> tools/lib/bpf/libbpf.map | 4 +++
>>>> 3 files changed, 86 insertions(+)
>>>>
>>>> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
>>>> index 500afe478e94a..12ce8d275f7dc 100644
>>>> --- a/tools/lib/bpf/bpf.c
>>>> +++ b/tools/lib/bpf/bpf.c
>>>> @@ -452,6 +452,66 @@ int bpf_map_freeze(int fd)
>>>> return sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr));
>>>> }
>>>>
>>>> +static int bpf_map_batch_common(int cmd, int fd, void *in_batch,
>>>> + void *out_batch, void *keys, void *values,
>>>> + __u32 *count,
>>>> + const struct bpf_map_batch_opts *opts)
>>>> +{
>>>> + union bpf_attr attr = {};
>>>> + int ret;
>>>> +
>>>> + if (!OPTS_VALID(opts, bpf_map_batch_opts))
>>>> + return -EINVAL;
>>>> +
>>>> + memset(&attr, 0, sizeof(attr));
>>>> + attr.batch.map_fd = fd;
>>>> + attr.batch.in_batch = ptr_to_u64(in_batch);
>>>> + attr.batch.out_batch = ptr_to_u64(out_batch);
>>>> + attr.batch.keys = ptr_to_u64(keys);
>>>> + attr.batch.values = ptr_to_u64(values);
>>>> + if (count)
>>>> + attr.batch.count = *count;
>>>> + attr.batch.elem_flags = OPTS_GET(opts, elem_flags, 0);
>>>> + attr.batch.flags = OPTS_GET(opts, flags, 0);
>>>> +
>>>> + ret = sys_bpf(cmd, &attr, sizeof(attr));
>>>> + if (count)
>>>> + *count = attr.batch.count;
>>>
>>> What if the syscall failed, do you still want to assign *count then?
>>
>> Hi Andrii, thanks for taking a look.
>>
>> attr.batch.count should report the number of entries correctly
>> processed before hitting an error. For example, if you provided a
>> buffer for 3 entries and the map only has 1, ret is going to be
>> -ENOENT, meaning that you traversed the whole map and you still want
>> to assign *count.
>
> ah, ok, tricky semantics :) if the syscall failed before the kernel
> got to updating count, I'm guessing it is guaranteed to preserve the
> old value?
>
>>
>> That being said, the condition 'if (count)' is wrong and I think it
>> should be removed.
>
> So count is mandatory, right? In that case both `if (count)` checks are wrong.

A little bit of history here. Some early iterations had operations
like:
   delete the whole hash table
   delete the hash table starting from a given key to the end
   etc.
In such cases, the user may not pass 'count' to the kernel.

Now we do not support such scenarios, and all the cases supported in
this patch set require 'count', so yes, we can make 'count' mandatory.

>
>>
>>>
>>>> +
>>>> + return ret;
>>>> +}
>>>> +
>>>
>>> [...]
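
As an aside, here is a rough caller-side sketch of the *count semantics
discussed above (a buffer sized for 3 entries against a map holding only
1). The exact bpf_map_lookup_batch() prototype lives in the bpf.h hunk
trimmed above, so treat the details here (hash map with __u32 keys and
__u64 values, __u32 cursor, NULL opts) as assumptions for illustration
rather than as part of the patch:

#include <errno.h>
#include <bpf/bpf.h>

static void dump_small_map(int map_fd)
{
	__u32 keys[3];		/* assumes a hash map with __u32 keys */
	__u64 values[3];	/* ... and __u64 values */
	__u32 out_batch;	/* opaque cursor handed back by the kernel */
	__u32 count = 3;	/* in: room for 3 entries, out: entries copied */
	int err;

	/* NULL in_batch: start iterating from the beginning of the map */
	err = bpf_map_lookup_batch(map_fd, NULL, &out_batch,
				   keys, values, &count, NULL);

	/* With only one element in the map, the call reports ENOENT
	 * (the whole map was traversed), yet count == 1: the kernel still
	 * says how many entries were copied before the end was reached,
	 * which is why *count has to be written back even when ret != 0.
	 */
	if (err && errno != ENOENT)
		return;		/* real failure */

	/* keys[0..count-1] / values[0..count-1] are valid here */
}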