From: Alexander Shishkin <>
Subject: Re: [PATCH v2] perf/core: fix mlock accounting in perf_mmap()
Date: Thu, 23 Jan 2020 11:33:47 +0200
Song Liu <songliubraving@fb.com> writes:
> sysctl_perf_event_mlock and user->locked_vm can change value
> independently, so we can't guarantee:
Looks good; I still have some suggestions below.
>
>     user->locked_vm <= user_lock_limit
>
> When user->locked_vm is larger than user_lock_limit, we cannot simply
> update extra and user_extra as:
>
>     extra = user_locked - user_lock_limit;
>     user_extra -= extra;
>
> Otherwise, user_extra will be negative. In extreme cases, this may lead to
> negative user->locked_vm (until this perf-mmap is closed), which break
> locked_vm badly.
>
> Fix this by adjusting user_locked before calculating extra and user_extra.
The commit message is mostly narrating the code, which we can see anyway by scrolling down to the diff. What it could say instead is:
1. Problem statement: decreasing sysctl_perf_event_mlock between two consecutive mmap()s of a perf ring buffer may lead to an integer underflow in the locked memory accounting, which results in the following undesired behavior: <an example of bad behavior as opposed to expected behavior>. (A concrete sketch of the underflow follows after this list.)
2. Fix description: address this by adjusting the accounting logic to take into account the possibility that the amount of already locked memory may exceed the current limit.
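To make point 1 concrete, here is a standalone user-space model of the pre-fix arithmetic. The variable names mirror perf_mmap(), but the numbers are made up purely for illustration:

	#include <stdio.h>

	int main(void)
	{
		/* sysctl_perf_event_mlock was lowered after earlier mmap()s */
		unsigned long user_lock_limit = 100;
		/* pages this user already has locked under the old, higher limit */
		unsigned long locked_vm = 150;
		/* pages the current mmap() asks to lock */
		long user_extra = 20, extra = 0;
		unsigned long user_locked;

		user_locked = locked_vm + user_extra;	/* 170 */
		if (user_locked > user_lock_limit) {
			/* 70 pages over the limit, mostly not from this mmap() */
			extra = user_locked - user_lock_limit;
			/* 20 - 70 = -50: the underflow */
			user_extra -= extra;
		}

		printf("user_extra = %ld\n", user_extra);	/* prints -50 */
		return 0;
	}

This mmap() should be charged at most its own 20 pages, but user_extra underflows to -50; per the commit message above, in extreme cases this ends up driving user->locked_vm itself negative.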
> Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
> Signed-off-by: Song Liu <songliubraving@fb.com>
> Suggested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
> Cc: Jiri Olsa <jolsa@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> ---
>  kernel/events/core.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 2173c23c25b4..d25f2de45996 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5916,8 +5916,19 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>  	 */
>  	user_lock_limit *= num_online_cpus();
>
> -	user_locked = atomic_long_read(&user->locked_vm) + user_extra;
> +	user_locked = atomic_long_read(&user->locked_vm);
>
> +	/*
> +	 * sysctl_perf_event_mlock and user->locked_vm can change value
> +	 * independently. so we can't guarantee:
> +	 *     user->locked_vm <= user_lock_limit
"sysctl_perf_event_mlock may have changed, so that user->locked_vm > user_lock_limit".
> +	 *
> +	 * Adjust user_locked to be <= user_lock_limit so we can calcualte
> +	 * correct extra and user_extra.
This comment is also verbalizing the C code that follows. I don't think it's necessary.
> +	 */
> +	user_locked = min_t(unsigned long, user_locked, user_lock_limit);
A matter of preference, but to me an explicit "if (user_locked >= user_lock_limit)" is easier to read.
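Something like this, perhaps (untested, just to sketch the shape I mean; it should be equivalent to the min_t() above):

	user_locked = atomic_long_read(&user->locked_vm);
	if (user_locked >= user_lock_limit)
		user_locked = user_lock_limit;

	user_locked += user_extra;

That spells out the two steps: clamp what is already locked to the limit, then add the new request.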
> +
> +	user_locked += user_extra;
>  	if (user_locked > user_lock_limit) {
>  		/*
>  		 * charge locked_vm until it hits user_lock_limit;
Thanks,
--
Alex