On Fri 2017-05-26 14:12:28, Luis R. Rodriguez wrote:
> If we reach the limit of modprobe_limit threads running, the next
> request_module() call will fail. The original reason for adding
> a kill was to do away with possible issues in old circumstances
> which would create a recursive series of request_module() calls.
> We can do better than just being super aggressive and rejecting
> calls once we've reached the limit, by simply making pending
> callers wait until the threshold has been reduced.
>
> The only difference is that the clutch helps avoid making
> request_module() requests fatal as often. With x86_64 qemu,
> with 4 cores and 4 GiB of RAM, it takes the following run
> times to run both tests:
>
> time ./kmod.sh -t 0008
> real 0m12.364s
> user 0m0.704s
> sys 0m5.373s
>
> time ./kmod.sh -t 0009
> real 0m47.638s
> user 0m1.033s
> sys 0m5.425s
>
> Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
> ---
> kernel/kmod.c | 16 +++++++---------
> tools/testing/selftests/kmod/kmod.sh | 24 ++----------------------
> 2 files changed, 9 insertions(+), 31 deletions(-)
>
> diff --git a/kernel/kmod.c b/kernel/kmod.c
> index 3e346c700e80..46b12fed6fd0 100644
> --- a/kernel/kmod.c
> +++ b/kernel/kmod.c
> @@ -163,14 +163,11 @@ int __request_module(bool wait, const char *fmt, ...)
> return ret;
>
> if (atomic_dec_if_positive(&kmod_concurrent_max) < 0) {
> - /* We may be blaming an innocent here, but unlikely */
> - if (kmod_loop_msg < 5) {
> - printk(KERN_ERR
> - "request_module: runaway loop modprobe %s\n",
> - module_name);
> - kmod_loop_msg++;
> - }
> - return -ENOMEM;
> + pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s\n, throttling...",
> + atomic_read(&kmod_concurrent_max),
> + 50, module_name);

It is weird to pass the constant '50' via %u. Also, a #define should be
used to keep it in sync with the kmod_concurrent_max initialization.
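Something like the following (untested sketch; MAX_KMOD_CONCURRENT is just
a name I am making up here, and the stray "\n" is moved to the end of the
format string where it belongs):

	/* Define the limit once so that the initialization and the
	 * warning message can never get out of sync.
	 */
	#define MAX_KMOD_CONCURRENT 50

	static atomic_t kmod_concurrent_max = ATOMIC_INIT(MAX_KMOD_CONCURRENT);
	...
		pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s, throttling...\n",
				    atomic_read(&kmod_concurrent_max),
				    MAX_KMOD_CONCURRENT, module_name);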


> + wait_event_interruptible(kmod_wq,
> + atomic_dec_if_positive(&kmod_concurrent_max) >= 0);
> }
>
> trace_module_request(module_name, wait, _RET_IP_);
> @@ -178,6 +175,7 @@ int __request_module(bool wait, const char *fmt, ...)
> ret = call_modprobe(module_name, wait ? UMH_WAIT_PROC : UMH_WAIT_EXEC);
>
> atomic_inc(&kmod_concurrent_max);
> + wake_up_all(&kmod_wq);

Does it make sense to wake up all waiters when the resource was released
only for one of them? IMHO, a simple wake_up() should be used here.
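Roughly like this (untested sketch); note that wake_up() limits the wakeup
to a single task only when the sleeper queues as an exclusive waiter, so
the wait side would need the _exclusive variant as well:

	/* Sleeper: queue as an exclusive waiter, so that a single
	 * wake_up() releases exactly one throttled caller.
	 */
	wait_event_interruptible_exclusive(kmod_wq,
		atomic_dec_if_positive(&kmod_concurrent_max) >= 0);
	...
	/* Waker: one modprobe slot was released, wake up one waiter. */
	atomic_inc(&kmod_concurrent_max);
	wake_up(&kmod_wq);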

I am sorry for the late review. The month went by really fast.

Best Regards,
Petr
