 
    From: Luis R. Rodriguez <mcgrof@kernel.org>
    Subject: Re: [PATCH 5/6] kmod: preempt on kmod_umh_threads_get()
    On Wed, May 24, 2017 at 05:45:37PM -0700, Dmitry Torokhov wrote:
    > On Thu, May 25, 2017 at 02:14:52AM +0200, Luis R. Rodriguez wrote:
    > > On Fri, May 19, 2017 at 03:27:12PM -0700, Dmitry Torokhov wrote:
    > > > On Thu, May 18, 2017 at 08:24:43PM -0700, Luis R. Rodriguez wrote:
    > > > > In theory it is possible that multiple concurrent threads will call
    > > > > kmod_umh_threads_get() and as such atomic_inc(&kmod_concurrent) at
    > > > > the same time, opening a small window during which we've bumped
    > > > > kmod_concurrent but have not really enabled work. By disabling
    > > > > preemption we mitigate this a bit.
    > > > >
    > > > > Disabling preemption is not needed for kmod_umh_threads_put().
    > > > >
    > > > > Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
    > > > > ---
    > > > > kernel/kmod.c | 24 ++++++++++++++++++++++--
    > > > > 1 file changed, 22 insertions(+), 2 deletions(-)
    > > > >
    > > > > diff --git a/kernel/kmod.c b/kernel/kmod.c
    > > > > index 563600fc9bb1..7ea11dbc7564 100644
    > > > > --- a/kernel/kmod.c
    > > > > +++ b/kernel/kmod.c
    > > > > @@ -113,15 +113,35 @@ static int call_modprobe(char *module_name, int wait)
    > > > >
    > > > > static int kmod_umh_threads_get(void)
    > > > > {
    > > > > + int ret = 0;
    > > > > +
    > > > > + /*
    > > > > + * Disabling preemption makes sure that we are not rescheduled here
    > > > > + *
    > > > > + * Preemption also helps ensure kmod_concurrent is not left bumped
    > > > > + * by mistake for too long: in theory two concurrent threads could
    > > > > + * race on atomic_inc() before we atomic_read() -- we know that's
    > > > > + * possible but we don't care, this is not used for object accounting
    > > > > + * and is just a subjective threshold. The alternative is a lock.
    > > > > + */
    > > > > + preempt_disable();
    > > > > atomic_inc(&kmod_concurrent);
    > > > > if (atomic_read(&kmod_concurrent) <= max_modprobes)
    > > >
    > > > That is a very "fancy" way to basically say:
    > > >
    > > > if (atomic_inc_return(&kmod_concurrent) <= max_modprobes)
    > >
    > > Do you mean to combine the atomic_inc() and atomic_read() in one as you noted
    > > (as that is not a change in this patch), *or* that using a memory barrier here
    > > with atomic_inc_return() should suffice to address the same and avoid an
    > > explicit preemption enable / disable?
    >
    > I am saying that atomic_inc_return() will avoid the situation where you
    > have more than one thread incrementing the counter and believing that
    > they are [not] allowed to start modprobe.
    >
    > I have no idea why you think preempt_disable() would help here. It only
    > ensures that the current thread will not be preempted between the point
    > where you update the counter and where you check the result. It does not
    > stop interrupts nor does it affect other threads that might be updating
    > the same counter.

    The preemption disabling was inspired by __module_get() and try_module_get();
    was that rather silly?

    Luis
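
    For reference, a minimal sketch of the single-step pattern Dmitry is
    suggesting, with the increment and the limit check folded into one
    atomic_inc_return() so no thread can observe a bumped-but-unchecked
    counter. The kmod_concurrent and max_modprobes names follow the quoted
    diff; the limit value and the -EBUSY error path below are illustrative,
    not the actual kernel/kmod.c code.

    #include <linux/atomic.h>
    #include <linux/errno.h>

    static atomic_t kmod_concurrent = ATOMIC_INIT(0);
    static int max_modprobes = 50;	/* illustrative limit */

    static int kmod_umh_threads_get(void)
    {
    	/* Increment and read back in a single atomic operation. */
    	if (atomic_inc_return(&kmod_concurrent) <= max_modprobes)
    		return 0;

    	/* Over the limit: undo our increment and report it. */
    	atomic_dec(&kmod_concurrent);
    	return -EBUSY;
    }

    static void kmod_umh_threads_put(void)
    {
    	atomic_dec(&kmod_concurrent);
    }

    No preempt_disable()/preempt_enable() pair appears here: the window Dmitry
    points out sits between two separate atomic operations issued by different
    threads, and disabling preemption on one CPU does not close it.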
