    Subject: [PATCH 4.19 093/280] mm: handle lru_add_drain_all for UP properly
    4.19-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    [ Upstream commit 6ea183d60c469560e7b08a83c9804299e84ec9eb ]

    Since for_each_cpu(cpu, mask), added by commit 2d3854a37e8b767a
    ("cpumask: introduce new API, without changing anything"), does not
    evaluate the mask argument when NR_CPUS == 1 (CONFIG_SMP=n),
    lru_add_drain_all() unconditionally calls flush_work() and hits the
    WARN_ON() in __flush_work() added by commit 4d43d395fed12463
    ("workqueue: Try to catch flush_work() without INIT_WORK().") [1].

    Work around this issue by using a CONFIG_SMP=n specific
    lru_add_drain_all implementation. There is no real need to defer the
    work to a workqueue, because the draining happens on the local cpu
    anyway, so alias lru_add_drain_all to lru_add_drain, which does all
    the necessary work.

    [akpm@linux-foundation.org: fix various build warnings]
    [1] https://lkml.kernel.org/r/18a30387-6aa5-6123-e67c-57579ecc3f38@roeck-us.net
    Link: http://lkml.kernel.org/r/20190213124334.GH4525@dhcp22.suse.cz
    Signed-off-by: Michal Hocko <mhocko@suse.com>
    Reported-by: Guenter Roeck <linux@roeck-us.net>
    Debugged-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Cc: Tejun Heo <tj@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    mm/swap.c | 17 ++++++++++-------
    1 file changed, 10 insertions(+), 7 deletions(-)

    diff --git a/mm/swap.c b/mm/swap.c
    index 26fc9b5f1b6c..a3fc028e338e 100644
    --- a/mm/swap.c
    +++ b/mm/swap.c
    @@ -321,11 +321,6 @@ static inline void activate_page_drain(int cpu)
     {
     }
     
    -static bool need_activate_page_drain(int cpu)
    -{
    -	return false;
    -}
    -
     void activate_page(struct page *page)
     {
     	struct zone *zone = page_zone(page);
    @@ -654,13 +649,15 @@ void lru_add_drain(void)
     	put_cpu();
     }
     
    +#ifdef CONFIG_SMP
    +
    +static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
    +
     static void lru_add_drain_per_cpu(struct work_struct *dummy)
     {
     	lru_add_drain();
     }
     
    -static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
    -
     /*
      * Doesn't need any cpu hotplug locking because we do rely on per-cpu
      * kworkers being shut down before our page_alloc_cpu_dead callback is
    @@ -703,6 +700,12 @@ void lru_add_drain_all(void)
     
     	mutex_unlock(&lock);
     }
    +#else
    +void lru_add_drain_all(void)
    +{
    +	lru_add_drain();
    +}
    +#endif
     
     /**
      * release_pages - batched put_page()
    --
    2.19.1

