    From: Mel Gorman <mgorman@techsingularity.net>
    Subject: [PATCH 03/10] mm, page_alloc: Remove unnecessary taking of a seqlock when cpusets are disabled
    Date: 21 Sep 2015
    There is a seqcounter that protects against spurious allocation failures
    when a task is changing the allowed nodes in a cpuset. There is no need
    to check the seqcounter until a cpuset exists.
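
    For context, callers pair these two helpers around an allocation attempt
    roughly as sketched below. This is a simplified illustration of the
    reader-side pattern in the page allocator, not part of this patch;
    allocate_with_retry() and try_allocation() are placeholder names.

        /* Sketch of the reader-side retry pattern (placeholder names) */
        static struct page *allocate_with_retry(gfp_t gfp_mask, unsigned int order)
        {
                struct page *page;
                unsigned int cpuset_mems_cookie;

        retry_cpuset:
                /* Snapshot the mems_allowed sequence before the attempt */
                cpuset_mems_cookie = read_mems_allowed_begin();

                page = try_allocation(gfp_mask, order);

                /*
                 * A failure may be spurious if the allowed nodes changed
                 * concurrently; retry against the updated mems_allowed.
                 */
                if (!page && read_mems_allowed_retry(cpuset_mems_cookie))
                        goto retry_cpuset;

                return page;
        }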

    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Acked-by: Christoph Lameter <cl@linux.com>
    Acked-by: David Rientjes <rientjes@google.com>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Michal Hocko <mhocko@suse.com>
    ---
    include/linux/cpuset.h | 6 ++++++
    1 file changed, 6 insertions(+)

    diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
    index 1b357997cac5..6eb27cb480b7 100644
    --- a/include/linux/cpuset.h
    +++ b/include/linux/cpuset.h
    @@ -104,6 +104,9 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
      */
     static inline unsigned int read_mems_allowed_begin(void)
     {
    +        if (!cpusets_enabled())
    +                return 0;
    +
             return read_seqcount_begin(&current->mems_allowed_seq);
     }
     
    @@ -115,6 +118,9 @@ static inline unsigned int read_mems_allowed_begin(void)
      */
     static inline bool read_mems_allowed_retry(unsigned int seq)
     {
    +        if (!cpusets_enabled())
    +                return false;
    +
             return read_seqcount_retry(&current->mems_allowed_seq, seq);
     }
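
    As an aside, not part of this patch: cpusets_enabled() is a jump-label
    test, so on kernels with CONFIG_CPUSETS=y but no cpusets in active use
    the new check is a patched-out branch, whereas the skipped
    read_seqcount_begin()/read_seqcount_retry() calls would have read
    current->mems_allowed_seq and issued a read barrier on every allocation.
    The helper looks roughly like this in the cpuset.h of this era:

        /* Approximate definition, shown for context only */
        extern struct static_key cpusets_enabled_key;

        static inline bool cpusets_enabled(void)
        {
                /* A runtime-patched branch; false until a cpuset is in use */
                return static_key_false(&cpusets_enabled_key);
        }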

    --
    2.4.6

