    From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    Date: 2023-12-12
    Subject: Re: [PATCH v3 05/35] sched: add cpumask_find_and_set() and use it in __mm_cid_get()

    On 2023-12-11 21:27, Yury Norov wrote:
    > __mm_cid_get() uses the __mm_cid_try_get() helper to atomically acquire a
    > bit in the mm cid mask. Now that we have an atomic find_and_set_bit(), we
    > can easily extend it to cpumasks and use it in the scheduler code.
    >
    > cpumask_find_and_set() treats the cid mask as a volatile region of memory,
    > which it actually is in this case. So if it is changed while the search is
    > in progress, KCSAN won't fire a warning on it.
    >
    > CC: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    > CC: Peter Zijlstra <peterz@infradead.org>
    > Signed-off-by: Yury Norov <yury.norov@gmail.com>
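
    As a side note for readers following the series: the benefit is that the
    claim becomes part of the search itself, so the "somebody set the bit
    between the scan and the claim" failure removed below goes away. A rough,
    standalone sketch of the two patterns (plain C with GCC atomic builtins,
    not the kernel helpers; the one-word bitmap and names are purely
    illustrative):

    /*
     * Illustrative only -- not the kernel implementation.  Two ways to
     * claim a free bit in a bitmap shared with concurrent writers.
     */
    #include <stdio.h>

    #define NBITS 8UL

    /* Old pattern: find a clear bit, then try to claim it in a second step. */
    static long claim_two_step(unsigned long *mask)
    {
        for (unsigned long bit = 0; bit < NBITS; bit++) {
            if (*mask & (1UL << bit))
                continue;
            /* A concurrent setter can win between the check and the claim... */
            if (__atomic_fetch_or(mask, 1UL << bit, __ATOMIC_RELAXED) & (1UL << bit))
                return -1;  /* ...in which case the caller sees a failure. */
            return bit;
        }
        return -1;
    }

    /* New pattern: the claim is the search step, so a lost race just moves on. */
    static long claim_find_and_set(unsigned long *mask)
    {
        for (unsigned long bit = 0; bit < NBITS; bit++) {
            if (!(__atomic_fetch_or(mask, 1UL << bit, __ATOMIC_RELAXED) & (1UL << bit)))
                return bit; /* bit was clear and is now atomically ours */
        }
        return NBITS;       /* "nothing found", mirroring >= nr_cpu_ids below */
    }

    int main(void)
    {
        unsigned long mask = 0x07;  /* bits 0-2 already taken */

        printf("two-step claimed bit %ld\n", claim_two_step(&mask));
        printf("find-and-set claimed bit %ld\n", claim_find_and_set(&mask));
        return 0;
    }

    (The kernel helper is more efficient than this bit-at-a-time loop, of
    course; the point is only that the claim is a single step.)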

    Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

    > ---
    >  include/linux/cpumask.h | 12 ++++++++++++
    >  kernel/sched/sched.h    | 14 +++++---------
    >  2 files changed, 17 insertions(+), 9 deletions(-)
    >
    > diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
    > index cfb545841a2c..c2acced8be4e 100644
    > --- a/include/linux/cpumask.h
    > +++ b/include/linux/cpumask.h
    > @@ -271,6 +271,18 @@ unsigned int cpumask_next_and(int n, const struct cpumask *src1p,
    >  		small_cpumask_bits, n + 1);
    >  }
    >  
    > +/**
    > + * cpumask_find_and_set - find the first unset cpu in a cpumask and
    > + * set it atomically
    > + * @srcp: the cpumask pointer
    > + *
    > + * Return: >= nr_cpu_ids if nothing is found.
    > + */
    > +static inline unsigned int cpumask_find_and_set(volatile struct cpumask *srcp)
    > +{
    > +	return find_and_set_bit(cpumask_bits(srcp), small_cpumask_bits);
    > +}
    > +
    >  /**
    >   * for_each_cpu - iterate over every cpu in a mask
    >   * @cpu: the (optionally unsigned) integer iterator
    > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
    > index 2e5a95486a42..2ce9112de89b 100644
    > --- a/kernel/sched/sched.h
    > +++ b/kernel/sched/sched.h
    > @@ -3347,23 +3347,19 @@ static inline void mm_cid_put(struct mm_struct *mm)
    >  
    >  static inline int __mm_cid_try_get(struct mm_struct *mm)
    >  {
    > -	struct cpumask *cpumask;
    > -	int cid;
    > +	struct cpumask *cpumask = mm_cidmask(mm);
    > +	int cid = nr_cpu_ids;
    >  
    > -	cpumask = mm_cidmask(mm);
    >  	/*
    >  	 * Retry finding first zero bit if the mask is temporarily
    >  	 * filled. This only happens during concurrent remote-clear
    >  	 * which owns a cid without holding a rq lock.
    >  	 */
    > -	for (;;) {
    > -		cid = cpumask_first_zero(cpumask);
    > -		if (cid < nr_cpu_ids)
    > -			break;
    > +	while (cid >= nr_cpu_ids) {
    > +		cid = cpumask_find_and_set(cpumask);
    >  		cpu_relax();
    >  	}
    > -	if (cpumask_test_and_set_cpu(cid, cpumask))
    > -		return -1;
    > +
    >  	return cid;
    >  }
    >  
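
    For anyone wiring the new helper up elsewhere: a hypothetical caller,
    assuming only the return convention documented in the kerneldoc above
    (a value >= nr_cpu_ids means no free bit). The function name and error
    value are made up for illustration; the claimed slot would later be
    released with the existing cpumask_clear_cpu():

        static int example_claim_slot(struct cpumask *mask)
        {
            unsigned int cpu = cpumask_find_and_set(mask);

            return cpu < nr_cpu_ids ? (int)cpu : -1;
        }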

    --
    Mathieu Desnoyers
    EfficiOS Inc.
    https://www.efficios.com
