    Date: 3 Apr 2024
    From: Mark Rutland <mark.rutland@arm.com>
    Subject: Re: [PATCH v2 09/10] perf/qcom_l2: Avoid placing cpumask var on stack
    On Wed, Apr 03, 2024 at 08:51:08PM +0800, Dawei Li wrote:
    > For a CONFIG_CPUMASK_OFFSTACK=y kernel, explicit allocation of a cpumask
    > variable on the stack is not recommended, since it can cause a potential
    > stack overflow.
    >
    > Instead, kernel code should always use the *cpumask_var API(s) to allocate
    > cpumask variables in a config-neutral way, leaving the allocation strategy
    > to CONFIG_CPUMASK_OFFSTACK.
    >
    > But dynamic allocation in cpuhp's teardown callback is somewhat
    > problematic, since the allocation can fail (unlikely, but still possible):
    > - If -ENOMEM is returned to the caller, the kernel crashes for non-bringup
    > teardown;
    > - If the callback pretends nothing happened and returns 0 to the caller, it
    > may leave the system in an inconsistent/compromised state;
    >
    > Use the newly-introduced cpumask_any_and_but() to address all of the issues
    > above. It eliminates the temporary cpumask variable in a generic way, no
    > matter how the cpumask variable is allocated.
    >
    > Suggested-by: Mark Rutland <mark.rutland@arm.com>
    > Signed-off-by: Dawei Li <dawei.li@shingroup.cn>
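
    (As an aside, a minimal illustrative sketch of the contrast described above,
    not taken from the patch itself: it assumes the usual alloc_cpumask_var() /
    free_cpumask_var() API and a `cluster` pointer like the driver's, and shows
    why the single cpumask_any_and_but() call needs no temporary mask.)

        /*
         * Allocation-based pattern the commit message argues against: with
         * CONFIG_CPUMASK_OFFSTACK=y a temporary mask must be allocated
         * dynamically, and there is no good way to handle a failure in a
         * teardown callback.
         */
        cpumask_var_t tmp;
        unsigned int target;

        if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
                return -ENOMEM;         /* crashes non-bringup teardown */

        cpumask_and(tmp, &cluster->cluster_cpus, cpu_online_mask);
        target = cpumask_any_but(tmp, cpu);
        free_cpumask_var(tmp);

        /* The patch's approach: same result, no temporary mask at all. */
        target = cpumask_any_and_but(&cluster->cluster_cpus, cpu_online_mask, cpu);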

    The logic looks good to me, but I'd like the commit message updated in line
    with my comment on patch 2.

    With that commit message:

    Reviewed-by: Mark Rutland <mark.rutland@arm.com>

    Mark.

    > ---
    > drivers/perf/qcom_l2_pmu.c | 8 +++-----
    > 1 file changed, 3 insertions(+), 5 deletions(-)
    >
    > diff --git a/drivers/perf/qcom_l2_pmu.c b/drivers/perf/qcom_l2_pmu.c
    > index 148df5ae8ef8..b5a44dc1dc3a 100644
    > --- a/drivers/perf/qcom_l2_pmu.c
    > +++ b/drivers/perf/qcom_l2_pmu.c
    > @@ -801,9 +801,8 @@ static int l2cache_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
    >
    >  static int l2cache_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
    >  {
    > -	struct cluster_pmu *cluster;
    >  	struct l2cache_pmu *l2cache_pmu;
    > -	cpumask_t cluster_online_cpus;
    > +	struct cluster_pmu *cluster;
    >  	unsigned int target;
    >
    >  	l2cache_pmu = hlist_entry_safe(node, struct l2cache_pmu, node);
    > @@ -820,9 +819,8 @@ static int l2cache_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
    >  	cluster->on_cpu = -1;
    >
    >  	/* Any other CPU for this cluster which is still online */
    > -	cpumask_and(&cluster_online_cpus, &cluster->cluster_cpus,
    > -		    cpu_online_mask);
    > -	target = cpumask_any_but(&cluster_online_cpus, cpu);
    > +	target = cpumask_any_and_but(&cluster->cluster_cpus,
    > +				     cpu_online_mask, cpu);
    >  	if (target >= nr_cpu_ids) {
    >  		disable_irq(cluster->irq);
    >  		return 0;
    > --
    > 2.27.0
    >
