    Date: 2023-03-29
    From: Chen Yu
    Subject: Re: [RFC PATCH] sched/fair: Make tg->load_avg per node
    On 2023-03-29 at 14:36:44 +0200, Dietmar Eggemann wrote:
    > On 28/03/2023 14:56, Aaron Lu wrote:
    > > Hi Dietmar,
    > >
    > > Thanks for taking a look.
    > >
    > > On Tue, Mar 28, 2023 at 02:09:39PM +0200, Dietmar Eggemann wrote:
    > >> On 27/03/2023 07:39, Aaron Lu wrote:
    >
    > [...]
    >
    > > Did you test with a v6.3-rc based kernel?
    > > I encountered another problem on those kernels and had to temporarily use
    > > a v6.2 based kernel, maybe you have to do the same:
    > > https://lore.kernel.org/lkml/20230327080502.GA570847@ziqianlu-desk2/
    >
    > No, I'm also on v6.2.
    >
    > >> Is your postgres/sysbench running in a cgroup with cpu controller
    > >> attached? Mine isn't.
    > >
    > > Yes, I had postgres and sysbench running in the same cgroup with the cpu
    > > controller enabled. Docker created the cgroup directory under
    > > /sys/fs/cgroup/system.slice/docker-XXX, and cgroup.controllers has cpu
    > > there.
    >
    > I'm running postgresql service directly on the machine. I boot now with
    > 'cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1' so I can add the
    > cpu controller to:
    >
    > system.slice/system-postgresql.slice/postgresql@11-main.service
    >
    > where the 96 postgres threads run and to
    >
    > user.slice/user-1005.slice/session-4.scope
    >
    > where the 96 sysbench threads run.
    >
    > Checked with systemd-cgls and `cat /sys/kernel/debug/sched/debug` that
    > those threads are really running there.
    >
    > Still not seeing `update_load_avg` or `update_cfs_group` in perf report,
    > only some very low values for `update_blocked_averages`.
    >
    > Also added CFS BW throttling to both cgroups. No change.
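
    (For reference: on cgroup v2 the bandwidth knob is cpu.max; the quota/period
    values below are just an example, 4 CPUs worth of runtime per 100ms period.)

    echo "400000 100000" > /sys/fs/cgroup/system.slice/system-postgresql.slice/postgresql@11-main.service/cpu.max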
    >
    > Then I moved session-4.scope's shell into `postgresql@11-main.service`
    > so that `postgres` and `sysbench` threads run in the same cgroup.
    >
    > Didn't change much.
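
    (Moving the shell is just a write of its PID into the target cgroup, so
    sysbench started from it inherits the cgroup, e.g.:)

    echo $$ > /sys/fs/cgroup/system.slice/system-postgresql.slice/postgresql@11-main.service/cgroup.procs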
    >
    > >> Maybe I'm doing something else differently?
    > >
    > > Maybe. You didn't mention how you started postgres; if you started it from
    > > the same session as sysbench and autogroup is enabled, then all those
    > > tasks would be in the same autogroup taskgroup, which should have the
    > > same effect as my setup.
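
    Side note: whether autogroup is in play can be checked directly:

    # 1 if autogrouping is enabled, 0 otherwise
    cat /proc/sys/kernel/sched_autogroup_enabled
    # shows the autogroup (if any) of the oldest postgres process
    cat /proc/$(pgrep -o postgres)/autogroup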
    >
    > This should be now close to my setup running `postgres` and `sysbench`
    > in `postgresql@11-main.service`.
    >
    > > Anyway, you can try the following steps to see if you can reproduce this
    > > problem on your Arm64 server:
    > >
    > > 1. docker pull postgres
    > > 2. sudo docker run --rm --name postgres-instance -e POSTGRES_PASSWORD=mypass -e POSTGRES_USER=sbtest -d postgres -c shared_buffers=80MB -c max_connections=250
    > > 3. go inside the container:
    > >    sudo docker exec -it $the_just_started_container_id bash
    > > 4. install sysbench inside the container:
    > >    apt update && apt install sysbench
    > > 5. prepare:
    > >    root@container:/# sysbench --db-driver=pgsql --pgsql-user=sbtest --pgsql-password=mypass --pgsql-db=sbtest --pgsql-port=5432 --tables=16 --table-size=10000 --threads=224 --time=60 --report-interval=2 /usr/share/sysbench/oltp_read_only.lua prepare
    > > 6. run:
    > >    root@container:/# sysbench --db-driver=pgsql --pgsql-user=sbtest --pgsql-password=mypass --pgsql-db=sbtest --pgsql-port=5432 --tables=16 --table-size=10000 --threads=224 --time=60 --report-interval=2 /usr/share/sysbench/oltp_read_only.lua run
    >
    > I would have to find time to learn how to set up docker on my machine
    > ... But I use very similar values for the setup and sysbench test.
    >
    > > Note that I used 224 threads, where this problem is visible. I also tried
    > > 96 threads; update_cfs_group() and update_load_avg() cost about 1% of cycles then.
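
    For completeness, numbers like these can be collected with something along
    the following lines; the exact perf invocation is my assumption, not
    necessarily what Aaron used:

    # sample all CPUs with call graphs while the benchmark is running
    perf record -a -g -- sleep 10
    # then check the cycle share of the two functions
    perf report --stdio | grep -E 'update_load_avg|update_cfs_group'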
    >
    > True, I was hoping to see at least the 1% ;-)
    According to Aaron's description, the relatively high cost of update_load_avg()
    was caused by cross-node access. If the task group is allocated on node0, but
    some tasks in this task group are load balanced to node1, could the issue be
    triggered more easily? Disabling NUMA balancing should keep those tasks from
    being migrated back towards node0:

    echo 0 > /sys/kernel/debug/sched/numa_balancing
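
    To push that further, the remote tasks could also be pinned explicitly; a
    sketch, the node ids are illustrative:

    # keep the benchmark threads on node1 while the task group structure
    # was allocated on node0
    numactl --cpunodebind=1 sysbench <options as in step 6 above> run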

    thanks,
    Chenyu
