Date: 2018-09-13
From: Nishanth Aravamudan
Subject: Re: [RFC 00/60] Coscheduling for Linux
On 12.09.2018 [21:34:14 +0200], Jan H. Schönherr wrote:
> On 09/12/2018 02:24 AM, Nishanth Aravamudan wrote:
> > [ I am not subscribed to LKML, please keep me CC'd on replies ]
> >
> > I tried a simple test with several VMs (in my initial test, I have 48
> > idle 1-cpu 512-mb VMs and 2 idle 2-cpu, 2-gb VMs) using libvirt, none
> > pinned to any CPUs. When I tried to set all of the top-level libvirt cpu
> > cgroups to be co-scheduled (/bin/echo 1 >
> > /sys/fs/cgroup/cpu/machine/<VM-x>.libvirt-qemu/cpu.scheduled), the
> > machine hangs. This is using cosched_max_level=1.
> >
> > There are several moving parts there, so I tried narrowing it down by
> > only coscheduling one VM, and things seemed fine:
> >
> > /sys/fs/cgroup/cpu/machine/<VM-1>.libvirt-qemu# echo 1 > cpu.scheduled
> > /sys/fs/cgroup/cpu/machine/<VM-1>.libvirt-qemu# cat cpu.scheduled
> > 1
> >
> > One thing that is not entirely obvious to me (but might be completely
> > intentional) is that since by default the top-level libvirt cpu cgroups
> > are empty:
> >
> > /sys/fs/cgroup/cpu/machine/<VM-1>.libvirt-qemu# cat tasks
> >
> > the result of this should be a no-op, right? [This becomes relevant
> > below.] Specifically, all of the qemu threads are in sub-cgroups, whose
> > cpu.scheduled values do not indicate co-scheduling:
> >
> > /sys/fs/cgroup/cpu/machine/<VM-1>.libvirt-qemu# cat emulator/cpu.scheduled
> > 0
> > /sys/fs/cgroup/cpu/machine/<VM-1>.libvirt-qemu# cat vcpu0/cpu.scheduled
> > 0
> >
>
> This setup *should* work. It should be possible to set cpu.scheduled
> independently of the cpu.scheduled values of parent and child task groups.
> Any intermediate regular task group (i.e., cpu.scheduled==0) will still
> contribute its group-fairness aspects.

Ah I see, that makes sense, thank you.
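
So, just to confirm my understanding, the inverse should be equally
valid, e.g. leaving the parent group regular and coscheduling only a
vcpu group (hypothetical values, same paths as above):

/sys/fs/cgroup/cpu/machine/<VM-1>.libvirt-qemu# cat cpu.scheduled
0
/sys/fs/cgroup/cpu/machine/<VM-1>.libvirt-qemu# echo 1 > vcpu0/cpu.scheduled
/sys/fs/cgroup/cpu/machine/<VM-1>.libvirt-qemu# cat vcpu0/cpu.scheduled
1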

> That said, I see a hang, too. It seems to happen when there is a
> cpu.scheduled!=0 group that is not a direct child of the root task group.
> You seem to have "/sys/fs/cgroup/cpu/machine" as an intermediate group.
> (The case ==0 within !=0 within the root task group works for me.)
>
> I'm going to dive into the code.
>
> [...]
> > I am happy to do any further debugging I can do, or try patches on top
> > of those posted on the mailing list.
>
> If you're willing, you can try to get rid of the intermediate "machine"
> cgroup in your setup for the moment. This might tell us whether we're
> looking at the same issue.

Yep, I will do this now. Note that if I just try to set machine's
cpu.scheduled to 1, with no other changes (not even changing any child
cgroup's cpu.scheduled yet), I get the trace below.
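
For concreteness, the triggering write follows the same pattern as the
commands above:

/sys/fs/cgroup/cpu/machine# echo 1 > cpu.scheduled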

[16052.164259] ------------[ cut here ]------------
[16052.168973] rq->clock_update_flags < RQCF_ACT_SKIP
[16052.168991] WARNING: CPU: 59 PID: 59533 at kernel/sched/sched.h:1303 assert_clock_updated.isra.82.part.83+0x15/0x18
[16052.184424] Modules linked in: act_police cls_basic ebtable_filter ebtables ip6table_filter iptable_filter nbd ip6table_raw ip6_tables xt_CT iptable_raw ip_tables s
[16052.255653] xxhash raid10 raid0 multipath linear raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq ses libcrc32c raid1 enclosure scsi
[16052.276029] CPU: 59 PID: 59533 Comm: bash Tainted: G O 4.19.0-rc2-amazon-cosched+ #1
[16052.291142] Hardware name: Dell Inc. PowerEdge R640/0W23H8, BIOS 1.4.9 06/29/2018
[16052.298728] RIP: 0010:assert_clock_updated.isra.82.part.83+0x15/0x18
[16052.305166] Code: 0f 85 75 ff ff ff 48 83 c4 08 5b 5d 41 5c 41 5d 41 5e 41 5f c3 48 c7 c7 28 30 eb 94 31 c0 c6 05 47 18 27 01 01 e8 f4 df fb ff <0f> 0b c3 48 8b 970
[16052.324050] RSP: 0018:ffff9cada610bca8 EFLAGS: 00010096
[16052.329361] RAX: 0000000000000026 RBX: ffff8f06d65bae00 RCX: 0000000000000006
[16052.336580] RDX: 0000000000000000 RSI: 0000000000000096 RDI: ffff8f1edf756620
[16052.343799] RBP: ffff8f06e0462e00 R08: 000000000000079b R09: ffff9cada610bc48
[16052.351018] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8f06e0462e80
[16052.358237] R13: 0000000000000001 R14: ffff8f06e0462e00 R15: 0000000000000001
[16052.365458] FS: 00007ff07ab02740(0000) GS:ffff8f1edf740000(0000) knlGS:0000000000000000
[16052.373647] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[16052.379480] CR2: 00007ff07ab139d8 CR3: 0000002ca2aea002 CR4: 00000000007626e0
[16052.386698] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[16052.393917] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[16052.401137] PKRU: 55555554
[16052.403927] Call Trace:
[16052.406460] update_curr+0x19f/0x1c0
[16052.410116] dequeue_entity+0x21/0x8c0
[16052.413950] ? terminate_walk+0x55/0xb0
[16052.417871] dequeue_entity_fair+0x46/0x1c0
[16052.422136] sdrq_update_root+0x35d/0x480
[16052.426227] cosched_set_scheduled+0x80/0x1c0
[16052.430675] cpu_scheduled_write_u64+0x26/0x30
[16052.435209] cgroup_file_write+0xe3/0x140
[16052.439305] kernfs_fop_write+0x110/0x190
[16052.443397] __vfs_write+0x26/0x170
[16052.446974] ? __audit_syscall_entry+0x101/0x130
[16052.451674] ? _cond_resched+0x15/0x30
[16052.455509] ? __sb_start_write+0x41/0x80
[16052.459600] vfs_write+0xad/0x1a0
[16052.462997] ksys_write+0x42/0x90
[16052.466397] do_syscall_64+0x55/0x110
[16052.470152] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[16052.475286] RIP: 0033:0x7ff07a1e93c0
[16052.478943] Code: 73 01 c3 48 8b 0d c8 2a 2d 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 83 3d bd 8c 2d 00 00 75 10 b8 01 00 00 00 0f 05 <48> 3d 01 f0 ff ff4
[16052.497827] RSP: 002b:00007ffc73e335b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[16052.505498] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007ff07a1e93c0
[16052.512715] RDX: 0000000000000002 RSI: 00000000023a0408 RDI: 0000000000000001
[16052.519936] RBP: 00000000023a0408 R08: 000000000000000a R09: 00007ff07ab02740
[16052.527156] R10: 00007ff07a4bb6a0 R11: 0000000000000246 R12: 00007ff07a4bd400
[16052.534374] R13: 0000000000000002 R14: 0000000000000001 R15: 0000000000000000
[16052.541593] ---[ end trace b20c73e6c2bec22c ]---

I'll reboot and move some cgroups around :)
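
Concretely, I'll try something along these lines (a rough sketch with
cgroup-v1 paths as above; it assumes libvirt doesn't immediately
re-create the intermediate group, and it flattens the emulator/vcpu
sub-groups, which should be fine for a test):

/sys/fs/cgroup/cpu# mkdir <VM-1>.libvirt-qemu
/sys/fs/cgroup/cpu# # move each qemu thread out of the "machine" tree
/sys/fs/cgroup/cpu# for t in $(cat machine/<VM-1>.libvirt-qemu/*/tasks); do echo $t > <VM-1>.libvirt-qemu/tasks; done
/sys/fs/cgroup/cpu# echo 1 > <VM-1>.libvirt-qemu/cpu.scheduled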

Thanks,
Nish
