From: Qais Yousef
Date: 2020-03-05 13:44
Subject: Re: [PATCH v3 1/6] sched/rt: cpupri_find: Implement fallback mechanism for !fit case
On 03/04/20 20:01, Qais Yousef wrote:
> On 03/04/20 13:54, Steven Rostedt wrote:
> > > If we fix 1, then assuming found == -1 for all levels, we'll still have the
> > > problem that the mask is stale.
> > >
> > > We can do a full scan again as Tao was suggesting, that is, a 2nd scan
> > > without any fitness check. But isn't that expensive?
> >
> > I was hoping to try to avoid that, but it's not that expensive and will
> > probably seldom happen. Perhaps we should run some test cases and trace the
> > results to see how often that can happen.
> >
> > >
> > > We risk the mask being stale anyway directly after selecting it. Or a priority
> > > level might become the lowest level just after we dismissed it.
> >
> > Sure, but that's still a better effort.
>
> Okay, let me run some quick tests and send an updated series if they don't
> show anything suspicious.
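
To make the two fallback options concrete before the numbers, here is a
minimal standalone sketch of what is being compared (this is *not* the
kernel's cpupri code; level_mask[], fits_capacity() and find_lowest() are
simplified stand-ins for the per-priority CPU masks, the capacity fitness
check, and the scan loop):

#include <stdbool.h>
#include <stdio.h>

#define NR_LEVELS 4
#define NR_CPUS   6

/* Toy model: one CPU bitmask per priority level; only little CPUs present. */
static unsigned int level_mask[NR_LEVELS] = { 0x03, 0x0c, 0x00, 0x00 };

/* Toy fitness check: pretend only CPUs 4-5 (the big ones) fit the task. */
static bool fits_capacity(int cpu)
{
    return cpu >= 4;
}

/*
 * Return the first (lowest) level with a CPU passing @check, or -1.
 * On the way, remember the last level that had CPUs but no fitting one.
 */
static int find_lowest(bool (*check)(int), unsigned int *mask, int *unfit_idx)
{
    for (int lvl = 0; lvl < NR_LEVELS; lvl++) {
        unsigned int fit_mask = 0;

        if (!level_mask[lvl])
            continue;

        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
            if (!(level_mask[lvl] & (1u << cpu)))
                continue;
            if (!check || check(cpu))
                fit_mask |= 1u << cpu;
        }

        if (fit_mask) {
            *mask = fit_mask;
            return lvl;
        }
        if (unfit_idx)
            *unfit_idx = lvl;
    }
    return -1;
}

int main(void)
{
    unsigned int mask = 0;
    int unfit_idx = -1;
    int lvl = find_lowest(fits_capacity, &mask, &unfit_idx);

    if (lvl < 0) {
        /*
         * Option A: reuse the remembered unfit level; cheap, but its
         * mask may already be stale:
         *
         *     lvl  = unfit_idx;
         *     mask = level_mask[unfit_idx];
         *
         * Option B: rescan all levels with the fitness check disabled:
         */
        lvl = find_lowest(NULL, &mask, NULL);
    }

    printf("level=%d mask=0x%x unfit_idx=%d\n", lvl, mask, unfit_idx);
    return 0;
}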

I ran my 6-tasks test, which spawns 6 big tasks on a 4 little + 2 big system.
I verified that the 2nd fallback search happens during this scenario.

The duration of the run was 10 seconds.
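
The load is roughly of this shape (a standalone sketch, not my actual test
harness; the FIFO priority and the busy loop are arbitrary). The point is
that uclamp_min = 1024 makes each RT task fit only on the 2 big CPUs, so
with 6 such tasks the fallback path has to trigger:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

/* Minimal sched_setattr() plumbing; glibc has no wrapper for it. */
struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;
    uint64_t sched_deadline;
    uint64_t sched_period;
    uint32_t sched_util_min;
    uint32_t sched_util_max;
};

#ifndef SCHED_FIFO
#define SCHED_FIFO 1
#endif
#ifndef SCHED_FLAG_UTIL_CLAMP_MIN
#define SCHED_FLAG_UTIL_CLAMP_MIN 0x20
#endif

static int sched_setattr(pid_t pid, struct sched_attr *attr)
{
    return syscall(SYS_sched_setattr, pid, attr, 0);
}

int main(void)
{
    for (int i = 0; i < 6; i++) {
        if (!fork()) {
            struct sched_attr attr = {
                .size           = sizeof(attr),
                .sched_policy   = SCHED_FIFO,
                .sched_priority = 50,
                /* uclamp_min = 1024: only big CPUs fit this task */
                .sched_flags    = SCHED_FLAG_UTIL_CLAMP_MIN,
                .sched_util_min = 1024,
            };
            time_t end = time(NULL) + 10;   /* 10s busy loop */

            if (sched_setattr(0, &attr))
                perror("sched_setattr");

            while (time(NULL) < end)
                ;
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;
    return 0;
}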

Using trace-cmd record --profile -l select_task_rq_rt I got the following
results for:

A) Falling back to searching the last unfit_priority_idx only
B) Falling back to a full search with fitness check disabled

A)
Event: func: select_task_rq_rt() (600) Total: 2486002 Avg: 4143 Max: 11400(ts:1084.358268) Min:1(ts:1080.258259)
Event: func: select_task_rq_rt() (2) Total: 11900 Avg: 5950 Max: 8680(ts:1079.659145) Min:3220(ts:1079.659288)
Event: func: select_task_rq_rt() (4) Total: 13080 Avg: 3270 Max: 3580(ts:1079.659815) Min:3080(ts:1079.659475)
Event: func: select_task_rq_rt() (3) Total: 15240 Avg: 5080 Max: 6260(ts:1079.659222) Min:4180(ts:1079.659591)
Event: func: select_task_rq_rt() (1) Total: 5500 Avg: 5500 Max: 5500(ts:1079.659344) Min:5500(ts:1079.659344)
Event: func: select_task_rq_rt() (2) Total: 8440 Avg: 4220 Max: 4600(ts:1079.659380) Min:3840(ts:1079.659786)
Event: func: select_task_rq_rt() (3) Total: 22920 Avg: 7640 Max: 11100(ts:1079.659672) Min:3620(ts:1079.659891)

B)
Event: func: select_task_rq_rt() (600) Total: 2468873 Avg: 4114 Max: 12580(ts:510.268625) Min:1(ts:516.868611)
Event: func: select_task_rq_rt() (4) Total: 19260 Avg: 4815 Max: 7920(ts:507.369259) Min:3340(ts:507.369669)
Event: func: select_task_rq_rt() (1) Total: 4700 Avg: 4700 Max: 4700(ts:507.369583) Min:4700(ts:507.369583)
Event: func: select_task_rq_rt() (3) Total: 7640 Avg: 2546 Max: 3320(ts:507.369403) Min:1300(ts:507.369428)
Event: func: select_task_rq_rt() (2) Total: 8760 Avg: 4380 Max: 5100(ts:507.369497) Min:3660(ts:507.369338)
Event: func: select_task_rq_rt() (1) Total: 3240 Avg: 3240 Max: 3240(ts:507.369450) Min:3240(ts:507.369450)

So for both, the max seems to be ~12us, which I think is fine.

In the profile report I got something like the following, which I didn't know
how to interpret. The max value is high: ~72ms.

Did I use and read the profile output correctly? I still have the trace.dat
files for the 2 runs.


| + 0x266cb00000000
| | 1% (1) time:72836320 max:72836320(ts:1079.785460) min:72836320(ts:1079.785460) avg:72836320
| | 0x309000a
| | select_task_rq_rt (0xffff800010152c74)
| | 0x0
| | 0xe885dcc56c
| | 0xe885dcdb24
| | 0x2788400000000
| | 0x1090047

...

+ select_task_rq_rt (0xffff800010152c74)
| 88% (1) time:72100 max:72100(ts:507.369339) min:72100(ts:507.369339) avg:72100
| 0x24e4b00000000
| 0x309000a
| select_task_rq_rt (0xffff800010152c74)
| 0x0
| 0x53be002898
| 0x53be003b6c
| 0x26bc400000000
|


Thanks

--
Qais Yousef
