On Fri, Jan 29, 2010 at 10:51 PM, Jason Wessel
<jason.wessel@windriver.com> wrote:
> Ingo Molnar wrote:
>> * Jason Wessel <jason.wessel@windriver.com> wrote:
>>
>>
>>> @@ -118,6 +125,14 @@ void softlockup_tick(void)
>>>      }
>>>
>>>      if (touch_ts == 0) {
>>> +            if (unlikely(per_cpu(softlock_touch_sync, this_cpu))) {
>>> +                    /*
>>> +                     * If the time stamp was touched atomically
>>> +                     * make sure the scheduler tick is up to date.
>>> +                     */
>>> +                    per_cpu(softlock_touch_sync, this_cpu) = false;
>>> +                    sched_clock_tick();
>>> +            }
>>>              __touch_softlockup_watchdog();
>>>              return;
>>>
>>
>> Shouldn't just all of touch_softlockup_watchdog() gain this new
>> sched_clock_tick() call, instead of doing this ugly flaggery? Or would
>> that lock up or misbehave in other ways in some cases?
>>
>> That would also make the patch much simpler I guess, as we'd only have
>> the chunk above.
>>
>
> We have already been down that road, and it breaks other cases.
>
> http://lkml.org/lkml/2009/7/28/204
>
> Specifically the test case of:
>
> echo 3 > /proc/sys/kernel/softlockup_thresh
>
> And then some kernel code in a thread like:
>         local_irq_disable();
>         printk("Disable local irq for 11 seconds\n");
>         mdelay(11000);
>         local_irq_enable();

Hi Jason,

Maybe this problem was fixed by
commit baf48f6577e581a9adb8fe849dc80e24b21d171d - "softlock: fix false
panic which can occur if softlockup_thresh is reduced".

Thanks,
Dongdong

>
>
> I could consider calling sched_cpu_clock() before returning the kernel
> to normal execution, but that didn't look very safe to call from the
> exception context, which is why it was delayed until the next time the
> soft lockup code ran.
>
> Resuming from a long sleep is an ugly problem, so I am open to short term
> and long term suggestions, including a polling time API (obviously we
> would prefer not to go down that rat hole :-)
>
> Jason.
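
For context, the softlockup_tick() hunk quoted above only consumes the
per-CPU softlock_touch_sync flag; the other half of the approach is a touch
helper that the debugger/exception path calls on resume, which merely sets
that flag and zeroes the touch timestamp instead of calling
sched_clock_tick() there directly. A minimal sketch of such a helper
follows; the helper name (presumably touch_softlockup_watchdog_sync() in the
full patch) and the timestamp variable name are assumptions here, since that
part of the patch is not quoted above:

#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(bool, softlock_touch_sync);
static DEFINE_PER_CPU(unsigned long, softlockup_touch_ts); /* assumed name */

void touch_softlockup_watchdog_sync(void)                  /* assumed name */
{
        int cpu = raw_smp_processor_id();

        /*
         * Only per-CPU flags are set here, which is safe from the
         * exception context.  The next softlockup_tick() sees the zero
         * timestamp, notices the sync flag, calls sched_clock_tick() to
         * bring the clock up to date, and then resets the timestamp via
         * __touch_softlockup_watchdog().
         */
        per_cpu(softlock_touch_sync, cpu) = true;
        per_cpu(softlockup_touch_ts, cpu) = 0;
}

The point of the flag is to defer sched_clock_tick() to the next
softlockup_tick() rather than calling it from the touch path, which Jason
notes above did not look safe from the exception context.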
