Subject: Re: [PATCH v2] mm/page_alloc: Wait for oom_lock before retrying.
From: Tetsuo Handa
Date: 2017-07-17
Michal Hocko wrote:
> On Sun 16-07-17 19:59:51, Tetsuo Handa wrote:
> > Since the memory reclaim path has never been designed to handle
> > scheduling priority inversions, code paths which assume that they
> > will eventually complete without using synchronization mechanisms
> > can get stuck (livelock) due to scheduling priority inversions,
> > because CPU time is not guaranteed to be yielded to the thread
> > executing such a code path.
> >
> > mutex_trylock() in __alloc_pages_may_oom() (waiting for oom_lock)
> > combined with schedule_timeout_killable(1) in out_of_memory() (with
> > oom_lock already held) is one such location, and it was demonstrated
> > using artificial stressing that the system gets stuck effectively
> > forever, because a SCHED_IDLE priority thread is unable to resume
> > execution at schedule_timeout_killable(1) while many !SCHED_IDLE
> > priority threads are wasting CPU time [1].
>
> I do not understand this. All the contending tasks will go and sleep for
> 1s. How can they preempt the lock holder?

Not 1s. It sleeps for only one jiffy, which is 1ms if CONFIG_HZ=1000.

And 1ms may not be long enough for the owner of oom_lock to make
progress when there are many threads doing the same thing. I
demonstrated that a SCHED_IDLE oom_lock owner is completely defeated
by a bunch of !SCHED_IDLE contending threads.
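
Here is a minimal userspace analogue of that situation (a sketch, not
the kernel code; the names are mine: a pthread mutex stands in for
oom_lock, a busy trylock loop stands in for the allocation-retry path,
and the 1ms sleep mimics schedule_timeout_killable(1) at
CONFIG_HZ=1000). Build with "gcc -O2 -pthread"; on a machine where the
contenders keep all CPUs busy, the SCHED_IDLE holder needs far longer
to release the lock than when it runs alone:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* "oom_lock" */

static void *holder(void *unused)
{
	struct sched_param sp = { .sched_priority = 0 };
	int i;

	/* The lock owner runs at SCHED_IDLE, as in the report. */
	if (sched_setscheduler(0, SCHED_IDLE, &sp))
		perror("sched_setscheduler");
	pthread_mutex_lock(&lock);
	for (i = 0; i < 100; i++) {
		volatile long j;

		usleep(1000); /* one-jiffy-style sleep, lock still held */
		/* CPU work that must finish before the lock is dropped */
		for (j = 0; j < 10000000; j++)
			;
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

static void *contender(void *unused)
{
	/* Like the allocation slowpath: trylock, fail, retry forever. */
	for (;;) {
		if (!pthread_mutex_trylock(&lock))
			pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t h, c;
	struct timespec t0, t1;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	pthread_create(&h, NULL, holder, NULL);
	usleep(100000); /* give the holder time to take the lock */
	for (i = 0; i < 4 * ncpus; i++)
		pthread_create(&c, NULL, contender, NULL);
	pthread_join(h, NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("holder finished in %.1f seconds\n",
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0; /* exiting main() terminates the contenders */
}

The kernel case is worse than this model, because the contenders
cannot stop retrying until the oom_lock holder completes the OOM kill,
so the load never drains by itself.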
