Date: Mon, 17 Jul 2017
From: Michal Hocko
Subject: Re: [PATCH v2] mm/page_alloc: Wait for oom_lock before retrying.

On Mon 17-07-17 22:50:47, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Sun 16-07-17 19:59:51, Tetsuo Handa wrote:
> > > Since the memory reclaim path was never designed to handle scheduling
> > > priority inversions, code locations that assume some code path will
> > > eventually complete, without using a synchronization mechanism to
> > > guarantee it, can get stuck (livelock) under a scheduling priority
> > > inversion, because CPU time is not guaranteed to be yielded to the
> > > thread executing that code path.
> > >
> > > The pair of mutex_trylock() in __alloc_pages_may_oom() (waiting for
> > > oom_lock) and schedule_timeout_killable(1) in out_of_memory() (called
> > > with oom_lock already held) is one such location, and it was
> > > demonstrated with artificial stressing that the system gets stuck
> > > effectively forever, because a SCHED_IDLE priority thread is unable to
> > > resume execution at schedule_timeout_killable(1) while many !SCHED_IDLE
> > > priority threads are wasting CPU time [1].
> >
> > I do not understand this. All the contending tasks will go and sleep for
> > 1s. How can they preempt the lock holder?
>
> Not 1s. It sleeps for only one jiffy, which is 1ms if CONFIG_HZ=1000.

Right, for some reason I read that as HZ. My bad!
--
Michal Hocko
SUSE Labs
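
For readers skimming the archive, the two locations being discussed look
roughly like this. It is a simplified sketch based only on the description
in this thread, not a verbatim copy of mm/page_alloc.c or mm/oom_kill.c; in
particular, the backoff call on the contending side and the omitted error
handling are assumptions.

	/* Contending side, in __alloc_pages_may_oom(): */
	if (!mutex_trylock(&oom_lock)) {
		/*
		 * Somebody else already holds oom_lock and is handling the
		 * OOM situation.  Sleep for one jiffy and go back to the
		 * allocation retry loop.
		 */
		schedule_timeout_uninterruptible(1);
		return NULL;
	}
	/* ... select and kill an OOM victim with oom_lock held ... */
	mutex_unlock(&oom_lock);

	/* Holder side, in out_of_memory(), still holding oom_lock: */
	/*
	 * Give the killed victim a chance to exit and free memory before
	 * the caller drops oom_lock and the allocation is retried.
	 */
	schedule_timeout_killable(1);

The livelock described above occurs when the oom_lock holder runs at
SCHED_IDLE priority: the !SCHED_IDLE contenders keep waking up every jiffy,
failing the trylock and burning CPU, so the holder may never be scheduled
again to return from schedule_timeout_killable(1) and release oom_lock.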
