    Subject: Re: [PATCH] squashfs: enable __GFP_FS in ->readpage to prevent hang in mem alloc
    From: Hou Tao
    Date: 2018-12-18
    Hi,

    On 2018/12/17 18:51, Tetsuo Handa wrote:
    > On 2018/12/17 18:33, Michal Hocko wrote:
    >> On Sun 16-12-18 19:51:57, Matthew Wilcox wrote:
    >> [...]
    >>> Ah, yes, that makes perfect sense. Thank you for the explanation.
    >>>
    >>> I wonder if the correct fix, however, is not to move the check for
    >>> GFP_NOFS in out_of_memory() down to below the check whether to kill
    >>> the current task. That would solve your problem, and I don't _think_
    >>> it would cause any new ones. Michal, you touched this code last, what
    >>> do you think?
    >>
    >> What do you mean exactly? Whether we kill the current task or something
    >> else doesn't change much about the fact that NOFS is a reclaim-restricted
    >> context and we might kill too early. If the fs can do GFP_FS then it is
    >> obviously a better thing to do because FS metadata can be reclaimed as
    >> well and therefore there is potentially less memory pressure on
    >> application data.
    >>
    >
    > I interpreted "to move the check for GFP_NOFS in out_of_memory() down to
    > below the check whether to kill the current task" as
    >
    > @@ -1077,15 +1077,6 @@ bool out_of_memory(struct oom_control *oc)
    >  	}
    >
    >  	/*
    > -	 * The OOM killer does not compensate for IO-less reclaim.
    > -	 * pagefault_out_of_memory lost its gfp context so we have to
    > -	 * make sure exclude 0 mask - all other users should have at least
    > -	 * ___GFP_DIRECT_RECLAIM to get here.
    > -	 */
    > -	if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS))
    > -		return true;
    > -
    > -	/*
    >  	 * Check if there were limitations on the allocation (only relevant for
    >  	 * NUMA and memcg) that may require different handling.
    >  	 */
    > @@ -1104,6 +1095,19 @@ bool out_of_memory(struct oom_control *oc)
    >  	}
    >
    >  	select_bad_process(oc);
    > +
    > +	/*
    > +	 * The OOM killer does not compensate for IO-less reclaim.
    > +	 * pagefault_out_of_memory lost its gfp context so we have to
    > +	 * make sure exclude 0 mask - all other users should have at least
    > +	 * ___GFP_DIRECT_RECLAIM to get here.
    > +	 */
    > +	if ((oc->gfp_mask && !(oc->gfp_mask & __GFP_FS)) && oc->chosen &&
    > +	    oc->chosen != (void *)-1UL && oc->chosen != current) {
    > +		put_task_struct(oc->chosen);
    > +		return true;
    > +	}
    > +
    >  	/* Found nothing?!?! */
    >  	if (!oc->chosen) {
    >  		dump_header(oc, NULL);
    > which is prefixed by "the correct fix ... is not" in Matthew's sentence.
    >
    > Behaving as if sysctl_oom_kill_allocating_task == 1 whenever __GFP_FS is
    > not used would not be the correct fix. But ...
    >
    > Hou Tao wrote:
    >> There is no need to disable __GFP_FS in ->readpage:
    >> * It's a read-only fs, so there will be no dirty/writeback page and
    >> there will be no deadlock against the caller's locked page
    >
    > is a read-only filesystem sufficient to make __GFP_FS safe to use?
    >
    > Doesn't "whether it is safe to use __GFP_FS" depend on "whether fs locks
    > are held" rather than "whether the fs has dirty/writeback pages"?
    >
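
    (As an aside before answering: the kernel encodes exactly this "fs locks
    are held, so reclaim must not re-enter the fs" rule with the scoped NOFS
    API. A minimal sketch; the helper below is hypothetical, while
    memalloc_nofs_save()/memalloc_nofs_restore() and kmalloc() are real
    kernel interfaces:)

    #include <linux/sched/mm.h>	/* memalloc_nofs_save/restore */
    #include <linux/slab.h>

    /* Hypothetical: allocate while fs locks are held. */
    static void *my_fs_alloc_under_lock(size_t size)
    {
    	unsigned int flags;
    	void *p;

    	/* Enter a section in which fs locks are held. */
    	flags = memalloc_nofs_save();

    	/*
    	 * GFP_KERNEL is implicitly degraded to GFP_NOFS here, so
    	 * direct reclaim cannot call back into the filesystem.
    	 */
    	p = kmalloc(size, GFP_KERNEL);

    	memalloc_nofs_restore(flags);
    	return p;
    }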
    In my understanding (correct me if I am wrong), there are three ways in
    which reclaim may invoke fs-related code and cause a deadlock:

    (1) Write-back of dirty pages. Not possible for squashfs, because it is a
    read-only fs.
    (2) Reclaim of inodes & dentries. The current file is in use, so it will
    not be reclaimed; for other reclaimable inodes, squashfs_destroy_inode()
    will be invoked, and it doesn't take any locks.
    (3) A custom shrinker defined by the fs. squashfs defines no custom
    shrinker (see the sketch below).
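
    (To make (3) concrete, this is roughly what a fs-defined shrinker looks
    like; the my_fs_* names are hypothetical, while struct shrinker and
    register_shrinker() are the real interfaces. Reclaim may call
    ->scan_objects() from any allocation point, so a shrinker that takes fs
    locks is where a __GFP_FS deadlock would come from:)

    #include <linux/shrinker.h>

    /* Hypothetical cache accounting/freeing helpers. */
    extern unsigned long my_fs_nr_cached(void);
    extern unsigned long my_fs_free_cached(unsigned long nr);

    static unsigned long my_fs_count(struct shrinker *s,
    				 struct shrink_control *sc)
    {
    	return my_fs_nr_cached();
    }

    static unsigned long my_fs_scan(struct shrinker *s,
    				struct shrink_control *sc)
    {
    	/*
    	 * A shrinker that took fs locks here could deadlock against
    	 * a __GFP_FS allocation made under those same locks.
    	 */
    	return my_fs_free_cached(sc->nr_to_scan);
    }

    static struct shrinker my_fs_shrinker = {
    	.count_objects	= my_fs_count,
    	.scan_objects	= my_fs_scan,
    	.seeks		= DEFAULT_SEEKS,
    };

    /* Registered once at init time: register_shrinker(&my_fs_shrinker); */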

    So my point is that even if a page lock is already held by
    squashfs_readpage() and reclaim calls back into squashfs code, there will
    be no deadlock, so it's safe to use __GFP_FS.
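
    (Schematically, the change under discussion amounts to not masking out
    __GFP_FS when grabbing the extra page-cache pages covered by a squashfs
    block. A sketch, not the literal patch; mapping_gfp_mask(),
    mapping_gfp_constraint() and pagecache_get_page() are real helpers,
    grab_sibling_page() is a made-up wrapper:)

    #include <linux/pagemap.h>

    static struct page *grab_sibling_page(struct address_space *mapping,
    				      pgoff_t index)
    {
    	/* Before: gfp_t gfp = mapping_gfp_constraint(mapping, GFP_NOFS); */
    	/* After: allow __GFP_FS, which is safe per the argument above. */
    	gfp_t gfp = mapping_gfp_mask(mapping);

    	return pagecache_get_page(mapping, index,
    				  FGP_LOCK | FGP_CREAT | FGP_NOWAIT, gfp);
    }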

    Regards,
    Tao
