 
    From: Naoya Horiguchi <naoya.horiguchi@nec.com>
    Subject: Re: [PATCH v2] mm,hwpoison: return -EBUSY when page already poisoned
    Date: 2021-03-10
    On Tue, Mar 09, 2021 at 12:01:40PM -0800, Luck, Tony wrote:
    > On Tue, Mar 09, 2021 at 08:28:24AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
    > > On Tue, Mar 09, 2021 at 02:35:34PM +0800, Aili Yao wrote:
    > > > When the page is already poisoned, another memory_failure() call on
    > > > the same page now returns 0, meaning OK. For nested MCE handling,
    > > > this behavior may lead to an MCE loop. Example:
    > > >
    > > > 1. When LMCE is enabled, two processes A and B run on different
    > > > cores X and Y respectively, and both access the same page. The page
    > > > gets corrupted when process A accesses it, an MCE is raised to
    > > > core X, and error processing is underway.
    > > >
    > > > 2. Then B accesses the page and triggers another MCE to core Y. It
    > > > also does error processing, sees TestSetPageHWPoison return true,
    > > > and 0 is returned.
    > > >
    > > > 3. kill_me_maybe() will check the return value:
    > > >
    > > > 1244 static void kill_me_maybe(struct callback_head *cb)
    > > > 1245 {
    > > >
    > > > 1254         if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
    > > > 1255             !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
    > > > 1256                 set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
    > > > 1257                 sync_core();
    > > > 1258                 return;
    > > > 1259         }
    > > >
    > > > 1267 }
    > > >
    > > > 4. Error processing for B ends, and possibly nothing happens if
    > > > early kill is not set. Process B then re-executes the instruction,
    > > > takes the MCE again, and the loop repeats. Also, the set_mce_nospec()
    > > > here is not proper; refer to commit fd0e786d9d09 ("x86/mm,
    > > > mm/hwpoison: Don't unconditionally unmap kernel 1:1 pages").
    > > >
    > > > Other callers that care about the return value of memory_failure()
    > > > should check why they want to process a memory error that has
    > > > already been processed. This behavior seems reasonable.
    > >
    > > Other reviewers shared ideas about the return value, but actually
    > > I'm not sure which one is best (EBUSY is not that bad).
    > > What we need to fix the reported issue is to return a non-zero value
    > > for the "already poisoned" case (the value itself is not so important).
    > >
    > > Other callers of memory_failure() (mostly test programs) could see
    > > the change of return value, but they can already see EBUSY now, and
    > > anyway they should check dmesg for more detail about why it failed,
    > > so the impact of the change is not so big.
    > >
    > > >
    > > > Signed-off-by: Aili Yao <yaoaili@kingsoft.com>
    > >
    > > Reviewed-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
    >
    > I think that both this and my "add a mutex" patch are
    > too simplistic for this complex problem :-(
    >
    > When multiple CPUs race to call memory_failure() for the same
    > page we need the following results:
    >
    > 1) Poison page should be marked not-present in all tasks
    > I think the mutex patch achieves this as long as
    > memory_failure() doesn't hit an error[1].

    My assumption is that reserved kernel pages are not supposed to be mapped
    into any process, so once memory_failure() judges a page as such, we never
    convert any page table entry to a hwpoison entry. Is that correct? So my
    question is why some user-mapped page was judged to be a "reserved kernel
    page". Does futex allow such a situation?

    I personally tried some test cases crossing futex and hwpoison, but I
    couldn't reproduce the "reserved kernel page" case. If possible, could you
    provide me with a little more detail about your test case?
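
    For reference, such a crossing test has roughly the following shape.
    This is just an illustrative sketch, not my exact program; it injects
    the error with madvise(MADV_HWPOISON), which needs CAP_SYS_ADMIN and
    CONFIG_MEMORY_FAILURE:

	#define _GNU_SOURCE
	#include <linux/futex.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* back the futex word with an anonymous page we can poison */
		int *uaddr = mmap(NULL, getpagesize(),
				  PROT_READ | PROT_WRITE,
				  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
		*uaddr = 0;

		if (fork() == 0) {
			/* child blocks in the kernel on the futex word */
			syscall(SYS_futex, uaddr, FUTEX_WAIT, 0,
				NULL, NULL, 0);
			_exit(0);
		}

		sleep(1);	/* let the child reach futex_wait() */

		/* inject a memory error on the page under the futex word */
		if (madvise(uaddr, getpagesize(), MADV_HWPOISON))
			perror("madvise(MADV_HWPOISON)");

		/* wake the child so it touches the poisoned page on return */
		syscall(SYS_futex, uaddr, FUTEX_WAKE, 1, NULL, NULL, 0);
		return 0;
	}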

    >
    > 2) All tasks that were executing an instruction that was accessing
    > the poison location should see a SIGBUS with virtual address and
    > BUS_MCEERR_AR signature in siginfo.
    > Neither solution achieves this. The -EBUSY return ensures
    > that there is a SIGBUS for the tasks that get the -EBUSY
    > return, but no siginfo details.

    Yes, that's not yet perfect, but avoiding the MCE loop is progress.
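
    To be concrete, the change is essentially the following hunk in
    mm/memory-failure.c (the exact context may differ between versions):

	if (TestSetPageHWPoison(p)) {
		pr_err("Memory failure: %#lx: already hardware poisoned\n",
		       pfn);
		return -EBUSY;	/* was: return 0; */
	}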

    > Just the mutex patch *might* have BUS_MCEERR_AO signature
    > to the race losing tasks, but only if they have PF_MCE_EARLY
    > set (so says the comment in kill_proc() ... but I don't
    > see the code checking for that bit).

    Commit 30c9cf49270 might explain this: task_early_kill() gets to call
    find_early_kill_thread() (which checks PF_MCE_EARLY) in this case.
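
    For reference, the per-thread check looks roughly like this (sketched
    from mm/memory-failure.c, so details may differ):

	static struct task_struct *find_early_kill_thread(struct task_struct *tsk)
	{
		struct task_struct *t;

		/* prefer a thread that opted in with PF_MCE_EARLY, e.g. a
		 * dedicated SIGBUS handling thread; otherwise fall back to
		 * the system-wide early-kill sysctl */
		for_each_thread(tsk, t) {
			if (t->flags & PF_MCE_PROCESS) {
				if (t->flags & PF_MCE_EARLY)
					return t;
			} else if (sysctl_memory_failure_early_kill) {
				return t;
			}
		}
		return NULL;
	}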

    >
    > #2 seems hard to achieve ... there are inherent races that mean the
    > AO SIGBUS could have been queued to the task before it even hits
    > the poison.

    So I feel that we might want some change in memory_failure() to send
    SIGBUS with BUS_MCEERR_AR to the "race-losing tasks" within the new mutex.
    I agree that how we find the error address is also a problem.
    For now, I still have no better idea than a page table walk.
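
    A rough sketch of that direction, where kill_race_loser() and
    lookup_poisoned_address() are hypothetical placeholders and only
    send_sig_mceerr() is an existing kernel API:

	/* called under the memory_failure() mutex when the page turns
	 * out to be already poisoned */
	static void kill_race_loser(struct task_struct *t, struct page *p)
	{
		unsigned long addr;

		/* placeholder: resolve the user VA by walking t->mm */
		addr = lookup_poisoned_address(t->mm, p);

		/* action-required SIGBUS with address and granularity */
		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)addr,
				PAGE_SHIFT, t);
	}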

    >
    > Maybe we should include a non-action:
    >
    > 3) A task should only see one SIGBUS per poison?
    > Not sure if this is achievable either ... what if the task
    > has the same page mapped multiple times?

    My thought is that hwpoison-aware applications could have a dedicated
    thread for SIGBUS handling, so it's better to be prepared for multiple
    signals for the same error event.
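
    For illustration, such a dedicated handler could look like the sketch
    below; the recovery logic itself is just a placeholder:

	#define _GNU_SOURCE
	#include <signal.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
	{
		if (si->si_code != BUS_MCEERR_AO &&
		    si->si_code != BUS_MCEERR_AR)
			return;
		/* si_addr/si_addr_lsb identify the poisoned region;
		 * duplicates for the same address must be handled
		 * idempotently */
		fprintf(stderr, "hwpoison at %p (lsb=%d, %s)\n",
			si->si_addr, si->si_addr_lsb,
			si->si_code == BUS_MCEERR_AR ? "AR" : "AO");
		/* placeholder: discard/rebuild the affected buffer */
	}

	int main(void)
	{
		struct sigaction sa;

		memset(&sa, 0, sizeof(sa));
		sa.sa_sigaction = sigbus_handler;
		sa.sa_flags = SA_SIGINFO;
		sigaction(SIGBUS, &sa, NULL);

		/* a dedicated thread would opt in to early AO signals with
		 * prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY,
		 *       0, 0) */
		pause();
		return 0;
	}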

    Thanks,
    Naoya Horiguchi