    Subject: [PATCH 3.16 147/178] s390/mm: fix CMMA vs KSM vs others
    3.16.46-rc1 review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Christian Borntraeger <borntraeger@de.ibm.com>

    commit a8f60d1fadf7b8b54449fcc9d6b15248917478ba upstream.

    On heavy paging with KSM I see guest data corruption. It turns out that
    KSM will add pages to its tree whose mapping returns true for
    pte_unused (or might become so later). KSM will unmap such pages
    and reinstantiate them with different attributes (e.g. write protected or
    special, as in replace_page or write_protect_page). This uncovered
    a bug in our pagetable handling: we must remove the unused flag as
    soon as an entry becomes present again.

    Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    [bwh: Backported to 3.16: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    ---
    arch/s390/include/asm/pgtable.h | 2 ++
    1 file changed, 2 insertions(+)

    --- a/arch/s390/include/asm/pgtable.h
    +++ b/arch/s390/include/asm/pgtable.h
    @@ -868,6 +868,8 @@ static inline void set_pte_at(struct mm_
     {
     	pgste_t pgste;
     
    +	if (pte_present(entry))
    +		pte_val(entry) &= ~_PAGE_UNUSED;
     	if (mm_has_pgste(mm)) {
     		pgste = pgste_get_lock(ptep);
     		pgste_val(pgste) &= ~_PGSTE_GPS_ZERO;
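
    To make the intent easier to see outside the kernel tree, here is a
    minimal, stand-alone user-space sketch of the same idea. The bit values
    and the model_set_pte_at() helper are made up for illustration and are
    not the real s390 definitions; the point is only that installing a
    present entry must clear any stale "unused" software bit.

    /*
     * Simplified user-space model of the fix, not kernel code: a "pte" is a
     * plain bitmask and model_set_pte_at() clears the software "unused" bit
     * whenever a present entry is installed, so a stale flag from an earlier
     * mapping cannot survive re-instantiation.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define MODEL_PAGE_PRESENT 0x1u  /* assumed bit layout, illustration only */
    #define MODEL_PAGE_UNUSED  0x2u

    typedef uint64_t model_pte_t;

    static void model_set_pte_at(model_pte_t *ptep, model_pte_t entry)
    {
        /* The fix: an entry that becomes present must not keep the unused bit. */
        if (entry & MODEL_PAGE_PRESENT)
            entry &= ~(model_pte_t)MODEL_PAGE_UNUSED;
        *ptep = entry;
    }

    int main(void)
    {
        /* A pte whose page was marked unused (e.g. discarded under CMMA)... */
        model_pte_t pte = MODEL_PAGE_UNUSED;

        /* ...is re-installed as present, much as KSM does in replace_page(). */
        model_set_pte_at(&pte, pte | MODEL_PAGE_PRESENT);

        printf("unused bit after reinstall: %s\n",
               (pte & MODEL_PAGE_UNUSED) ? "still set (bug)" : "cleared (fix)");
        return 0;
    }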