Subject: [PATCH 5.3 175/180] s390/smp,vdso: fix ASCE handling
From: Heiko Carstens <heiko.carstens@de.ibm.com>

commit a2308c11ecbc3471ebb7435ee8075815b1502ef0 upstream.

When a secondary CPU is brought up it must initialize its control
registers. CPU A, which triggers bringing up a secondary CPU B, stores
its own control register contents into the lowcore of the new CPU B,
which then loads these values on startup.
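
For orientation, the copy step described above boils down to a single
store on CPU A (excerpted and simplified from pcpu_prepare_secondary()
in the diff below):

	/*
	 * CPU A dumps its *own* control registers 0-15 into the new
	 * CPU's lowcore save area; CPU B later loads them verbatim,
	 * thereby inheriting CPU A's primary and secondary ASCEs.
	 */
	__ctl_store(lc->cregs_save_area, 0, 15);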

This is problematic in various ways: the control register which
contains the home space ASCE will correctly contain the kernel ASCE;
however, the control registers for the primary and secondary ASCEs are
initialized with whatever values happened to be present in CPU A.

Typically:
- the primary ASCE will contain the user process ASCE of the process
  that triggered onlining of CPU B.
- the secondary ASCE will contain the percpu VDSO ASCE of CPU A.

Due to lazy ASCE handling we may also end up with other combinations.
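
For reference (a summary added here, not part of the original commit
message): on s390 the ASCEs involved live in the following control
registers:

  CR1  - primary ASCE   (current user process address space)
  CR7  - secondary ASCE (per-CPU VDSO address space since 0aaba41b58bc)
  CR13 - home ASCE      (kernel address space)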

When CPU B then switches to a different process (!= idle) it will fix
up the primary ASCE. However, the problem is that the (wrong) ASCE
from CPU A was loaded into control register 1: as soon as an ASCE is
attached (aka loaded), a CPU is free to generate TLB entries using
that address space.
Even though it is very unlikely that CPU B will actually generate such
entries, this could result in TLB entries for the address space of the
process that ran on CPU A. These entries shouldn't exist at all and
could cause problems later on.

Furthermore, the secondary ASCE of CPU B will not be updated
correctly. This means that processes may see wrong results or even
crash if they access VDSO data on CPU B. The correct VDSO ASCE will
eventually be loaded on return to user space, as soon as the kernel
executes a call to strnlen_user or an atomic futex operation on CPU B.

Fix both issues by initializing the to-be-loaded control register
contents with the correct ASCEs, and also enforce (re-)loading of the
ASCEs upon first context switch and return to user space.
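
The enforced reload works because the s390 exit-to-user path already
checks these CIF flags and reloads the corresponding control registers
from lowcore when they are set. A sketch of that consumer side (the
real code lives in the entry path; this C rendering is illustrative,
although test_cpu_flag(), clear_cpu_flag() and __ctl_load() are the
actual s390 helpers):

	/* Illustrative only: reload CR1/CR7 from lowcore when the
	 * corresponding CIF flag was set, e.g. by smp_init_secondary().
	 */
	if (test_cpu_flag(CIF_ASCE_PRIMARY)) {
		clear_cpu_flag(CIF_ASCE_PRIMARY);
		__ctl_load(S390_lowcore.user_asce, 1, 1);	/* primary */
	}
	if (test_cpu_flag(CIF_ASCE_SECONDARY)) {
		clear_cpu_flag(CIF_ASCE_SECONDARY);
		__ctl_load(S390_lowcore.vdso_asce, 7, 7);	/* secondary */
	}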

Fixes: 0aaba41b58bc ("s390: remove all code using the access register mode")
Cc: stable@vger.kernel.org # v4.15+
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/s390/kernel/smp.c | 5 +++++
 1 file changed, 5 insertions(+)

--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -262,10 +262,13 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu)
 	lc->spinlock_index = 0;
 	lc->percpu_offset = __per_cpu_offset[cpu];
 	lc->kernel_asce = S390_lowcore.kernel_asce;
+	lc->user_asce = S390_lowcore.kernel_asce;
 	lc->machine_flags = S390_lowcore.machine_flags;
 	lc->user_timer = lc->system_timer =
 		lc->steal_timer = lc->avg_steal_timer = 0;
 	__ctl_store(lc->cregs_save_area, 0, 15);
+	lc->cregs_save_area[1] = lc->kernel_asce;
+	lc->cregs_save_area[7] = lc->vdso_asce;
 	save_access_regs((unsigned int *) lc->access_regs_save_area);
 	memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list,
 	       sizeof(lc->stfle_fac_list));
@@ -816,6 +819,8 @@ static void smp_init_secondary(void)
 
 	S390_lowcore.last_update_clock = get_tod_clock();
 	restore_access_regs(S390_lowcore.access_regs_save_area);
+	set_cpu_flag(CIF_ASCE_PRIMARY);
+	set_cpu_flag(CIF_ASCE_SECONDARY);
 	cpu_init();
 	preempt_disable();
 	init_cpu_timer();
