    Subject: [PATCH] mm/x86: AMD Bulldozer ASLR fix
    A bug has been found in the Linux ASLR implementation that affects some
    AMD processors. The issue affects all Linux processes, even those that do
    not use shared libraries (statically compiled binaries).

    The problem appears because some mmapped objects (VDSO, libraries, etc.)
    are poorly randomized in an attempt to avoid cache aliasing penalties on
    AMD Bulldozer (Family 15h) processors.

    On affected systems, the entropy of mmapped files is reduced by a factor
    of eight.

    The following output was obtained on an AMD Opteron 62xx class CPU
    running x86_64 Linux 4.0.0:

    for i in `seq 1 10`; do cat /proc/self/maps | grep "r-xp.*libc" ; done
    b7588000-b7736000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
    b7570000-b771e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
    b75d0000-b777e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
    b75b0000-b775e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
    b7578000-b7726000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6

    As shown in the output above, bits 12, 13 and 14 are always 0: every
    address ends in 0x8000 or 0x0000.
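
    A quick way to confirm this (an illustrative check, not part of the
    patch) is to OR together the pairwise XORs of the libc base addresses
    observed above: every bit that ever varies shows up in the result, and
    on an affected system the three bits covered by 0x7000 never do.

    #include <stdio.h>

    int main(void)
    {
    	/* libc base addresses from the runs above */
    	unsigned long base[] = {
    		0xb7588000, 0xb7570000, 0xb75d0000, 0xb75b0000, 0xb7578000
    	};
    	unsigned long varying = 0;
    	unsigned int i;

    	for (i = 1; i < sizeof(base) / sizeof(base[0]); i++)
    		varying |= base[0] ^ base[i];

    	printf("varying bits: %#lx\n", varying);		/* 0xf8000 */
    	printf("bits 12-14:   %#lx\n", varying & 0x7000UL);	/* 0 */
    	return 0;
    }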

    The bug is caused by a hack to improve performance by avoiding cache
    aliasing penalties on AMD Family 15h (Bulldozer) processors
    (commit dfb09f9b).

    32-bit systems are especially sensitive to this issue because the entropy
    for libraries is reduced from 2^8 to 2^5, which means that libraries have
    only 32 different places where they can be loaded.
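
    For concreteness, here is a small userspace sketch (my illustration, not
    kernel code) of where the three bits go, assuming the Bulldozer module's
    shared L1 instruction cache is 64 KB and 2-way associative: one cache way
    spans 32 KB, so the alignment mask covers bits [12..14] and the 32-bit
    mmap entropy drops from 2^8 to 2^5.

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_MASK  (~((1UL << PAGE_SHIFT) - 1))

    int main(void)
    {
    	unsigned long l1i_size = 64 << 10;	/* assumed: 64 KB shared I$ */
    	unsigned long assoc    = 2;		/* assumed: 2-way */
    	unsigned long upperbit = l1i_size / assoc;		/* 0x8000 */
    	unsigned long mask = (upperbit - 1) & PAGE_MASK;	/* 0x7000 */
    	int lost = __builtin_popcountl(mask >> PAGE_SHIFT);	/* 3 bits */

    	printf("mask = %#lx\n", mask);
    	printf("32-bit entropy: 2^8 -> 2^%d (%d positions)\n",
    	       8 - lost, 1 << (8 - lost));
    	return 0;
    }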

    This patch randomizes the three affected bits once per boot, rather than
    setting them to zero. Since all the shared pages have the same value in
    bits [12..14], there are no cache aliasing problems (which is supposed to
    be the cause of the performance loss). On the other hand, since the value
    is not known by a potential remote attacker, ASLR preserves its
    effectiveness.
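
    The mechanism can be modelled in userspace as follows (a simplified
    sketch of the patch below: getrandom(2) stands in for the kernel's
    get_random_int(), 0x7000 for va_align.mask on Family 15h, and the helper
    names are mine).

    #include <stdio.h>
    #include <sys/random.h>

    #define ALIGN_MASK 0x7000UL	/* bits [12..14] */

    static unsigned long va_align_bits;

    /* Done once "per boot": pick a random value for the masked slice. */
    static void init_align_bits(void)
    {
    	unsigned int r;

    	getrandom(&r, sizeof(r), 0);
    	va_align_bits = r & ALIGN_MASK;
    }

    static unsigned long align_addr(unsigned long addr)
    {
    	/* Old behaviour: round up, leaving bits [12..14] zero. */
    	addr = (addr + ALIGN_MASK) & ~ALIGN_MASK;
    	/* New behaviour: stamp in the per-boot value. Every mapping
    	 * gets the same bits, so no I$ aliasing is reintroduced. */
    	return addr | va_align_bits;
    }

    int main(void)
    {
    	init_align_bits();
    	printf("align_bits = %#lx\n", va_align_bits);
    	printf("0xb7571000 -> %#lx\n", align_addr(0xb7571000UL));
    	return 0;
    }

    Within one "boot" every aligned address carries the same slice value,
    while across boots the value changes, which is exactly the property the
    patch relies on.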

    More details at:

    http://hmarco.org/bugs/AMD-Bulldozer-linux-ASLR-weakness-reducing-mmaped-files-by-eight.html


    Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
    Signed-off-by: Ismael Ripoll <iripoll@disca.upv.es>
    ---
     arch/x86/include/asm/elf.h   |  1 +
     arch/x86/kernel/cpu/amd.c    |  4 ++++
     arch/x86/kernel/sys_x86_64.c | 29 ++++++++++++++++++++++++++---
     3 files changed, 31 insertions(+), 3 deletions(-)

    diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
    index ca3347a..bd292ce 100644
    --- a/arch/x86/include/asm/elf.h
    +++ b/arch/x86/include/asm/elf.h
    @@ -365,6 +365,7 @@ enum align_flags {
     struct va_alignment {
     	int flags;
     	unsigned long mask;
    +	unsigned long bits;
     } ____cacheline_aligned;
     
     extern struct va_alignment va_align;
    diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
    index 15c5df9..b4d0ddd 100644
    --- a/arch/x86/kernel/cpu/amd.c
    +++ b/arch/x86/kernel/cpu/amd.c
    @@ -5,6 +5,7 @@
     
     #include <linux/io.h>
     #include <linux/sched.h>
    +#include <linux/random.h>
     #include <asm/processor.h>
     #include <asm/apic.h>
     #include <asm/cpu.h>
    @@ -488,6 +489,9 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
     
     		va_align.mask = (upperbit - 1) & PAGE_MASK;
     		va_align.flags = ALIGN_VA_32 | ALIGN_VA_64;
    +
    +		/* A random value per boot for bit slice [12:upper_bit) */
    +		va_align.bits = get_random_int() & va_align.mask;
     	}
     }

    diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
    index 30277e2..5b3e66e 100644
    --- a/arch/x86/kernel/sys_x86_64.c
    +++ b/arch/x86/kernel/sys_x86_64.c
    @@ -34,10 +34,25 @@ static unsigned long get_align_mask(void)
     	return va_align.mask;
     }
     
    +/*
    + * To avoid aliasing in the I$ on AMD F15h, the bits defined by the
    + * va_align.mask, [12:upper_bit), are set to a random value instead of
    + * zeroing them. This random value is computed once per boot. This form
    + * of ASLR is known as "per-boot ASLR".
    + *
    + * To achieve this, the random value is added to the info.align_offset
    + * value before calling vm_unmapped_area() or ORed directly to the address.
    + */
    +static unsigned long get_align_bits(void)
    +{
    +	return va_align.bits & get_align_mask();
    +}
    +
     unsigned long align_vdso_addr(unsigned long addr)
     {
     	unsigned long align_mask = get_align_mask();
    -	return (addr + align_mask) & ~align_mask;
    +	addr = (addr + align_mask) & ~align_mask;
    +	return addr | get_align_bits();
     }
     
     static int __init control_va_addr_alignment(char *str)
    @@ -135,8 +150,12 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
     	info.length = len;
     	info.low_limit = begin;
     	info.high_limit = end;
    -	info.align_mask = filp ? get_align_mask() : 0;
    +	info.align_mask = 0;
     	info.align_offset = pgoff << PAGE_SHIFT;
    +	if (filp) {
    +		info.align_mask = get_align_mask();
    +		info.align_offset += get_align_bits();
    +	}
     	return vm_unmapped_area(&info);
     }

    @@ -174,8 +193,12 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
     	info.length = len;
     	info.low_limit = PAGE_SIZE;
     	info.high_limit = mm->mmap_base;
    -	info.align_mask = filp ? get_align_mask() : 0;
    +	info.align_mask = 0;
     	info.align_offset = pgoff << PAGE_SHIFT;
    +	if (filp) {
    +		info.align_mask = get_align_mask();
    +		info.align_offset += get_align_bits();
    +	}
     	addr = vm_unmapped_area(&info);
     	if (!(addr & ~PAGE_MASK))
     		return addr;
    --
    1.9.1

