Subject: Re: [PATCH] mm/x86: AMD Bulldozer ASLR fix
From: Kees Cook <keescook@chromium.org>
Date: Sat, 28 Mar 2015
On Fri, Mar 27, 2015 at 7:44 AM, Borislav Petkov <bp@alien8.de> wrote:
> On Fri, Mar 27, 2015 at 12:38:21PM +0100, Hector Marco-Gisbert wrote:
>> A bug in the Linux ASLR implementation which affects some AMD processors
>> has been found. The issue affects all Linux processes, even those not
>> using shared libraries (i.e., statically compiled).
>
> ...
>
>> Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
>> Signed-off-by: Ismael Ripoll <iripoll@disca.upv.es>
>
> How am I to interpret Ismael's SOB here?
>
> Did he write the patch, did he create it, ...?
>
> Because this SOB chain is incorrect in this form. We have only one
> author per commit. If you want to accredit Ismael, you can say:
>
> Originally-by: Ismael
> Based-on-a-patch-by: Ismael
> Suggested-by: ...
>
> and so on.
>
> Alternatively, you can describe in free text his involvement and thus
> have the proper attribution.
>
> Pending this, I have this final version queued:
>
> ---
> From: Hector Marco-Gisbert <hecmargi@upv.es>
> Date: Fri, 27 Mar 2015 12:38:21 +0100
> Subject: [PATCH] x86/mm: Improve AMD Bulldozer ASLR fix
>
> The ASLR implementation needs to special-case AMD F15h processors by
> clearing out bits [14:12] of the virtual address in order to avoid I$
> cross invalidations and thus a performance penalty for certain workloads.
> For details, see:
>
> dfb09f9b7ab0 ("x86, amd: Avoid cache aliasing penalties on AMD family 15h")
>
> This special case reduces the entropy of mmapped files by a factor of
> eight (3 bits).
>
> The following output is from a run on an AMD Opteron 62xx class CPU
> under x86_64 Linux 4.0.0:
>
> $ for i in `seq 1 10`; do cat /proc/self/maps | grep "r-xp.*libc" ; done
> b7588000-b7736000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
> b7570000-b771e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
> b75d0000-b777e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
> b75b0000-b775e000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
> b7578000-b7726000 r-xp 00000000 00:01 4924 /lib/i386-linux-gnu/libc.so.6
> ...
>
> Bits [14:12] are always 0, i.e., the address always ends in 0x8000 or
> 0x0000.
>
> 32-bit systems, as in the example above, are especially sensitive
> to this issue because the VA space randomness on 32-bit is only 8 bits
> (see mmap_rnd()). With the Bulldozer special case fixing 3 of those
> bits, this diminishes to only 32 different slots of mmap virtual
> addresses.
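
To see the loss directly, here is a quick user-space sketch (illustrative
only, not part of the patch): each run is a fresh process, so bits
[14:12] of a file-backed mapping should vary across runs, but on an
affected AMD F15h kernel before this fix they are always zero.

  /* aslr-bits.c -- print bits [14:12] of a file-backed mmap(). */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
  	int fd = open("/bin/true", O_RDONLY);	/* any file works */
  	void *p;

  	if (fd < 0)
  		return 1;

  	p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
  	if (p == MAP_FAILED)
  		return 1;

  	printf("%p -> bits [14:12] = %lx\n",
  	       p, ((unsigned long)p >> 12) & 0x7);
  	return 0;
  }

Run it in a loop, e.g. "for i in $(seq 1 10); do ./aslr-bits; done":
each invocation re-randomizes mmap_base, so an unaffected kernel prints
varying values, an affected one prints 0 every time, and a kernel with
this patch prints a single per-boot constant.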
>
> This patch randomizes the three affected bits once per boot rather than
> setting them to zero. Since all the shared pages still have the same
> value in bits [14:12], there are no cache aliasing problems. The value
> is generated during system boot and is thus not known to a potential
> remote attacker. Therefore, the impact of the Bulldozer workaround is
> diminished and ASLR randomness increased.
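
In other words, the alignment step changes from forcing the [14:12]
slice to zero to forcing it to a single per-boot constant. A minimal
model of the before/after computation (the mask value is assumed here
for illustration; the real one comes from va_align.mask in the patch
below):

  /* Model of the two alignment strategies; illustrative only. */
  #define F15H_ALIGN_MASK	0x7000UL	/* bits [14:12] */

  /* Before: round up, leaving bits [14:12] zero in every mapping. */
  static unsigned long align_old(unsigned long addr)
  {
  	return (addr + F15H_ALIGN_MASK) & ~F15H_ALIGN_MASK;
  }

  /*
   * After: same rounding, then OR in a value chosen randomly once per
   * boot. All mappings still share one [14:12] slice value (so no I$
   * aliasing), but that value is no longer predictable.
   */
  static unsigned long align_new(unsigned long addr, unsigned long boot_bits)
  {
  	return ((addr + F15H_ALIGN_MASK) & ~F15H_ALIGN_MASK)
  		| (boot_bits & F15H_ALIGN_MASK);
  }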
>
> More details at:
>
> http://hmarco.org/bugs/AMD-Bulldozer-linux-ASLR-weakness-reducing-mmaped-files-by-eight.html
>
> Original white paper by AMD dealing with the issue:
>
> http://developer.amd.com/wordpress/media/2012/10/SharedL1InstructionCacheonAMD15hCPU.pdf
>
> Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: x86-ml <x86@kernel.org>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Jan-Simon <dl9pf@gmx.de>
> Cc: linux-fsdevel@vger.kernel.org
> Cc: Kees Cook <keescook@chromium.org>
> Link: http://lkml.kernel.org/r/1427456301-3764-1-git-send-email-hecmargi@upv.es
> Signed-off-by: Ismael Ripoll <iripoll@disca.upv.es>
> Signed-off-by: Borislav Petkov <bp@suse.de>

Acked-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
>  arch/x86/include/asm/elf.h   |  1 +
>  arch/x86/kernel/cpu/amd.c    |  4 ++++
>  arch/x86/kernel/sys_x86_64.c | 30 +++++++++++++++++++++++++++---
>  3 files changed, 32 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
> index ca3347a9dab5..bd292ce9be0a 100644
> --- a/arch/x86/include/asm/elf.h
> +++ b/arch/x86/include/asm/elf.h
> @@ -365,6 +365,7 @@ enum align_flags {
>  struct va_alignment {
>  	int flags;
>  	unsigned long mask;
> +	unsigned long bits;
>  } ____cacheline_aligned;
>
>  extern struct va_alignment va_align;
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index a220239cea65..ec6a61b21b41 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -5,6 +5,7 @@
>
>  #include <linux/io.h>
>  #include <linux/sched.h>
> +#include <linux/random.h>
>  #include <asm/processor.h>
>  #include <asm/apic.h>
>  #include <asm/cpu.h>
> @@ -488,6 +489,9 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
>
>  		va_align.mask = (upperbit - 1) & PAGE_MASK;
>  		va_align.flags = ALIGN_VA_32 | ALIGN_VA_64;
> +
> +		/* A random value per boot for bit slice [12:upper_bit) */
> +		va_align.bits = get_random_int() & va_align.mask;
>  	}
>  }
>
> diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
> index 30277e27431a..10e0272d789a 100644
> --- a/arch/x86/kernel/sys_x86_64.c
> +++ b/arch/x86/kernel/sys_x86_64.c
> @@ -34,10 +34,26 @@ static unsigned long get_align_mask(void)
>  	return va_align.mask;
>  }
>
> +/*
> + * To avoid aliasing in the I$ on AMD F15h, the bits defined by
> + * va_align.bits, [12:upper_bit), are set to a random value instead of
> + * zeroing them. This random value is computed once per boot. This form
> + * of ASLR is known as "per-boot ASLR".
> + *
> + * To achieve this, the random value is added to the info.align_offset
> + * value before calling vm_unmapped_area() or ORed directly into the
> + * address.
> + */
> +static unsigned long get_align_bits(void)
> +{
> +	return va_align.bits & get_align_mask();
> +}
> +
>  unsigned long align_vdso_addr(unsigned long addr)
>  {
>  	unsigned long align_mask = get_align_mask();
> -	return (addr + align_mask) & ~align_mask;
> +	addr = (addr + align_mask) & ~align_mask;
> +	return addr | get_align_bits();
>  }
>
>  static int __init control_va_addr_alignment(char *str)
> @@ -135,8 +151,12 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
>  	info.length = len;
>  	info.low_limit = begin;
>  	info.high_limit = end;
> -	info.align_mask = filp ? get_align_mask() : 0;
> +	info.align_mask = 0;
>  	info.align_offset = pgoff << PAGE_SHIFT;
> +	if (filp) {
> +		info.align_mask = get_align_mask();
> +		info.align_offset += get_align_bits();
> +	}
>  	return vm_unmapped_area(&info);
>  }
>
> @@ -174,8 +194,12 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
>  	info.length = len;
>  	info.low_limit = PAGE_SIZE;
>  	info.high_limit = mm->mmap_base;
> -	info.align_mask = filp ? get_align_mask() : 0;
> +	info.align_mask = 0;
>  	info.align_offset = pgoff << PAGE_SHIFT;
> +	if (filp) {
> +		info.align_mask = get_align_mask();
> +		info.align_offset += get_align_bits();
> +	}
>  	addr = vm_unmapped_area(&info);
>  	if (!(addr & ~PAGE_MASK))
>  		return addr;
> --
> 2.3.3
>
> --
> Regards/Gruss,
> Boris.
>
> ECO tip #101: Trim your mails when you reply.
> --
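
One note on why adding get_align_bits() into info.align_offset has the
intended effect: vm_unmapped_area() returns an address whose bits under
align_mask match those of align_offset, so folding the per-boot value
into the offset makes every file mapping carry the same randomized
[14:12] slice. A minimal model of that constraint (my sketch, not the
kernel's actual search code, which walks the VMA tree):

  /*
   * Advance a candidate address until
   * (addr & align_mask) == (align_offset & align_mask).
   */
  static unsigned long align_candidate(unsigned long addr,
  				       unsigned long align_mask,
  				       unsigned long align_offset)
  {
  	return addr + ((align_offset - addr) & align_mask);
  }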



--
Kees Cook
Chrome OS Security

