Subject: Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
On 04/18/2017 02:17 PM, Tom Lendacky wrote:
> @@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
> __phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
>
> #ifndef __va
> -#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
> +#define __va(x) ((void *)(__sme_clr(x) + PAGE_OFFSET))
> #endif

It seems wrong to be modifying __va(). It currently takes a physical
address, and this modifies it to take a physical address plus the SME bits.

How would that ever end up happening? If we are pulling physical
addresses out of the page tables, we use p??_phys(). I'd expect *those*
accessors to be the ones masking off the SME bits.
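Something like this, say (the accessor name here is made up for
illustration, and __sme_clr() is the helper from this series):

	/*
	 * Illustrative sketch: strip the SME encryption bit at the
	 * point where the physical address is extracted from the
	 * page-table entry, instead of inside __va().
	 */
	static inline unsigned long pgd_page_paddr(pgd_t pgd)
	{
		/* clear the SME bit along with the low flag bits */
		return __sme_clr(pgd_val(pgd)) & PTE_PFN_MASK;
	}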

Is it cases like this?

pgd_t *base = __va(read_cr3());

For those, it seems like we really want to create two modes of reading
cr3. One that truly reads CR3 and another that reads the pgd's physical
address out of CR3. Then you only do the SME masking on the one
fetching a physical address, and the SME bits never leak into __va().
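Roughly along these lines (a sketch only; the names are not proposals
for the final API):

	/*
	 * Two modes of reading CR3:
	 *   __read_cr3()  - the raw register value, SME bit and all
	 *   read_cr3_pa() - the pgd's physical address, SME bit cleared
	 */
	static inline unsigned long __read_cr3(void)
	{
		unsigned long cr3;

		asm volatile("mov %%cr3, %0" : "=r" (cr3));
		return cr3;
	}

	static inline unsigned long read_cr3_pa(void)
	{
		return __sme_clr(__read_cr3());
	}

Then the example above becomes __va(read_cr3_pa()), and __va() itself
can stay untouched.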
