Subject: Re: [rfc 5/6] dma-direct: atomic allocations must come from unencrypted pools
On Sun, Mar 01, 2020 at 04:05:23PM -0800, David Rientjes wrote:
> When AMD memory encryption is enabled, all non-blocking DMA allocations
> must originate from the atomic pools depending on the device and the gfp
> mask of the allocation.
>
> Keep all memory in these pools unencrypted.
>
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  arch/x86/Kconfig    | 1 +
>  kernel/dma/direct.c | 9 ++++-----
>  kernel/dma/remap.c  | 2 ++
>  3 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1523,6 +1523,7 @@ config X86_CPA_STATISTICS
>  config AMD_MEM_ENCRYPT
>  	bool "AMD Secure Memory Encryption (SME) support"
>  	depends on X86_64 && CPU_SUP_AMD
> +	select DMA_DIRECT_REMAP

I think we need to split the pool from remapping so that we don't drag
in the remap code for x86.
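
Something along these lines, with DMA_COHERENT_POOL as a strawman name for
the new option: give the atomic pool its own symbol, have DMA_DIRECT_REMAP
select it, and let the x86 side select only the pool:

config DMA_COHERENT_POOL
	bool

config DMA_DIRECT_REMAP
	bool
	select DMA_REMAP
	select DMA_COHERENT_POOL

and in arch/x86/Kconfig:

config AMD_MEM_ENCRYPT
	bool "AMD Secure Memory Encryption (SME) support"
	depends on X86_64 && CPU_SUP_AMD
	select DMA_COHERENT_POOL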

>  	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> -	    dma_alloc_need_uncached(dev, attrs) &&

We still need a check here for either uncached or memory encryption.
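
I.e. keep the condition but extend it, something like this (untested, and
assuming force_dma_unencrypted() is the right check for the memory
encryption case):

	if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
	    (dma_alloc_need_uncached(dev, attrs) ||
	     force_dma_unencrypted(dev)) &&
	    !gfpflags_allow_blocking(gfp)) {
		/* ... allocate from the atomic pool as before ... */
	}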

> @@ -141,6 +142,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
>  	if (!addr)
>  		goto free_page;
> 
> +	set_memory_decrypted((unsigned long)page_to_virt(page), nr_pages);

This probably warrants a comment.
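
Something along the lines of:

	/*
	 * The atomic pools back non-blocking DMA allocations, and when
	 * memory encryption is active the device has to be able to
	 * access this memory, so keep the whole pool unencrypted.
	 */
	set_memory_decrypted((unsigned long)page_to_virt(page), nr_pages);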

Also I think the infrastructure changes should be split from the x86
wire up.
