Subject: Re: [RFC, PATCH 19/22] x86/mm: Implement free_encrypt_page()

On 03/05/2018 08:26 AM, Kirill A. Shutemov wrote:
> +void free_encrypt_page(struct page *page, int keyid, unsigned int order)
> +{
> +	int i;
> +	void *v;
> +
> +	for (i = 0; i < (1 << order); i++) {
> +		v = kmap_atomic_keyid(page, keyid + i);
> +		/* See comment in prep_encrypt_page() */
> +		clflush_cache_range(v, PAGE_SIZE);
> +		kunmap_atomic(v);
> +	}
> +}

Have you measured how slow this is?

Granted, this is only an optimization, but can we find a way to do
this dance only when we *actually* change the keyid? Right now we're
mapping at alloc and free, clflushing at free, and zeroing at alloc.
Let's say somebody does:

ptr = malloc(PAGE_SIZE);
*ptr = foo;
free(ptr);

ptr = malloc(PAGE_SIZE);
*ptr = bar;
free(ptr);

And let's say ptr is in encrypted memory and that we actually munmap()
at free(). We can theoretically skip the clflush, right?
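
To make that concrete, here is a minimal sketch of the "flush only on
keyid change" idea. It is hypothetical: page_last_keyid() and
page_set_last_keyid() are assumed per-page bookkeeping helpers (say,
backed by page_ext) that this series does not provide, and it assumes
the intent is to walk the pages of the allocation (page + i) under a
single keyid.

static void prep_encrypt_page_lazy(struct page *page, int keyid,
				   unsigned int order)
{
	int i;

	for (i = 0; i < (1 << order); i++) {
		struct page *p = page + i;
		/* Hypothetical helper: last keyid this page was used with. */
		int old_keyid = page_last_keyid(p);
		void *v;

		/* Reused under the same keyid: no stale aliased cache lines. */
		if (old_keyid == keyid)
			continue;

		/*
		 * Flush through the *old* keyid mapping so dirty lines
		 * cannot be written back after the page is handed out
		 * under the new keyid.
		 */
		v = kmap_atomic_keyid(p, old_keyid);
		clflush_cache_range(v, PAGE_SIZE);
		kunmap_atomic(v);

		/* Hypothetical helper: remember the keyid for next time. */
		page_set_last_keyid(p, keyid);
	}
}

With something like that at alloc time, the clflush in
free_encrypt_page() could go away entirely for the malloc()/free()
pattern above, since the keyid never changes.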
