Subject: Re: [RFC, PATCH 19/22] x86/mm: Implement free_encrypt_page()
From: Dave Hansen <>
Date: Mon, 5 Mar 2018 11:07:16 -0800
On 03/05/2018 08:26 AM, Kirill A. Shutemov wrote:
> +void free_encrypt_page(struct page *page, int keyid, unsigned int order)
> +{
> +	int i;
> +	void *v;
> +
> +	for (i = 0; i < (1 << order); i++) {
> +		v = kmap_atomic_keyid(page, keyid + i);
> +		/* See comment in prep_encrypt_page() */
> +		clflush_cache_range(v, PAGE_SIZE);
> +		kunmap_atomic(v);
> +	}
> +}
Have you measured how slow this is?
It's just an optimization, but can we find a way to only do this dance when we *actually* change the keyid? Right now, we're mapping at both alloc and free, clflushing at free, and zeroing at alloc. Let's say somebody does:
	ptr = malloc(PAGE_SIZE);
	*ptr = foo;
	free(ptr);

	ptr = malloc(PAGE_SIZE);
	*ptr = bar;
	free(ptr);
And let's say ptr is in encrypted memory and that we actually munmap() at free(). We can theoretically skip the clflush, right?
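Something along these lines is what I have in mind, purely as an untested sketch: remember which keyid last touched the page and only flush when a *different* keyid takes it over. page_stale_keyid() / set_page_stale_keyid() are made-up helpers for illustration and don't exist in this series; kmap_atomic_keyid() is the primitive added earlier in the set.

	/*
	 * Untested sketch, not from the series: defer the flush from free
	 * to allocation and do it only when the keyid actually changes.
	 */
	void prep_encrypt_page_lazy(struct page *page, int keyid,
				    unsigned int order)
	{
		int i;

		for (i = 0; i < (1 << order); i++) {
			struct page *p = page + i;
			/* Hypothetical helper: keyid of the previous user. */
			int old_keyid = page_stale_keyid(p);
			void *v;

			/* Same keyid as before: no stale cachelines to flush. */
			if (old_keyid == keyid)
				continue;

			/* Write back the old user's lines via the old mapping... */
			v = kmap_atomic_keyid(p, old_keyid);
			clflush_cache_range(v, PAGE_SIZE);
			kunmap_atomic(v);

			/* ...and remember who owns the page now. */
			set_page_stale_keyid(p, keyid);
		}
	}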