Subject: Re: [Part1 PATCH v6 14/17] x86: Add support for changing memory encryption attribute in early boot
On Mon, Oct 16, 2017 at 10:34:20AM -0500, Brijesh Singh wrote:
> Some KVM-specific custom MSRs are used to share a guest physical address
> with the hypervisor in early boot. When SEV is active, the shared physical
> address must be mapped with the memory encryption attribute (C-bit) cleared
> so that both the hypervisor and the guest can access the data.
>
> Add APIs to change the memory encryption attribute in early boot code.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: x86@kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> ---
>
> Changes since v5:
>
early_set_memory_enc_dec() is enhanced to perform the encrypt/decrypt and the
C-bit change in one go. This shields the caller from having to check the
C-bit status before changing it, and also keeps the OS from blindly
converting a page.
>
> Boris,
>
> I removed your R-b since I was not sure you are okay with the above changes.
> Please let me know if you are okay with them. Thanks.

Looks ok, you can re-add it; below are only minor comment corrections.
Just send v6.1 as a reply to this message.

Thx.
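
For context, a minimal sketch of how an early-boot caller might use the API
this patch adds. This is not from the series: the MSR name and the shared
page are made up, and the helper's signature (physical address plus size)
follows the version posted here.

	/* Illustrative only -- hypothetical caller of the new API. */
	static u8 hv_shared_page[PAGE_SIZE] __aligned(PAGE_SIZE);

	static void __init example_share_page_with_hv(void)
	{
		/*
		 * Clear the C-bit on the page's mapping. The helper also
		 * decrypts the contents in place, so the caller need not
		 * check the current C-bit state first.
		 */
		if (early_set_memory_decrypted(__pa(hv_shared_page),
					       sizeof(hv_shared_page)))
			return;

		/* Hand the now-shared GPA to the hypervisor via a custom MSR. */
		wrmsrl(MSR_EXAMPLE_SHARED_PAGE, __pa(hv_shared_page));
	}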

---
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index b671e91e6a1f..53d11b4d74b7 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -291,7 +291,7 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 	else
 		pgprot_val(new_prot) &= ~_PAGE_ENC;
 
-	/* if prot is same then do nothing */
+	/* If prot is the same then do nothing. */
 	if (pgprot_val(old_prot) == pgprot_val(new_prot))
 		return;

@@ -299,19 +299,19 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 	size = page_level_size(level);
 
 	/*
-	 * We are going to perform in-place encrypt/decrypt and change the
+	 * We are going to perform in-place en-/decryption and change the
 	 * physical page attribute from C=1 to C=0 or vice versa. Flush the
-	 * caches to ensure that data gets accessed with correct C-bit.
+	 * caches to ensure that data gets accessed with the correct C-bit.
 	 */
 	clflush_cache_range(__va(pa), size);
 
-	/* encrypt/decrypt the contents in-place */
+	/* Encrypt/decrypt the contents in-place. */
 	if (enc)
 		sme_early_encrypt(pa, size);
 	else
 		sme_early_decrypt(pa, size);
 
-	/* change the page encryption mask */
+	/* Change the page encryption mask. */
 	new_pte = pfn_pte(pfn, new_prot);
 	set_pte_atomic(kpte, new_pte);
 }
@@ -322,8 +322,8 @@ static int __init early_set_memory_enc_dec(resource_size_t paddr,
 	unsigned long vaddr, vaddr_end, vaddr_next;
 	unsigned long psize, pmask;
 	int split_page_size_mask;
-	pte_t *kpte;
 	int level, ret;
+	pte_t *kpte;
 
 	vaddr = (unsigned long)__va(paddr);
 	vaddr_next = vaddr;
@@ -346,24 +346,23 @@ static int __init early_set_memory_enc_dec(resource_size_t paddr,
 		pmask = page_level_mask(level);
 
 		/*
-		 * Check, whether we can change the large page in one go.
-		 * We request a split, when the address is not aligned and
+		 * Check whether we can change the large page in one go.
+		 * We request a split when the address is not aligned or
 		 * the number of pages to set/clear encryption bit is smaller
 		 * than the number of pages in the large page.
 		 */
 		if (vaddr == (vaddr & pmask) &&
-			((vaddr_end - vaddr) >= psize)) {
+		    ((vaddr_end - vaddr) >= psize)) {
 			__set_clr_pte_enc(kpte, level, enc);
 			vaddr_next = (vaddr & pmask) + psize;
 			continue;
 		}
 
 		/*
-		 * virtual address is part of large page, create the page table
-		 * mapping to use smaller pages (4K or 2M). If virtual address
-		 * is part of 2M page the we request spliting the large page
-		 * into 4K, similarly 1GB large page is requested to split into
-		 * 2M pages.
+		 * The virtual address is part of a larger page; create the
+		 * next-level page table mapping (4K or 2M). If it is part of
+		 * a 2M page, we request a split of the large page into 4K
+		 * chunks; a 1GB large page is likewise split into 2M pages.
 		 */
 		if (level == PG_LEVEL_2M)
 			split_page_size_mask = 0;
@@ -376,6 +375,7 @@ static int __init early_set_memory_enc_dec(resource_size_t paddr,
 	}
 
 	ret = 0;
+
 out:
 	__flush_tlb_all();
 	return ret;
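
As a quick aside on the hunk above: the "change in one go" test just checks
that vaddr sits on a large-page boundary and that the remaining range covers
the whole large page. A standalone sketch (plain userspace C, not kernel
code; 2M values assumed) of the same check:

	#include <stdio.h>

	#define PSIZE_2M (2UL << 20)		/* like page_level_size(PG_LEVEL_2M) */
	#define PMASK_2M (~(PSIZE_2M - 1))	/* like page_level_mask(PG_LEVEL_2M) */

	/* Mirrors the alignment + coverage test in early_set_memory_enc_dec(). */
	static int can_change_in_one_go(unsigned long vaddr, unsigned long vaddr_end,
					unsigned long psize, unsigned long pmask)
	{
		return vaddr == (vaddr & pmask) && (vaddr_end - vaddr) >= psize;
	}

	int main(void)
	{
		/* 2M-aligned start, full 2M range: flip the PMD in place (prints 1). */
		printf("%d\n", can_change_in_one_go(0x200000, 0x400000,
						    PSIZE_2M, PMASK_2M));

		/* Start 4K into a 2M page: must split into 4K pages first (prints 0). */
		printf("%d\n", can_change_in_one_go(0x201000, 0x202000,
						    PSIZE_2M, PMASK_2M));
		return 0;
	}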
--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
