From: Nikunj A Dadhania <nikunj@amd.com>
Subject: [PATCH v8 16/16] x86/sev: Enable Secure TSC for SNP guests
Date: 15 Feb 2024
Now that all the required plumbing is in place for enabling the SNP
Secure TSC feature, add Secure TSC to the SNP features-present list
(SNP_FEATURES_PRESENT).
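
For reference, the boot code compares the hypervisor-advertised
SEV_STATUS bits against this mask and terminates the guest if an
unimplemented feature is active. A minimal sketch of that consumption
pattern (the in-tree snp_get_unsupported_features() body may differ in
detail; SNP_FEATURES_IMPL_REQ is the existing mask of
implementation-required features in sev.c):

  u64 snp_get_unsupported_features(u64 status)
  {
          /* Not an SNP guest: nothing to enforce. */
          if (!(status & MSR_AMD64_SEV_SNP_ENABLED))
                  return 0;

          /* Active and required, but not implemented by this kernel. */
          return status & SNP_FEATURES_IMPL_REQ & ~SNP_FEATURES_PRESENT;
  }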

Set the CPUID feature bit (X86_FEATURE_SNP_SECURE_TSC) when the SNP
guest is started with Secure TSC enabled.
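
With the synthetic feature bit forced on, later code can gate Secure
TSC specific paths with the usual cpu_feature_enabled() check, for
example (the helper called below is hypothetical, for illustration
only):

  if (cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
          secure_tsc_setup();     /* hypothetical helper */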

Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
Tested-by: Peter Gonda <pgonda@google.com>
---
 arch/x86/boot/compressed/sev.c |  3 ++-
 arch/x86/mm/mem_encrypt.c      | 10 ++++++++--
 arch/x86/mm/mem_encrypt_amd.c  |  4 +++-
 3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 073291832f44..d7e28084333a 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -379,7 +379,8 @@ static void enforce_vmpl0(void)
  * by the guest kernel. As and when a new feature is implemented in the
  * guest kernel, a corresponding bit should be added to the mask.
  */
-#define SNP_FEATURES_PRESENT MSR_AMD64_SNP_DEBUG_SWAP
+#define SNP_FEATURES_PRESENT (MSR_AMD64_SNP_DEBUG_SWAP | \
+                              MSR_AMD64_SNP_SECURE_TSC)
 
 u64 snp_get_unsupported_features(u64 status)
 {
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 68aa06852466..350ba605509d 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -70,8 +70,14 @@ static void print_mem_encrypt_feature_info(void)
                         pr_cont(" SEV-ES");
 
                 /* Secure Nested Paging */
-                if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
-                        pr_cont(" SEV-SNP");
+                if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP)) {
+                        pr_cont(" SEV-SNP\n");
+                        pr_cont("SNP Features active: ");
+
+                        /* SNP Secure TSC */
+                        if (cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
+                                pr_cont(" SECURE-TSC");
+                }
 
                 pr_cont("\n");
                 break;
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index cc936999efc8..7ee0a537a22e 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -500,8 +500,10 @@ void __init sme_early_init(void)
                 ia32_disable();
 
         /* Mark the TSC as reliable when Secure TSC is enabled */
-        if (sev_status & MSR_AMD64_SNP_SECURE_TSC)
+        if (sev_status & MSR_AMD64_SNP_SECURE_TSC) {
+                setup_force_cpu_cap(X86_FEATURE_SNP_SECURE_TSC);
                 setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
+        }
 }
 
 void __init mem_encrypt_free_decrypted_mem(void)
--
2.34.1
