Subject: Re: [PATCH v2 08/24] kvm: arm64: Add SMC handler in nVHE EL2
On Mon, 16 Nov 2020 20:43:02 +0000,
David Brazdil <dbrazdil@google.com> wrote:
>
> Add a handler for host SMCs to the KVM nVHE trap handler. Forward all
> SMCs to EL3 and propagate the result back to EL1. This is done in
> preparation for validating host SMCs in KVM nVHE protected mode.
>
> The implementation assumes that firmware uses SMCCC v1.2 or older,
> which means x0-x17 can be used both for arguments and results, while
> all other GPRs are preserved.
>
> Signed-off-by: David Brazdil <dbrazdil@google.com>
> ---
> arch/arm64/kvm/hyp/nvhe/host.S | 38 ++++++++++++++++++++++++++++++
> arch/arm64/kvm/hyp/nvhe/hyp-main.c | 26 ++++++++++++++++++++
> 2 files changed, 64 insertions(+)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> index ed27f06a31ba..52dae5cd5a28 100644
> --- a/arch/arm64/kvm/hyp/nvhe/host.S
> +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> @@ -183,3 +183,41 @@ SYM_CODE_START(__kvm_hyp_host_vector)
> invalid_host_el1_vect // FIQ 32-bit EL1
> invalid_host_el1_vect // Error 32-bit EL1
> SYM_CODE_END(__kvm_hyp_host_vector)
> +
> +/*
> + * Forward SMC with arguments in struct kvm_cpu_context, and
> + * store the result into the same struct. Assumes SMCCC 1.2 or older.
> + *
> + * x0: struct kvm_cpu_context*
> + */
> +SYM_CODE_START(__kvm_hyp_host_forward_smc)
> + /*
> + * Use x18 to keep a pointer to the host context because x18
> + * is callee-saved in SMCCC but not in AAPCS64.
> + */
> + mov x18, x0
> +
> + ldp x0, x1, [x18, #CPU_XREG_OFFSET(0)]
> + ldp x2, x3, [x18, #CPU_XREG_OFFSET(2)]
> + ldp x4, x5, [x18, #CPU_XREG_OFFSET(4)]
> + ldp x6, x7, [x18, #CPU_XREG_OFFSET(6)]
> + ldp x8, x9, [x18, #CPU_XREG_OFFSET(8)]
> + ldp x10, x11, [x18, #CPU_XREG_OFFSET(10)]
> + ldp x12, x13, [x18, #CPU_XREG_OFFSET(12)]
> + ldp x14, x15, [x18, #CPU_XREG_OFFSET(14)]
> + ldp x16, x17, [x18, #CPU_XREG_OFFSET(16)]
> +
> + smc #0
> +
> + stp x0, x1, [x18, #CPU_XREG_OFFSET(0)]
> + stp x2, x3, [x18, #CPU_XREG_OFFSET(2)]
> + stp x4, x5, [x18, #CPU_XREG_OFFSET(4)]
> + stp x6, x7, [x18, #CPU_XREG_OFFSET(6)]
> + stp x8, x9, [x18, #CPU_XREG_OFFSET(8)]
> + stp x10, x11, [x18, #CPU_XREG_OFFSET(10)]
> + stp x12, x13, [x18, #CPU_XREG_OFFSET(12)]
> + stp x14, x15, [x18, #CPU_XREG_OFFSET(14)]
> + stp x16, x17, [x18, #CPU_XREG_OFFSET(16)]
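
For reference, the hyp-main.c hunk is not quoted here. A minimal
sketch of the C side, assuming a handle_host_smc() dispatcher and a
hypothetical kvm_skip_host_instr() helper (names are illustrative,
not taken from the actual hunk):

	/* Implemented in host.S above. */
	void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);

	static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
	{
		/* Forward x0-x17 to EL3 and write the results back. */
		__kvm_hyp_host_forward_smc(host_ctxt);

		/*
		 * A trapped SMC is not retired; ELR_EL2 points at the
		 * SMC instruction itself, so skip it before returning
		 * to the host.
		 */
		kvm_skip_host_instr();
	}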

This is going to be really good for CPUs that need to use ARCH_WA1 for
their Spectre-v2 mitigation... :-( If that's too expensive, we may
have to reduce the number of saved/restored registers, but I'm worried
the battle is already lost by the time we reach this point (the host
trap path is already a huge hammer).

Eventually, we'll have to insert the mitigation in the vectors anyway,
just like we have on the guest exit path. Boo.
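
For scale: the ARCH_WA1 mitigation is itself a firmware call, along
the lines of what the host kernel already does (cf.
arch/arm64/kernel/proton-pack.c):

	#include <linux/arm-smccc.h>

	/* Spectre-v2 branch predictor hardening via firmware. */
	static void call_smc_arch_workaround_1(void)
	{
		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
	}

so putting it on this path would mean two EL3 round trips per
forwarded SMC: one for the workaround, one for the SMC itself.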

Thanks,
M.

--
Without deviation from the norm, progress is not possible.
