Subject: Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub
From: Ross Philipson
Date: 2020-09-25
On 9/24/20 1:38 PM, Arvind Sankar wrote:
> On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
>> The Secure Launch (SL) stub provides the entry point for Intel TXT (and
>> later AMD SKINIT) to vector to during the late launch. The symbol
>> sl_stub_entry is that entry point and its offset into the kernel is
>> conveyed to the launching code using the MLE (Measured Launch
>> Environment) header in the structure named mle_header. The offset of the
>> MLE header is set in the kernel_info. The routine sl_stub contains the
>> very early late launch setup code responsible for setting up the basic
>> environment to allow the normal kernel startup_32 code to proceed. It is
>> also responsible for properly waking and handling the APs on Intel
>> platforms. The routine sl_main, which runs after entering 64-bit mode,
>> is responsible for measuring configuration and module information
>> before it is used, e.g. the boot params, the kernel command line, the
>> TXT heap, an external initramfs, etc.
>>
>> Signed-off-by: Ross Philipson <ross.philipson@oracle.com>
>
> Which version of the kernel is this based on?

The master branch of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

>
>> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
>> index 97d37f0..42043bf 100644
>> --- a/arch/x86/boot/compressed/head_64.S
>> +++ b/arch/x86/boot/compressed/head_64.S
>> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
>> SYM_FUNC_END(efi32_stub_entry)
>> #endif
>>
>> +#ifdef CONFIG_SECURE_LAUNCH
>> +SYM_FUNC_START(sl_stub_entry)
>> + /*
>> + * On entry, %ebx has the entry abs offset to sl_stub_entry. To
>> + * find the beginning of where we are loaded, sub off from the
>> + * beginning.
>> + */
>
> This requirement should be added to the documentation. Is it necessary
> or can this stub just figure out the address the same way as the other
> 32-bit entry points, using the scratch space in bootparams as a little
> stack?

It is based on the state of the BSP when TXT vectors to the measured
launch environment; that state is documented in the TXT spec and the SDMs.
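
For reference, I take it you mean the BP_scratch call/pop trick that
startup_32 uses to find its own load address, roughly the following
(from memory of head_64.S, so treat it as a sketch rather than the
exact upstream code):

	/* Borrow the 4-byte scratch field in boot_params as a tiny stack */
	leal	(BP_scratch+4)(%esi), %esp
	call	1f
1:	popl	%ebp
	subl	$1b, %ebp	/* %ebp = delta between link and load address */

What we get at sl_stub_entry is defined by the TXT launch state
instead, so the %ebx-based calculation in the stub is the documented
way for us to find where we are loaded.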

>
>> + leal (startup_32 - sl_stub_entry)(%ebx), %ebx
>> +
>> + /* More room to work in sl_stub in the text section */
>> + jmp sl_stub
>> +
>> +SYM_FUNC_END(sl_stub_entry)
>> +#endif
>> +
>> .code64
>> .org 0x200
>> SYM_CODE_START(startup_64)
>> @@ -537,6 +552,25 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
>> shrq $3, %rcx
>> rep stosq
>>
>> +#ifdef CONFIG_SECURE_LAUNCH
>> + /*
>> + * Have to do the final early sl stub work in 64b area.
>> + *
>> + * *********** NOTE ***********
>> + *
>> + * Several boot params get used before we get a chance to measure
>> + * them in this call. This is a known issue and we currently don't
>> + * have a solution. The scratch field doesn't matter and loadflags
>> + * have KEEP_SEGMENTS set by the stub code. There is no obvious way
>> + * to do anything about the use of kernel_alignment or init_size
>> + * though these seem low risk.
>> + */
>
> There are various fields in bootparams that depend on where the
> kernel/initrd and cmdline are loaded in memory. If the entire bootparams
> page is getting measured, does that mean they all have to be at fixed
> addresses on every boot?

Yes, that is a very good point. In other places where we measure a
structure, we make sure to skip fields like addresses and sizes that
refer to memory outside of the structure being measured. The same
needs to be done for the boot params.
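
Roughly what I have in mind, as a sketch only (the exact field list
still needs to be worked out, sl_tpm_extend_pcr() is just a stand-in
for whatever the real measurement call ends up being, and the copy
probably should not live on the stack in the real code):

	static void sl_measure_boot_params(struct boot_params *bp, u32 pcr)
	{
		struct boot_params scratch = *bp;

		/* Zero fields that refer to memory outside the structure */
		scratch.hdr.code32_start  = 0;
		scratch.hdr.cmd_line_ptr  = 0;
		scratch.hdr.ramdisk_image = 0;
		scratch.hdr.ramdisk_size  = 0;
		scratch.ext_cmd_line_ptr  = 0;
		scratch.ext_ramdisk_image = 0;
		scratch.ext_ramdisk_size  = 0;

		/* Hash and extend the sanitized copy, not the live boot params */
		sl_tpm_extend_pcr(pcr, (u8 *)&scratch, sizeof(scratch));
	}

That way the measurement stays stable across boots even when the
bootloader places the initrd and command line at different addresses.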

>
> Also KEEP_SEGMENTS support is gone from the kernel since v5.7, since it
> was unused. startup_32 now always loads a GDT and then the segment
> registers. I think this should be ok for you as the only thing the flag
> used to do in the 64-bit kernel was to stop startup_32 from blindly
> loading __BOOT_DS into the segment registers before it had setup its own
> GDT.

Yes, this was there to prevent that blind loading of __BOOT_DS. I see
that it is gone now, so I will remove the comment and the code that
sets the flag.

>
> For the 32-bit assembler code that's being added, tip/master now has
> changes that prevent the compressed kernel from having any runtime
> relocations. You'll need to revise some of the code and the data
> structures initial values to avoid creating relocations.

Could you elaborate on this some more? I am not sure I see places in the
secure launch asm that would be creating relocations like this.
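
If the concern is absolute 32-bit references, i.e. something like the
difference between the two forms below (my rough understanding of the
new rva() convention in head_64.S; sl_some_symbol is just a
placeholder), then pointing me at the offending spots would help:

	/* absolute reference to the symbol, needs a runtime relocation */
	movl	$sl_some_symbol, %eax

	/* position independent: offset from startup_32 plus the load
	   address already held in %ebp */
	leal	rva(sl_some_symbol)(%ebp), %eax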

Thank you,
Ross

>
> Thanks.
>
