Subject: Re: Review of KPTI patchset

On Sat, 30 Dec 2017, Mathieu Desnoyers wrote:

> Hi Thomas,
>
> Here is some feedback on the KPTI patchset. Sorry for not replying to the
> patch, I was not CC'd on the original email, and don't have it in my inbox.

I can bounce you 196 versions if you want.

> I notice that fill_ldt() sets the desc->type with "|= 1", whereas all
> other operations on the desc type are done with a type enum based on
> clearly defined bits. Is the hardcoded "1" on purpose ?

I don't understand your question. That code does not have any enum involved
at all:

	desc->type  = (info->read_exec_only ^ 1) << 1;
	desc->type |= info->contents << 2;
	/* Set the ACCESS bit so it can be mapped RO */
	desc->type |= 1;

So the |= 1 is completely consistent with the rest of that code.
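
For reference, my reading of how those bits land in the hardware type field (SDM layout for S=1 code/data descriptors, not something the patch changes):

	/*
	 * Descriptor type field for S=1 (code/data) segments:
	 *
	 *   bit 0     Accessed           -> the "|= 1", so the LDT page can be mapped RO
	 *   bit 1     Readable/Writable  -> (info->read_exec_only ^ 1) << 1
	 *   bits 2-3  info->contents     -> expand-down/conforming, code vs. data
	 */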

> arch/x86/include/asm/processor.h:
>
> "+ * With page table isolation enabled, we map the LDT in ... [stay tuned]"
>
> I look forward to publication of the next chapter containing the rest of
> this sentence. When is it due ? ;)

Don't know. Lost my crystal ball.

> +static void free_ldt_pgtables(struct mm_struct *mm)
> +{
> +#ifdef CONFIG_PAGE_TABLE_ISOLATION
> +	struct mmu_gather tlb;
> +	unsigned long start = LDT_BASE_ADDR;
> +	unsigned long end = start + (1UL << PGDIR_SHIFT);
> +
> +	if (!static_cpu_has(X86_FEATURE_PTI))
> +		return;
> +
> +	tlb_gather_mmu(&tlb, mm, start, end);
> +	free_pgd_range(&tlb, start, end, start, end);
> +	tlb_finish_mmu(&tlb, start, end);
> +#endif
>
> ^ AFAIK, the usual approach is to move the #ifdef outside of the function body,
> and have one empty function.

That really depends. If you have several such functions it makes sense; if you
have only one, not so much.
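
For completeness, the variant you mean is just the quoted function reshuffled, nothing new (untested sketch):

#ifdef CONFIG_PAGE_TABLE_ISOLATION
static void free_ldt_pgtables(struct mm_struct *mm)
{
	struct mmu_gather tlb;
	unsigned long start = LDT_BASE_ADDR;
	unsigned long end = start + (1UL << PGDIR_SHIFT);

	if (!static_cpu_has(X86_FEATURE_PTI))
		return;

	tlb_gather_mmu(&tlb, mm, start, end);
	free_pgd_range(&tlb, start, end, start, end);
	tlb_finish_mmu(&tlb, start, end);
}
#else
static void free_ldt_pgtables(struct mm_struct *mm) { }
#endif

For a single function that buys nothing; for a group of PTI-only helpers it would.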

> @@ -156,6 +271,12 @@ int ldt_dup_context(struct mm_struct *old_mm, struct mm_struct *mm)
>  	       new_ldt->nr_entries * LDT_ENTRY_SIZE);
>  	finalize_ldt_struct(new_ldt);
>
> +	retval = map_ldt_struct(mm, new_ldt, 0);
> +	if (retval) {
> +		free_ldt_pgtables(mm);
> +		free_ldt_struct(new_ldt);
> +		goto out_unlock;
> +	}
>  	mm->context.ldt = new_ldt;
>
>  out_unlock:
>
> ^ I don't get why it does "free_ldt_pgtables(mm)" on the mm argument, but
> it's not done in other error paths. Perhaps it's OK, but ownership seems
> non-obvious.

The page table for the LDT is allocated and populated in the user-space visible
part of the process's PGDIR, which obviously is connected to the mm struct....

Which other error paths are you talking about?

> @@ -287,6 +413,18 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
>  	new_ldt->entries[ldt_info.entry_number] = ldt;
>  	finalize_ldt_struct(new_ldt);
>
> +	/*
> +	 * If we are using PTI, map the new LDT into the userspace pagetables.
> +	 * If there is already an LDT, use the other slot so that other CPUs
> +	 * will continue to use the old LDT until install_ldt() switches
> +	 * them over to the new LDT.
> +	 */
> +	error = map_ldt_struct(mm, new_ldt, old_ldt ? !old_ldt->slot : 0);
> +	if (error) {
> +		free_ldt_struct(old_ldt);
> +		goto out_unlock;
> +	}
> +
>
> ^ is it really "old_ldt" that we want freed on error here ? Or should it be
> "new_ldt" ?

Ouch. Yes, that wants to be new_ldt indeed.
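
Presumably the hunk then wants to read (sketch of just that fix, nothing else changed):

	error = map_ldt_struct(mm, new_ldt, old_ldt ? !old_ldt->slot : 0);
	if (error) {
		free_ldt_struct(new_ldt);
		goto out_unlock;
	}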

> +	/*
> +	 * Force the population of PMDs for not yet allocated per cpu
> +	 * memory like debug store buffers.
> +	 */
> +	npages = sizeof(struct debug_store_buffers) / PAGE_SIZE;
> +	for (; npages; npages--, cea += PAGE_SIZE)
> +		cea_set_pte(cea, 0, PAGE_NONE);
>
> ^ the code above (in percpu_setup_debug_store()) depends on having
> struct debug_store_buffers's size being a multiple of PAGE_SIZE. A
> comment should be added near the structure declaration to document
> this requirement.

Hmm. There was a BUILD_BUG_ON() somewhere which ensured that. That must
have been lost in one of the gazillion iterations.
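
Something along these lines, presumably at the top of percpu_setup_debug_store(), would bring it back (sketch, not the exact assertion that got dropped):

	/*
	 * The mapping loop relies on the buffers being an exact multiple
	 * of PAGE_SIZE, so enforce that at compile time.
	 */
	BUILD_BUG_ON(sizeof(struct debug_store_buffers) % PAGE_SIZE != 0);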

> +static void __init pti_setup_espfix64(void)
> +{
> +#ifdef CONFIG_X86_ESPFIX64
> +	pti_clone_p4d(ESPFIX_BASE_ADDR);
> +#endif
> +}
>
> Seeing how this ifdef within function layout is everywhere in the patch,
> I start to wonder whether I missed a coding style guideline somewhere... ?

I don't see how extra empty functions would improve that, but that's a
pointless debate.

> +/*
> + * We get here when we do something requiring a TLB invalidation
> + * but could not go invalidate all of the contexts. We do the
> + * necessary invalidation by clearing out the 'ctx_id' which
> + * forces a TLB flush when the context is loaded.
> + */
> +void clear_asid_other(void)
> +{
> +	u16 asid;
> +
> +	/*
> +	 * This is only expected to be set if we have disabled
> +	 * kernel _PAGE_GLOBAL pages.
> +	 */
> +	if (!static_cpu_has(X86_FEATURE_PTI)) {
> +		WARN_ON_ONCE(1);
> +		return;
> +	}
> +
> +	for (asid = 0; asid < TLB_NR_DYN_ASIDS; asid++) {
> +		/* Do not need to flush the current asid */
> +		if (asid == this_cpu_read(cpu_tlbstate.loaded_mm_asid))
> +			continue;
> +		/*
> +		 * Make sure the next time we go to switch to
> +		 * this asid, we do a flush:
> +		 */
> +		this_cpu_write(cpu_tlbstate.ctxs[asid].ctx_id, 0);
> +	}
> +	this_cpu_write(cpu_tlbstate.invalidate_other, false);
> +}
> Can this be called with preemption enabled ? If so, what happens
> if migrated ?

No, it can't, and if it is then it's a bug and the smp_processor_id() debug
code will yell at you.
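
IOW, callers are expected to look roughly like this (a sketch of the expected usage, not an actual call site):

	/*
	 * Preemption (or interrupts) must be off, otherwise the per-cpu
	 * state could belong to a different CPU after a migration.
	 */
	preempt_disable();
	if (this_cpu_read(cpu_tlbstate.invalidate_other))
		clear_asid_other();
	preempt_enable();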

Thanks,

tglx
