Subject: Re: [PATCH v3 5/9] x86/mm/tlb: Privatize cpu_tlbstate
On 7/18/19 5:58 PM, Nadav Amit wrote:
> +struct tlb_state_shared {
> +	/*
> +	 * We can be in one of several states:
> +	 *
> +	 *  - Actively using an mm. Our CPU's bit will be set in
> +	 *    mm_cpumask(loaded_mm) and is_lazy == false;
> +	 *
> +	 *  - Not using a real mm. loaded_mm == &init_mm. Our CPU's bit
> +	 *    will not be set in mm_cpumask(&init_mm) and is_lazy == false.
> +	 *
> +	 *  - Lazily using a real mm. loaded_mm != &init_mm, our bit
> +	 *    is set in mm_cpumask(loaded_mm), but is_lazy == true.
> +	 *    We're heuristically guessing that the CR3 load we
> +	 *    skipped more than makes up for the overhead added by
> +	 *    lazy mode.
> +	 */
> +	bool is_lazy;
> +};
> +DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);

Could we get a comment about what "shared" means and why we need shared
state?

Should we change 'tlb_state' to 'tlb_state_private'?
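
For the comment, something like this is what I had in mind -- just a
sketch, and I'm assuming the point of the split is that these are the
fields *other* CPUs read remotely (like the is_lazy check when picking
flush IPI targets):

	/*
	 * cpu_tlbstate is private: only the owning CPU reads or
	 * writes it.  cpu_tlbstate_shared holds the fields that
	 * remote CPUs read, e.g. is_lazy when deciding whether a
	 * CPU's flush IPI can be skipped.  Keeping it separate and
	 * cacheline-aligned means remote readers don't bounce the
	 * cachelines holding the private state.
	 */
	struct tlb_state_shared {
		bool is_lazy;
	};
	DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);

	/* The owning CPU writes its own copy locally: */
	this_cpu_write(cpu_tlbstate_shared.is_lazy, true);

	/* Remote CPUs read it, say while choosing IPI targets: */
	if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
		; /* candidate for skipping the flush IPI */

If that private-vs-remotely-read split is really what is going on,
spelling it out in the naming would make the distinction obvious at
the declaration site.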
