On Thu, Jun 22, 2017 at 9:09 AM, Nadav Amit <nadav.amit@gmail.com> wrote:
> Andy Lutomirski <luto@kernel.org> wrote:
>
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -812,6 +812,7 @@ void __init zone_sizes_init(void)
>>
>>  DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate) = {
>>  	.loaded_mm = &init_mm,
>> +	.next_asid = 1,
>
> I think this is a remnant from a previous version of the patches, no? It
> does not seem necessary and may be confusing (ctx_id 0 is reserved, but not
> asid 0).
Hmm. It's no longer needed for correctness, but init_mm still lands in slot 0, and it seems friendly to avoid immediately stomping it. Admittedly, this won't make any practical difference since it'll only happen once per cpu.
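[Editor's note: for illustration, a minimal userspace model of the round-robin
ASID allocation under discussion. The names, the allocator function, and the
TLB_NR_DYN_ASIDS value are stand-ins, not the series' actual code in
arch/x86/mm/tlb.c; the point is just to show why starting next_asid at 1
leaves slot 0, where init_mm sits, alone until the counter wraps.]

	#include <stdio.h>

	/* Illustrative dynamic-slot count; a stand-in for the real constant. */
	#define TLB_NR_DYN_ASIDS 6

	/* Toy stand-in for the per-cpu tlb_state being initialized above. */
	struct tlb_state_model {
		unsigned short next_asid;
	};

	/* Hand out dynamic ASID slots in order, wrapping around. */
	static unsigned short pick_new_asid(struct tlb_state_model *ts)
	{
		unsigned short asid = ts->next_asid;

		ts->next_asid = (asid + 1) % TLB_NR_DYN_ASIDS;
		return asid;
	}

	int main(void)
	{
		/* Boot state under discussion: init_mm occupies slot 0. */
		struct tlb_state_model ts = { .next_asid = 1 };
		int i;

		for (i = 0; i < 8; i++)
			printf("allocation %d -> asid %u\n", i,
			       pick_new_asid(&ts));
		return 0;
	}

With .next_asid = 1 the model hands out 1, 2, 3, 4, 5, and only then wraps to
0, so the init_mm entry in slot 0 is stomped exactly once per CPU, on the
first wraparound, matching the behavior described above.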
>
> Other than that, if you want, you can put for the entire series:
>
> Reviewed-by: Nadav Amit <nadav.amit@gmail.com>
>
Thanks!