Subject: Re: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
On Thu, May 09, 2019 at 11:01:31AM +0200, Ingo Molnar wrote:
>
> * Yury Norov <yury.norov@gmail.com> wrote:
>
> > __VIRTUAL_MASK_SHIFT is defined twice to the same value in
> > arch/x86/include/asm/page_32_types.h. Fix it.
> >
> > Signed-off-by: Yury Norov <ynorov@marvell.com>
> > ---
> > arch/x86/include/asm/page_32_types.h | 5 ++---
> > 1 file changed, 2 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> > index 0d5c739eebd7..9bfac5c80d89 100644
> > --- a/arch/x86/include/asm/page_32_types.h
> > +++ b/arch/x86/include/asm/page_32_types.h
> > @@ -28,6 +28,8 @@
> > #define MCE_STACK 0
> > #define N_EXCEPTION_STACKS 1
> >
> > +#define __VIRTUAL_MASK_SHIFT 32
> > +
> > #ifdef CONFIG_X86_PAE
> > /*
> > * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> > @@ -36,11 +38,8 @@
> > * The real limit is still 44 bits.
> > */
> > #define __PHYSICAL_MASK_SHIFT 52
> > -#define __VIRTUAL_MASK_SHIFT 32
> > -
> > #else /* !CONFIG_X86_PAE */
> > #define __PHYSICAL_MASK_SHIFT 32
> > -#define __VIRTUAL_MASK_SHIFT 32
> > #endif /* CONFIG_X86_PAE */
>
> I think it's clearer to see them defined where the physical mask shift is
> defined.
>
> How about the patch below? It does away with the weird formatting and
> cleans up both the comments and the style of the definition:
>
> /*
> * 52 bits on PAE is beyond the 44-bit limit imposed by the
> * 32-bit long PFNs, but we need the full mask to make sure
> * inverted PROT_NONE entries have all the host bits set
> * in a guest. The real limit is still 44 bits.
> */
> #ifdef CONFIG_X86_PAE
> # define __PHYSICAL_MASK_SHIFT 52
> # define __VIRTUAL_MASK_SHIFT 32
> #else
> # define __PHYSICAL_MASK_SHIFT 32
> # define __VIRTUAL_MASK_SHIFT 32
> #endif
>
> ?

My main concern was the double definition: it looks like a bug. But if
it's intentional, I'm OK with it. In the patch below, could you please
add a note to the comment saying that __VIRTUAL_MASK_SHIFT is
intentionally defined twice?

Thanks,
Yury

> Thanks,
>
> Ingo
>
> ===============>
> From: Ingo Molnar <mingo@kernel.org>
> Date: Thu, 9 May 2019 10:59:44 +0200
> Subject: [PATCH] x86/mm: Clean up the __[PHYSICAL/VIRTUAL]_MASK_SHIFT definitions a bit
>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
> arch/x86/include/asm/page_32_types.h | 23 +++++++++++------------
> 1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
> index 565ad755c785..009e96d4b6d4 100644
> --- a/arch/x86/include/asm/page_32_types.h
> +++ b/arch/x86/include/asm/page_32_types.h
> @@ -26,20 +26,19 @@
>
> #define N_EXCEPTION_STACKS 1
>
> -#ifdef CONFIG_X86_PAE
> /*
> - * This is beyond the 44 bit limit imposed by the 32bit long pfns,
> - * but we need the full mask to make sure inverted PROT_NONE
> - * entries have all the host bits set in a guest.
> - * The real limit is still 44 bits.
> + * 52 bits on PAE is beyond the 44-bit limit imposed by the
> + * 32-bit long PFNs, but we need the full mask to make sure
> + * inverted PROT_NONE entries have all the host bits set
> + * in a guest. The real limit is still 44 bits.
> */
> -#define __PHYSICAL_MASK_SHIFT 52
> -#define __VIRTUAL_MASK_SHIFT 32
> -
> -#else /* !CONFIG_X86_PAE */
> -#define __PHYSICAL_MASK_SHIFT 32
> -#define __VIRTUAL_MASK_SHIFT 32
> -#endif /* CONFIG_X86_PAE */
> +#ifdef CONFIG_X86_PAE
> +# define __PHYSICAL_MASK_SHIFT 52
> +# define __VIRTUAL_MASK_SHIFT 32
> +#else
> +# define __PHYSICAL_MASK_SHIFT 32
> +# define __VIRTUAL_MASK_SHIFT 32
> +#endif
>
> /*
> * Kernel image size is limited to 512 MB (see in arch/x86/kernel/head_32.S)
