Date: Tue, 4 Jul 2017
From: Thomas Gleixner
Subject: Re: [PATCH v2] X86: don't report PAT on CPUs that don't support it
On Mon, 3 Jul 2017, Mikulas Patocka wrote:
> Is there any progress with this patch? Will you accept it or do you want
> some changes to it?

Aside from the unparseable changelog, that patch is mostly duct tape.

1) __pat_enabled should be renamed to pat_disabled, as that is the actual
purpose of that variable.

2) Making the call to init_cache_modes() conditional in setup_arch() is
pointless. init_cache_modes() has its own protection against multiple
invocations (see the short sketch after this list).

3) It adds yet another invocation of init_cache_modes() instead of getting
rid of the ones in pat_disable() and the pat disabled case in pat_init().
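
That protection is the usual once-only guard. A minimal stand-alone sketch
of the pattern (plain C model for illustration, not the kernel code):

#include <stdio.h>

/*
 * Simplified model of init_cache_modes(): the static flag makes repeated
 * calls harmless, so callers do not need to guard the invocation themselves.
 */
static void init_cache_modes_model(void)
{
        static int init_cm_done;

        if (init_cm_done)
                return;

        printf("setting up cache modes\n");     /* runs only once */
        init_cm_done = 1;
}

int main(void)
{
        init_cache_modes_model();       /* does the work */
        init_cache_modes_model();       /* second call is a no-op */
        return 0;
}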

I've reworked the whole thing into the patch below.

Thanks,

tglx

8<---------------------
Subject: x86/mm/pat: Don't report PAT on CPUs that don't support it
From: Mikulas Patocka <mpatocka@redhat.com>
Date: Tue, 6 Jun 2017 18:49:39 -0400 (EDT)

The pat_enabled() logic is broken on CPUs which do not support PAT and
where the initialization code fails to call pat_init(). As a result, the
enabled flag stays set and pat_enabled() wrongly returns true.

As a consequence the mappings, e.g. for Xorg, are set up with the wrong
caching mode and the required MTRR setups are omitted.
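
To make the failure mode concrete, here is a minimal stand-alone model of
the current logic (plain C, not kernel code; only the flag handling is
mirrored and the "MSR programmed" variable is a hypothetical stand-in):

#include <stdbool.h>
#include <stdio.h>

/* The flag defaults to enabled and is only ever cleared, never confirmed. */
static int __pat_enabled = 1;           /* IS_ENABLED(CONFIG_X86_PAT) */

static bool pat_enabled(void)
{
        return !!__pat_enabled;
}

int main(void)
{
        /*
         * CPU without PAT: pat_init() is never reached, so the PAT MSR is
         * never programmed and the cache modes are never set up ...
         */
        bool pat_msr_programmed = false;

        /*
         * ... yet pat_enabled() still reports true, so callers pick a PAT
         * based caching mode for their mappings.
         */
        printf("pat_enabled() = %d, PAT MSR programmed = %d\n",
               pat_enabled(), (int)pat_msr_programmed);
        return 0;
}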

To cure this, the following changes are required:

1) Make pat_enabled() return true only if PAT initialization was
invoked and successful.

2) Invoke init_cache_modes() unconditionally in setup_arch() and
remove the extra callsites in pat_disable() and the pat disabled
code path in pat_init().

Also rename __pat_enabled to pat_disabled to reflect the real purpose of
this variable.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Bernhard Held <berny156@gmx.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: "Luis R. Rodriguez" <mcgrof@suse.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>

---
 arch/x86/include/asm/pat.h |    1 +
 arch/x86/kernel/setup.c    |    7 +++++++
 arch/x86/mm/pat.c          |   22 +++++++-------------
 3 files changed, 17 insertions(+), 13 deletions(-)

--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -7,6 +7,7 @@
 bool pat_enabled(void);
 void pat_disable(const char *reason);
 extern void pat_init(void);
+extern void init_cache_modes(void);

 extern int reserve_memtype(u64 start, u64 end,
                enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1076,6 +1076,13 @@ void __init setup_arch(char **cmdline_p)
         max_possible_pfn = max_pfn;

         /*
+         * This call is required when the CPU does not support PAT. If
+         * mtrr_bp_init() invoked it already via pat_init() the call has no
+         * effect.
+         */
+        init_cache_modes();
+
+        /*
          * Define random base addresses for memory sections after max_pfn is
          * defined and before each memory section base is used.
          */
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -37,14 +37,13 @@
 #undef pr_fmt
 #define pr_fmt(fmt) "" fmt

-static bool boot_cpu_done;
-
-static int __read_mostly __pat_enabled = IS_ENABLED(CONFIG_X86_PAT);
-static void init_cache_modes(void);
+static bool __read_mostly boot_cpu_done;
+static bool __read_mostly pat_disabled = !IS_ENABLED(CONFIG_X86_PAT);
+static bool __read_mostly pat_initialized;

 void pat_disable(const char *reason)
 {
-        if (!__pat_enabled)
+        if (pat_disabled)
                 return;

         if (boot_cpu_done) {
@@ -52,10 +51,8 @@ void pat_disable(const char *reason)
                 return;
         }

-        __pat_enabled = 0;
+        pat_disabled = true;
         pr_info("x86/PAT: %s\n", reason);
-
-        init_cache_modes();
 }

 static int __init nopat(char *str)
@@ -67,7 +64,7 @@ early_param("nopat", nopat);

 bool pat_enabled(void)
 {
-        return !!__pat_enabled;
+        return pat_initialized;
 }
 EXPORT_SYMBOL_GPL(pat_enabled);

@@ -225,6 +222,7 @@ static void pat_bsp_init(u64 pat)
         }

         wrmsrl(MSR_IA32_CR_PAT, pat);
+        pat_initialized = true;

         __init_cache_modes(pat);
 }
@@ -242,7 +240,7 @@ static void pat_ap_init(u64 pat)
         wrmsrl(MSR_IA32_CR_PAT, pat);
 }

-static void init_cache_modes(void)
+void init_cache_modes(void)
 {
         u64 pat = 0;
         static int init_cm_done;
@@ -306,10 +304,8 @@ void pat_init(void)
         u64 pat;
         struct cpuinfo_x86 *c = &boot_cpu_data;

-        if (!pat_enabled()) {
-                init_cache_modes();
+        if (pat_disabled)
                 return;
-        }

         if ((c->x86_vendor == X86_VENDOR_INTEL) &&
             (((c->x86 == 0x6) && (c->x86_model <= 0xd)) ||
