Subject: Re: [PATCHv2 1/3] x86/mm: Provide pmdp_establish() helper
Date: 2017-06-19
Kirill A. Shutemov <kirill.shutemov@linux.intel.com> wrote:

> We need an atomic way to set up a pmd page table entry, avoiding races
> with the CPU setting dirty/accessed bits. This is required to implement
> pmdp_invalidate() in a way that doesn't lose these bits.
>
> On PAE we have to use cmpxchg8b, as we cannot assume what the value of the
> new pmd is, and setting it up half-by-half can expose a broken, corrupted
> entry to the CPU.
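For reference, an atomic 64-bit replace on PAE could look roughly like the
sketch below. The helper name is made up for illustration and this is not
the patch itself; it only shows why cmpxchg8b (via cmpxchg64()) avoids
exposing a half-written entry:

        /*
         * Illustrative sketch only: replace the whole 64-bit pmd in one
         * shot, so the CPU never observes a torn entry and concurrent
         * dirty/accessed updates are retried rather than lost.
         */
        static inline pmd_t pae_pmdp_establish_sketch(pmd_t *pmdp, pmd_t pmd)
        {
                pmd_t old;

                do {
                        old = *pmdp;
                } while (cmpxchg64(&pmdp->pmd, old.pmd, pmd.pmd) != old.pmd);

                return old;
        }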

...

>
> +#ifndef pmdp_establish
> +#define pmdp_establish pmdp_establish
> +static inline pmd_t pmdp_establish(pmd_t *pmdp, pmd_t pmd)
> +{
> +	if (IS_ENABLED(CONFIG_SMP)) {
> +		return xchg(pmdp, pmd);
> +	} else {
> +		pmd_t old = *pmdp;
> +		*pmdp = pmd;

I think you may want to use WRITE_ONCE() here - otherwise nothing guarantees
that the compiler will not split the write to *pmdp. Although the kernel uses
similar code for setting PTEs and PMDs, I think it is best to start fixing
it. Obviously, you might need a different code path for 32-bit kernels.
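Roughly what I have in mind (a sketch of the !SMP branch only, following the
names in the quoted patch rather than a final implementation):

        static inline pmd_t pmdp_establish(pmd_t *pmdp, pmd_t pmd)
        {
                if (IS_ENABLED(CONFIG_SMP)) {
                        return xchg(pmdp, pmd);
                } else {
                        pmd_t old = *pmdp;

                        /* WRITE_ONCE() keeps the compiler from tearing the store */
                        WRITE_ONCE(*pmdp, pmd);
                        return old;
                }
        }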

Regards,
Nadav