Subject: Re: [PATCH] mm: fix a data race in put_page()
On 06.02.20 14:17, Qian Cai wrote:
> page->flags could be accessed concurrently, as noticed by KCSAN:
>
> BUG: KCSAN: data-race in page_cpupid_xchg_last / put_page
>
> write (marked) to 0xfffffc0d48ec1a00 of 8 bytes by task 91442 on cpu 3:
> page_cpupid_xchg_last+0x51/0x80
> page_cpupid_xchg_last at mm/mmzone.c:109 (discriminator 11)
> wp_page_reuse+0x3e/0xc0
> wp_page_reuse at mm/memory.c:2453
> do_wp_page+0x472/0x7b0
> do_wp_page at mm/memory.c:2798
> __handle_mm_fault+0xcb0/0xd00
> handle_pte_fault at mm/memory.c:4049
> (inlined by) __handle_mm_fault at mm/memory.c:4163
> handle_mm_fault+0xfc/0x2f0
> handle_mm_fault at mm/memory.c:4200
> do_page_fault+0x263/0x6f9
> do_user_addr_fault at arch/x86/mm/fault.c:1465
> (inlined by) do_page_fault at arch/x86/mm/fault.c:1539
> page_fault+0x34/0x40
>
> read to 0xfffffc0d48ec1a00 of 8 bytes by task 94817 on cpu 69:
> put_page+0x15a/0x1f0
> page_zonenum at include/linux/mm.h:923
> (inlined by) is_zone_device_page at include/linux/mm.h:929
> (inlined by) page_is_devmap_managed at include/linux/mm.h:948
> (inlined by) put_page at include/linux/mm.h:1023
> wp_page_copy+0x571/0x930
> wp_page_copy at mm/memory.c:2615
> do_wp_page+0x107/0x7b0
> __handle_mm_fault+0xcb0/0xd00
> handle_mm_fault+0xfc/0x2f0
> do_page_fault+0x263/0x6f9
> page_fault+0x34/0x40
>
> Reported by Kernel Concurrency Sanitizer on:
> CPU: 69 PID: 94817 Comm: systemd-udevd Tainted: G W O L 5.5.0-next-20200204+ #6
> Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
>
> Both the read and the write are done with only the non-exclusive
> mmap_sem held. Since the read checks specific bits (up to three bits
> for now) in the flags word, load tearing could in theory trigger a
> logic bug.
>
> To fix it, we could introduce put_page_lockless() in those places,
> but that would be overkill and difficult to use. Instead, just add
> READ_ONCE() for the read in page_zonenum() for now; it should not
> affect performance or correctness, with the small trade-off that
> compilers might generate less efficient code in some places (a sketch
> of the load-tearing concern follows the quoted diff below).
>
> Signed-off-by: Qian Cai <cai@lca.pw>
> ---
> include/linux/mm.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 52269e56c514..f8529aa971c0 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -920,7 +920,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
>
> static inline enum zone_type page_zonenum(const struct page *page)
> {
> - return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
> + return (READ_ONCE(page->flags) >> ZONES_PGSHIFT) & ZONES_MASK;
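
For context, here is a minimal sketch of the load-tearing concern and
of what READ_ONCE() buys. The bit position, the mask width, and the
open-coded volatile cast are simplified stand-ins, not the exact kernel
definitions (the real ones live in include/linux/page-flags-layout.h
and include/linux/compiler.h):

#define ZONES_PGSHIFT	60	/* hypothetical bit position */
#define ZONES_MASK	0x7UL	/* hypothetical 3-bit mask */

struct page {
	unsigned long flags;	/* zone bits packed into the high end */
};

/* Plain load: the compiler is free to split it into narrower loads or
 * to re-read page->flags later, so a concurrent writer could be
 * observed half-way through an update ("load tearing"). */
static inline unsigned long zonenum_plain(const struct page *page)
{
	return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
}

/* READ_ONCE() essentially boils down to a volatile access: the
 * compiler must emit exactly one full-width load and may neither tear
 * nor repeat it. */
static inline unsigned long zonenum_once(const struct page *page)
{
	return (*(const volatile unsigned long *)&page->flags
		>> ZONES_PGSHIFT) & ZONES_MASK;
}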

I can understand why other bits/flags might change, but not the zone
number? Nobody should be changing that without heavy locking (outside
of memory hot(un)plug code). Or am I missing something? Can load
tearing actually produce an issue if these 3 bits will never change?
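
For reference, the concurrent writer in the report above,
page_cpupid_xchg_last(), only rewrites the LAST_CPUPID field of
page->flags. Roughly, paraphrasing mm/mmzone.c ('page' and the new
'cpupid' are the function's arguments):

	/* cmpxchg() loop that replaces only the LAST_CPUPID bits of
	 * page->flags -- the ZONE bits are copied through unchanged.
	 * The atomic cmpxchg() is why KCSAN calls this write "marked". */
	unsigned long old_flags, flags;
	int last_cpupid;

	do {
		old_flags = flags = page->flags;
		last_cpupid = (flags >> LAST_CPUPID_PGSHIFT) &
			      LAST_CPUPID_MASK;

		flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
		flags |= (cpupid & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
	} while (unlikely(cmpxchg(&page->flags, old_flags, flags) !=
			  old_flags));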

--
Thanks,

David / dhildenb
