Subject: Re: [PATCH v5 1/5] mm: introduce debug_pagealloc_{map,unmap}_pages() helpers
On 11/8/20 7:57 AM, Mike Rapoport wrote:
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
> return false;
> }
>
> -#ifdef CONFIG_DEBUG_PAGEALLOC
> static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
> {
> if (!is_debug_pagealloc_cache(cachep))
> return;

Hmm, I didn't notice earlier, sorry.
The is_debug_pagealloc_cache() check above already includes a
debug_pagealloc_enabled_static() check, so it should be fine to call
__kernel_map_pages() directly below. Otherwise we needlessly generate two
static key checks for the same key.

>
> - kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
> + if (map)
> + debug_pagealloc_map_pages(virt_to_page(objp),
> + cachep->size / PAGE_SIZE);
> + else
> + debug_pagealloc_unmap_pages(virt_to_page(objp),
> + cachep->size / PAGE_SIZE);
> }
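
IOW, something like this (untested sketch, assuming __kernel_map_pages() keeps
its current (page, numpages, enable) signature):

static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
{
	if (!is_debug_pagealloc_cache(cachep))
		return;

	/* is_debug_pagealloc_cache() already tested the static key */
	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
}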
>
> -#else
> -static inline void slab_kernel_map(struct kmem_cache *cachep, void *objp,
> - int map) {}
> -
> -#endif
> -
> static void poison_obj(struct kmem_cache *cachep, void *addr, unsigned char val)
> {
> int size = cachep->object_size;
> @@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
>
> #if DEBUG
> /*
> - * If we're going to use the generic kernel_map_pages()
> + * If we're going to use the generic debug_pagealloc_map_pages()
> * poisoning, then it's going to smash the contents of
> * the redzone and userword anyhow, so switch them off.
> */
>
