 
    Subject: Re: [PATCH] mm: avoid slub allocation while holding list lock
    Yu Zhao wrote:
    > I think we can safely assume PAGE_SIZE is unsigned long aligned and
    > page->objects is non-zero. But if you don't feel comfortable with these
    > assumptions, I'd be happy to ensure them explicitly.

    I know PAGE_SIZE is unsigned long aligned. But if someone later changes this
    from "dynamic allocation" to "on stack", get_order() will no longer be called
    and the bug will show up.
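    To illustrate what I mean (this is only my sketch of the two patterns, not
    the exact code in your patch; the sizing expression is my assumption):

        /* dynamic allocation: get_order() rounds the request up to a whole
         * page, so an imprecise byte count is silently absorbed */
        unsigned long *map = (unsigned long *)__get_free_pages(GFP_ATOMIC,
                        get_order(BITS_TO_LONGS(page->objects) * sizeof(unsigned long)));

        /* on stack: the buffer is exactly as large as the expression says,
         * nothing rounds it up, so a sizing mistake is no longer hidden */
        unsigned long map_onstack[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];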

    I don't know whether __get_free_page(GFP_ATOMIC) can temporarily consume more
    than 4096 bytes, but if it can, we might want to avoid "dynamic allocation".

    By the way, if "struct kmem_cache_node" is an object which won't have many
    thousands of instances, can't we embed that buffer into "struct kmem_cache_node",
    given that the max size of that buffer is only 4096 bytes?
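    Something like this is what I have in mind (just a sketch; the field name is
    made up and the existing fields are elided). If I read mm/slub.c right,
    MAX_OBJS_PER_PAGE is 32767, so BITS_TO_LONGS() gives 512 unsigned longs on
    64-bit, i.e. exactly 4096 bytes per node:

        /* sketch only: embed the object bitmap in each node so that
         * list_slab_objects()/validate_slab() need no allocation while
         * this node's list_lock is held */
        struct kmem_cache_node {
                spinlock_t list_lock;
                /* ... existing fields ... */
                unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
        };

    Assuming every user of that bitmap already holds the node's list_lock, access
    would be serialized for free; the cost is the extra 4096 bytes per node.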
