Date: 2013-08-13
From: Ingo Molnar
Subject: Re: [RFC v3 0/5] Transparent on-demand struct page initialization embedded in the buddy allocator

    * Nathan Zimmer <nzimmer@sgi.com> wrote:

    > We are still restricting ourselves to 2MiB initialization. This was
    > initially to keep the patch set a little smaller and clearer. However,
    > given how well it is currently performing, I don't see how much better
    > it could be with 2GiB chunks.
    >
    > As far as extra overhead goes, we incur an extra function call to
    > ensure_page_is_initialized, but that is only really expensive when we
    > find uninitialized pages; otherwise it is a flag check once every
    > PTRS_PER_PMD. [...]
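
    (A minimal stand-alone sketch of the scheme described above, assuming
    a single "initialized" flag is kept on the head page of each aligned
    PTRS_PER_PMD-page block and checked before use; the helper and type
    names are illustrative and not taken from the patch set:)

        /*
         * Stand-alone model of the scheme described in the quote:
         * struct page initialization is deferred, and one
         * "initialized" flag on the head page of each aligned
         * PTRS_PER_PMD-page (2MiB) block tells us whether the slow
         * path is needed.  Names and types here are illustrative,
         * not the patch set's own code.
         */
        #include <stdbool.h>
        #include <stdio.h>

        #define PTRS_PER_PMD 512UL
        #define NR_PAGES     (8 * PTRS_PER_PMD)

        struct page {
                bool initialized;  /* set for every page once its block is done */
        };

        static struct page mem_map[NR_PAGES];

        /* Slow path: initialize every page in pfn's 2MiB block. */
        static void init_pmd_block(unsigned long pfn)
        {
                unsigned long head = pfn & ~(PTRS_PER_PMD - 1);
                unsigned long i;

                for (i = head; i < head + PTRS_PER_PMD; i++)
                        mem_map[i].initialized = true;
        }

        /* Fast path: a single flag check on the block head. */
        static void ensure_page_is_initialized(unsigned long pfn)
        {
                if (!mem_map[pfn & ~(PTRS_PER_PMD - 1)].initialized)
                        init_pmd_block(pfn);
        }

        int main(void)
        {
                ensure_page_is_initialized(1000);       /* pays the init cost */
                ensure_page_is_initialized(1001);       /* flag check only    */
                printf("pfn 1001 initialized: %d\n", mem_map[1001].initialized);
                return 0;
        }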

    Mind expanding on this in more detail?

    The main fastpath overhead we are really interested in is the 'memory is
    already fully initialized and we reallocate a second time' case - i.e. the
    *second* (and subsequent) post-initialization allocation of any page
    range.

    Those allocations are the ones that matter most: they will occur again and
    again, for the lifetime of the booted-up system.

    What extra overhead is there in that case? Only a flag check that is
    merged into an existing flag check (in free_pages_check()) and thus is
    essentially zero overhead? Or is it more involved - if yes, why?

    One would naively think that nothing but the flags check is needed in this
    case: if all 512 pages in an aligned 2MB block are fully initialized, and
    marked as initialized in all the 512 page heads, then no other runtime
    check will be needed in the future.
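
    (A rough stand-alone sketch of that expectation, assuming the
    deferred-init state is a page flag folded into the mask that a
    free_pages_check()-style test already applies; the bit and mask names
    below are made up for illustration:)

        /*
         * Rough model of the "essentially zero overhead" expectation:
         * a hypothetical PG_uninitialized bit is folded into the
         * bad-flags mask that a free_pages_check()-style test already
         * applies, so a fully initialized page pays no extra branch on
         * its second and later allocations.  The bit and mask names
         * are made up for this example.
         */
        #include <stdio.h>

        #define PG_locked        (1UL << 0)
        #define PG_reserved      (1UL << 1)
        #define PG_uninitialized (1UL << 2)  /* hypothetical deferred-init bit */

        /* Bits the existing free-path sanity check already looks for. */
        #define FLAGS_CHECK_AT_FREE (PG_locked | PG_reserved)

        static int page_needs_work(unsigned long flags)
        {
                /* One combined test: same cost as the pre-patch check. */
                return (flags & (FLAGS_CHECK_AT_FREE | PG_uninitialized)) != 0;
        }

        int main(void)
        {
                unsigned long initialized_page = 0;
                unsigned long deferred_page = PG_uninitialized;

                printf("initialized page needs work: %d\n",
                       page_needs_work(initialized_page));
                printf("deferred page needs work:    %d\n",
                       page_needs_work(deferred_page));
                return 0;
        }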

    Thanks,

    Ingo

