lkml.org 
[lkml]   [2019]   [May]   [13]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
    /
    Date
    From
    SubjectRe: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
    On Sat, May 11, 2019 at 09:11:42PM -0700, Matthew Wilcox wrote:
    > On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
    > > On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
    > > > On 5/10/19 3:43 PM, Kees Cook wrote:
    > > > > This feature continues to cause more problems than it solves[1]. Its
    > > > > intention was to check the bounds of page-allocator allocations by using
    > > > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
    > > > > markings. This work has been on hold and there is an argument[2]
    > > > > that such markings are not even the correct signal for checking for
    > > > > same-allocation pages. Instead of depending on BROKEN, this just removes
    > > > > it entirely. It can be trivially reverted if/when a better solution for
    > > > > tracking page allocator sizes is found.
    > > > >
    > > > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
    > > > > [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
    > > >
    > > > I agree the page spanning is broken but is it worth keeping the
    > > > checks against __rodata __bss etc.?
    > >
    > > They're all just white-listing later checks (except RODATA which is
    > > doing a cheap RO test which is redundant on any architecture with actual
    > > rodata...) so they don't have any value in staying without the rest of
    > > the page allocator logic.
    > >
    > > > > - /* Is the object wholly within one base page? */
    > > > > - if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
    > > > > - ((unsigned long)end & (unsigned long)PAGE_MASK)))
    > > > > - return;
    > > > > -
    > > > > - /* Allow if fully inside the same compound (__GFP_COMP) page. */
    > > > > - endpage = virt_to_head_page(end);
    > > > > - if (likely(endpage == page))
    > > > > - return;
    > >
    > > We _could_ keep the mixed CMA/reserved/neither check if we really wanted
    > > to, but that's such a corner case of a corner case, I'm not sure it's
    > > worth doing the virt_to_head_page() across the whole span to figure
    > > it out.
    >
    > I'd delete that first check, because it's a subset of the second check,

    It seemed easier to short-circuit with a math test before doing the slightly more expensive virt_to_head_page(end) call. Do you think that's sensible?

    >
    > /* Is the object wholly within a single (base or compound) page? */
    > endpage = virt_to_head_page(end);
    > if (likely(endpage == page))
    > return;
    >
    > /*
    > * If the start and end are more than MAX_ORDER apart, they must
    > * be from separate allocations
    > */
    > if (n >= (PAGE_SIZE << MAX_ORDER))
    > usercopy_abort("spans too many pages", NULL, to_user, 0, n);
    >
    > /*
    > * If neither page is compound, we can't tell if the object is
    > * within a single allocation or not
    > */
    > if (!PageCompound(page) && !PageCompound(endpage))
    > return;
    >
    > > I really wish we had size of allocation reliably held somewhere. We'll
    > > need it for doing memory tagging of the page allocator too...
    >
    > I think we'll need to store those allocations in a separate data structure
    > on the side. As far as the rest of the kernel is concerned, those struct
    > pages belong to them once the page allocator has given them.

    Okay, let me work up a page-type refactoring while allocation size can
    stay back-burnered.

    --
    Kees Cook

    \
     
     \ /
      Last update: 2019-05-13 23:33    [W:3.371 / U:0.020 seconds]
    ©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site