From: Andy Lutomirski <luto@kernel.org>
Date: Fri, 11 Dec 2015
Subject: Re: [PATCH 2/6] mm: Add vm_insert_pfn_prot

On Fri, Dec 11, 2015 at 2:33 PM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Thu, 10 Dec 2015 19:21:43 -0800 Andy Lutomirski <luto@kernel.org> wrote:
>
>> The x86 vvar mapping contains pages with differing cacheability
>> flags. This is currently only supported using (io_)remap_pfn_range,
>> but those functions can't be used inside page faults.
>
> Foggy. What does "support" mean here?

We currently have a hack in which every x86 mm has a "vvar" vma whose
.fault handler always fails (it's the vm_special_mapping fault handler
backed by an empty pages array). To make everything work, the vdso code
instead uses remap_pfn_range and io_remap_pfn_range at mm startup to
poke the pfns directly into the page tables.
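
For context, the startup-time side of the hack looks roughly like the
sketch below. This is illustrative only, not the literal x86 vdso code:
__vvar_page and hpet_address are the real kernel symbols, but the helper
name and the page layout here are made up.

static int map_vvar_hack(struct vm_area_struct *vma)
{
	unsigned long addr = vma->vm_start;
	int ret;

	/* Ordinary kernel page: default (cacheable) protection is fine. */
	ret = remap_pfn_range(vma, addr,
			      __pa_symbol(&__vvar_page) >> PAGE_SHIFT,
			      PAGE_SIZE, PAGE_READONLY);
	if (ret)
		return ret;

	/* The HPET page is a genuine IO page, so it must be uncached. */
	return io_remap_pfn_range(vma, addr + PAGE_SIZE,
				  hpet_address >> PAGE_SHIFT, PAGE_SIZE,
				  pgprot_noncached(PAGE_READONLY));
}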

I'd much rather implement this using the new .fault mechanism. The
canonical way to implement .fault seems to be vm_insert_pfn, but
vm_insert_pfn doesn't allow setting per-page cacheability.
Unfortunately, one of the three x86 vvar pages needs to be uncacheable
because it's a genuine IO page, so I can't use vm_insert_pfn.
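
Concretely, the .fault handler I have in mind would look something like
this sketch (written against the current .fault signature; VVAR_NR and
HPET_NR are placeholder page indices, but vm_insert_pfn_prot has the
signature this patch adds):

static int vvar_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	pgprot_t prot = vma->vm_page_prot;
	unsigned long pfn;
	int ret;

	if (vmf->pgoff == VVAR_NR) {
		pfn = __pa_symbol(&__vvar_page) >> PAGE_SHIFT;
	} else if (vmf->pgoff == HPET_NR) {
		/* The one genuine IO page: force it uncacheable. */
		pfn = hpet_address >> PAGE_SHIFT;
		prot = pgprot_noncached(prot);
	} else {
		return VM_FAULT_SIGBUS;
	}

	ret = vm_insert_pfn_prot(vma, (unsigned long)vmf->virtual_address,
				 pfn, prot);
	if (ret == 0 || ret == -EBUSY)
		return VM_FAULT_NOPAGE;
	return VM_FAULT_SIGBUS;
}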

I suppose I could just call io_remap_pfn_range from .fault, but I think
that's frowned upon, though admittedly I'm not really sure *why*. It
goes way back to 2007
(e0dc0d8f4a327d033bfb63d43f113d5f31d11b3c), when .fault got fancier.

>
>> Add vm_insert_pfn_prot to support varying cacheability within the
>> same non-COW VMA in a more sane manner.
>
> Here, "support" presumably means "insertion of pfns". Can we spell all
> this out more completely please?

Yes, will fix.

>
>> x86 needs this to avoid a CRIU-breaking and memory-wasting explosion
>> of VMAs when supporting userspace access to the HPET.
>>
>
> Otherwise, Ack.

--Andy

--
Andy Lutomirski
AMA Capital Management, LLC

