Subject: Re: [PATCH 12/30] x86, kaiser: map GDT into user page tables

On Tue, Nov 21, 2017 at 3:42 PM, Dave Hansen
<dave.hansen@linux.intel.com> wrote:
> On 11/21/2017 03:32 PM, Andy Lutomirski wrote:
>>> To do this, we need to special-case the kernel page table walker to deal
>>> with PTEs only since we can't just grab PMD or PUD flags and stick them
>>> in a PTE. We would only be able to use this path when populating things
>>> that we know are 4k-mapped in the kernel.
>> I'm not sure I'm understanding the issue. We'd promise to map the
>> cpu_entry_area without using large pages, but I'm not sure I know what
>> you're referring to. The only issue I see is that we'd have to be
>> quite careful when tearing down the user tables to avoid freeing the
>> shared part.
>
> It's just that it currently handles large and small pages in the kernel
> mapping that it's copying. If we want to have it just copy the PTE,
> we've got to refactor things a bit to separate out the PTE flags from
> the paddr being targeted, and also make sure we don't munge the flags
> conversion from the large-page entries to 4k PTEs. The PAT and PSE bits
> cause a bit of trouble here.
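
(Aside on that last point, purely for illustration: in a 2M/1G entry bit 7 is
PSE and the PAT bit sits at bit 12, while in a 4k PTE bit 7 is PAT, so the
flags can't simply be copied across. x86 already has pgprot_large_2_4k() for
the bit shuffling; roughly:

	/* Illustration only -- pmd_large_to_pte_prot() is a made-up name,
	 * not something in the series. */
	static pgprot_t pmd_large_to_pte_prot(pmd_t pmd)
	{
		/* pgprot_large_2_4k() moves PAT down from bit 12 and
		 * drops bit 7, which is PSE in the large-page encoding. */
		return pgprot_large_2_4k(pmd_pgprot(pmd));
	}

so the flags conversion is the fiddly part, not the walk itself.)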

I'm confused. I mean something like:

unsigned long start = (unsigned long)get_cpu_entry_area(cpu);
unsigned long addr;

for (addr = start; addr < start + sizeof(struct cpu_entry_area);
     addr += PAGE_SIZE) {
	pte_t pte = *pte_offset_k(addr); /* or however you do this */
	kaiser_add_mapping(pte_pfn(pte), pte_prot(pte));
}

modulo the huge pile of typos in there that surely exist.
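
Slightly more fleshed out (still untested; lookup_address() is real on x86,
but I'm guessing at kaiser_add_mapping()'s addr/size/flags signature from the
series, so treat that part as an assumption):

	unsigned long start = (unsigned long)get_cpu_entry_area(cpu);
	unsigned long end = start + sizeof(struct cpu_entry_area);
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		unsigned int level;
		pte_t *pte = lookup_address(addr, &level);

		/* cpu_entry_area is promised to be 4k-mapped in the kernel. */
		if (!pte || pte_none(*pte) || level != PG_LEVEL_4K)
			continue;

		kaiser_add_mapping(addr, PAGE_SIZE, pte_flags(*pte));
	}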

But I still prefer my approach of just sharing the cpu_entry_area pmd
entries between the user and kernel tables.
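
Roughly (hand-waving how you get at the user copy of the tables --
user_pgd_for() below is made up -- and assuming the intermediate user
p4d/pud/pmd pages already exist), something like:

	static void __init share_cpu_entry_area_pmds(unsigned long start,
						     unsigned long end)
	{
		unsigned long addr;

		for (addr = start; addr < end; addr += PMD_SIZE) {
			pgd_t *k_pgd = pgd_offset_k(addr);
			p4d_t *k_p4d = p4d_offset(k_pgd, addr);
			pud_t *k_pud = pud_offset(k_p4d, addr);
			pmd_t *k_pmd = pmd_offset(k_pud, addr);

			/* Placeholder for however the shadow/user pgd is
			 * looked up; not a real helper. */
			pgd_t *u_pgd = user_pgd_for(addr);
			p4d_t *u_p4d = p4d_offset(u_pgd, addr);
			pud_t *u_pud = pud_offset(u_p4d, addr);
			pmd_t *u_pmd = pmd_offset(u_pud, addr);

			/* Both tables now point at the same pte page. */
			set_pmd(u_pmd, *k_pmd);
		}
	}

done once when the cpu_entry_area is set up, with the caveat from above that
the user-table teardown must not free the shared part.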
