Subject: Re: [tip:x86/platform] x86/hyper-v: Use hypercall for remote TLB flush
From: Juergen Gross
Date: 2017-08-11 14:47
On 11/08/17 14:35, Peter Zijlstra wrote:
> On Fri, Aug 11, 2017 at 02:22:25PM +0200, Juergen Gross wrote:
>> Wait - the TLB can be flushed at any time, as Andrew was pointing out.
>> No cpu can rely on an address being accessible just because IF is
>> cleared. All that matters is an existing, valid page table entry.
>>
>> So clearing IF on a cpu isn't meant to protect TLB entries from being
>> flushed, but just to avoid interrupts (as the name of the flag
>> suggests).
>
> Yes, but by holding off the TLB invalidate IPI, we hold off the freeing
> of the concurrently unhooked page-table.
>
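To make that interplay concrete, here is a minimal sketch of the pattern
being described. walker_side(), unmap_side() and walk_page_tables() are
made-up names for illustration only; the real code lives in the kernel's
gup-fast and TLB-flush paths.

#include <linux/mm.h>
#include <asm/tlbflush.h>

/* A cpu doing a lockless (gup-fast style) page table walk. */
static int walker_side(unsigned long addr)
{
        unsigned long flags;
        int ret;

        local_irq_save(flags);          /* IF=0: the flush IPI cannot run here */
        ret = walk_page_tables(addr);   /* hypothetical walk touching pmd/pte pages */
        local_irq_restore(flags);       /* IF=1: a pending flush IPI runs now */

        return ret;
}

/* A cpu unhooking a page table page and freeing it. */
static void unmap_side(struct mm_struct *mm, pmd_t *pmd, struct page *pte_page)
{
        pmd_clear(pmd);                 /* unhook the page table page */
        flush_tlb_mm(mm);               /* on bare metal: sends IPIs and returns
                                         * only after all target cpus ran the
                                         * flush, i.e. after every IRQ-off
                                         * walker has finished */
        __free_page(pte_page);          /* only now is freeing the table safe */
}

With xen_flush_tlb_others() the IPI is replaced by a hypercall, so the
local_irq_save() above no longer delays the flush - which is the hole
being discussed below.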
>> In the Xen case the hypervisor does the following:
>>
>> - it checks whether any of the vcpus specified in the cpumask of the
>> flush request is running on any physical cpu
>> - if a running vcpu is found, an IPI is sent to that physical cpu and
>> the hypervisor does the TLB flush there
>
> And this will preempt a vcpu which could have IF cleared, right?
>
>> - any vcpu addressed by the flush that is not running is flagged to
>> flush its TLB the next time it is scheduled
>>
>> This ensures that no TLB entry covered by the flush can still be used
>> after xen_flush_tlb_others() returns.
>
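As a rough sketch of the hypervisor side of that description (every
identifier below is illustrative, this is not actual Xen code):

/* Illustrative paraphrase of the behaviour described above. */
static void hv_handle_guest_flush(struct domain *d, const unsigned long *vcpu_mask)
{
        struct vcpu *v;

        for_each_vcpu_in_mask(v, d, vcpu_mask) {        /* hypothetical iterator */
                if (vcpu_is_running(v)) {
                        /*
                         * The vcpu sits on a physical cpu right now: IPI
                         * that pcpu and do the TLB flush there.  This is a
                         * hypervisor IPI, delivered even while the guest
                         * has IF cleared, so the vcpu is preempted anyway.
                         */
                        flush_tlb_on_pcpu(vcpu_to_pcpu(v));
                } else {
                        /*
                         * Not running: just mark the vcpu so its TLB is
                         * flushed when it is scheduled the next time.
                         */
                        mark_tlb_flush_pending(v);
                }
        }
}

Which is also the point of Peter's question above: this IPI is taken by
the physical cpu regardless of the guest's IF, so a guest cpu sitting in
an IRQ-off page-table walk delays neither the flush nor the freeing that
follows the return of xen_flush_tlb_others().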
> But that is not a sufficient guarantee. We need IF being cleared to hold
> off the TLB invalidate IPI, and thereby hold off the freeing of our
> page-table pages.

Aah, okay. Now I understand the problem. The TLB itself isn't the issue;
the IPI serves two purposes here: TLB flushing (which is allowed to
happen at any time) and serializing access to the page-table pages about
to be freed (which, as you suggest, is broken in the Xen case).

Juergen
