    Subject: Re: [RFC PATCH v3 00/16] Core scheduling v3
    On 8/28/19 9:01 AM, Peter Zijlstra wrote:
    > On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote:
    >> On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote:
    >
    >> The current core scheduler implementation, I believe, still has (theoretical?)
    >> holes involving interrupts, once/if those are closed it may be even less
    >> attractive.
    >
    > No; so MDS leaks anything the other sibling (currently) does, which makes
    > _any_ privilege boundary a synchronization context.
    >
    > Worse still, the exploit doesn't require a VM at all, any other task can
    > get to it.
    >
    > That means you get to sync the siblings on lovely things like system
    > call entry and exit, along with VMM and anything else that one would
    > consider a privilege boundary. Now, system calls are not rare, they
    > are really quite common in fact. Trying to sync up siblings at the rate
    > of system calls is utter madness.
    >
    > So under MDS, SMT is completely hosed. If you use VMs exclusively, then
    > it _might_ work because a 'pure' host doesn't schedule that often
    > (maybe, same assumption as for L1TF).
    >
    > Now, there have been proposals of moving the privilege boundary further
    > into the kernel. Just like PTI exposes the entry stack and code to
    > Meltdown, the thinking is, let's expose more. By moving the priv boundary
    > the hope is that we can do lots of common system calls without having to
    > sync up -- lots of details are 'pending'.
    >

    If we are willing to consider the idea that we sync with the sibling
    only when we touch potentially sensitive user data, then a significant
    portion of syscalls may not need to sync. Yeah, it still sucks because
    of the complexity added to audit all the places in the kernel that may
    touch privileged data and require synchronization.
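
    Roughly, the bracketing could look like the sketch below. This is just
    an illustration: every name in it (sensitive_access_begin/end(),
    core_sync_sibling(), core_release_sibling(), decrypt_into()) is made
    up rather than taken from the prototype or any posted patches. The
    point is only that the sibling sync happens around the regions that
    actually touch privileged data, not at every syscall entry and exit.

        /*
         * Sketch only -- all helpers here are hypothetical, not real
         * kernel API.  Code that touches privileged or user data is
         * bracketed; only there do we force the SMT sibling out of
         * untrusted context.  Syscalls that never reach such a region
         * skip the sync entirely.
         */
        static inline void core_sync_sibling(void)
        {
                /* kick the sibling out of user/guest mode (hypothetical) */
        }

        static inline void core_release_sibling(void)
        {
                /* let the sibling resume untrusted code (hypothetical) */
        }

        static inline void sensitive_access_begin(void)
        {
                core_sync_sibling();
        }

        static inline void sensitive_access_end(void)
        {
                core_release_sibling();
        }

        /* hypothetical crypto helper that reads key material */
        extern long decrypt_into(void *dst, const void *src, unsigned long len);

        /* e.g. a read() path that decrypts file data (touches key material) */
        static long read_encrypted_block(void *dst, const void *src, unsigned long len)
        {
                long ret;

                sensitive_access_begin();
                ret = decrypt_into(dst, src, len);
                sensitive_access_end();

                return ret;
        }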

    I did a prototype (without core sched); the kernel build slowed down by 2.5%.
    So this use case still seems reasonable.

    A worst-case scenario is concurrent SMT FIO writes to an encrypted file,
    which trigger a lot of synchronizations because the crypto code makes
    extended accesses to privileged data; there we slowed down by 9%.

    Tim
