 
Date: 2016-09-30
From: Paolo Bonzini
Subject: Re: [PATCH v3 0/4] implement vcpu preempted check
> > > > Please consider s390 and (x86/arm) KVM. Once we have a few, more can
> > > > follow later, but I think it's important to not only have PPC support for
> > > > this.
> > >
> > > Actually the s390 preempted check via sigp sense running is available for
> > > all hypervisors (z/VM, LPAR and KVM) which implies everywhere as you can
> > > no longer buy s390 systems without LPAR.
> > >
> > > As Heiko already pointed out we could simply use a small inline function
> > > that calls cpu_is_preempted from arch/s390/lib/spinlock (or
> > > smp_vcpu_scheduled from smp.c)
> >
> > Sure, and I had vague memories of Heiko's email. This patch set however
> > completely fails to do that trivial hooking up.
>
> sorry for that.
> I will try to work it out on x86.
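
For s390, the wrapper suggested above really could be tiny. A minimal
sketch, assuming the generic hook ends up being called
vcpu_is_preempted() and that smp_vcpu_scheduled() keeps its current
meaning of "the hypervisor is currently running this vCPU" (both are
assumptions, not something this series defines):

	/*
	 * Hypothetical s390 wrapper: treat a vCPU that the hypervisor is
	 * not currently running as preempted, reusing the existing SIGP
	 * sense running logic behind smp_vcpu_scheduled().
	 */
	static inline bool arch_vcpu_is_preempted(int cpu)
	{
		return !smp_vcpu_scheduled(cpu);
	}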

x86 has no hypervisor-side support for this yet, and I'd like to
understand the desired semantics first, so I don't think x86 should
block this series. In particular, there are at least the following
choices:

1) an exit to userspace (5,000-10,000 clock cycles best case) counts as
lock holder preemption

2) any time the vCPU thread is not running counts as lock holder
preemption

To implement the latter you'd need a hypercall or MSR (at least as
a slow path), because the KVM preempt notifier is only active
during the KVM_RUN ioctl.
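
Just to make the discussion concrete, a rough guest-side sketch of what
this could look like, assuming the host publishes a per-vCPU
"preempted" flag in guest-visible memory (e.g. next to the steal time
record) whenever the vCPU thread is scheduled out; the structure,
field and function names below are made up for illustration, not an
existing interface:

	/*
	 * Illustrative only: a flag the host would set when the vCPU
	 * thread stops running and clear when it runs again.  How the
	 * guest tells the host where this lives (MSR, hypercall, or a
	 * steal-time extension) is exactly the open question above.
	 */
	struct vcpu_runstate {
		u8 preempted;
	} __aligned(64);

	static DEFINE_PER_CPU(struct vcpu_runstate, vcpu_runstate);

	static bool kvm_vcpu_is_preempted(int cpu)
	{
		return READ_ONCE(per_cpu(vcpu_runstate, cpu).preempted);
	}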

Paolo
