    Subject: Re: [PATCH 03/18] TSC reset compensation
    On 07/18/2010 04:34 AM, Avi Kivity wrote:
    > On 07/13/2010 05:25 AM, Zachary Amsden wrote:
    >> Attempt to synchronize TSCs which are reset to the same value. In the
    >> case of a reliable hardware TSC, we can just re-use the same offset, but
    >> on non-reliable hardware, we can get closer by adjusting the offset to
    >> match the elapsed time.
    >>
    >> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
    >> index 3b4efe2..4b42893 100644
    >> --- a/arch/x86/include/asm/kvm_host.h
    >> +++ b/arch/x86/include/asm/kvm_host.h
    >> @@ -396,6 +396,9 @@ struct kvm_arch {
    >>  	unsigned long irq_sources_bitmap;
    >>  	s64 kvmclock_offset;
    >>  	spinlock_t tsc_write_lock;
    >> +	u64 last_tsc_nsec;
    >> +	u64 last_tsc_offset;
    >> +	u64 last_tsc_write;
    >
    > So that we know what the lock protects, let's have
    >
    > struct kvm_global_tsc {
    >         spinlock_t lock;
    >         ...
    > } tsc;
    >
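
    As a rough sketch of that grouping, using the three fields added by this
    patch (a sketch only: the struct name follows the suggestion above, and
    the comments are descriptive guesses, not text from the patch):

        struct kvm_global_tsc {
                spinlock_t lock;        /* protects the fields below */
                u64 last_tsc_nsec;      /* bootbased ns of the last guest TSC write */
                u64 last_tsc_write;     /* value the guest last wrote to the TSC */
                u64 last_tsc_offset;    /* offset computed for that write */
        } tsc;
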
    >> @@ -896,10 +896,39 @@ static DEFINE_PER_CPU(unsigned long, cpu_tsc_khz);
    >>  void guest_write_tsc(struct kvm_vcpu *vcpu, u64 data)
    >>  {
    >>  	struct kvm *kvm = vcpu->kvm;
    >> -	u64 offset;
    >> +	u64 offset, ns, elapsed;
    >> +	struct timespec ts;
    >>
    >>  	spin_lock(&kvm->arch.tsc_write_lock);
    >>  	offset = data - native_read_tsc();
    >> +	ktime_get_ts(&ts);
    >> +	monotonic_to_bootbased(&ts);
    >> +	ns = timespec_to_ns(&ts);
    >> +	elapsed = ns - kvm->arch.last_tsc_nsec;
    >> +
    >> +	/*
    >> +	 * Special case: an identical write to the TSC within 5 seconds of
    >> +	 * another CPU is interpreted as an attempt to synchronize
    >> +	 * (the 5 seconds is to accommodate host load / swapping).
    >> +	 *
    >> +	 * In that case, for a reliable TSC, we can match TSC offsets,
    >> +	 * or make a best guess using the kernel_ns value.
    >> +	 */
    >> +	if (data == kvm->arch.last_tsc_write && elapsed < 5ULL * NSEC_PER_SEC) {
    >> +		if (!check_tsc_unstable()) {
    >> +			offset = kvm->arch.last_tsc_offset;
    >> +			pr_debug("kvm: matched tsc offset for %llu\n", data);
    >> +		} else {
    >> +			u64 tsc_delta = elapsed * __get_cpu_var(cpu_tsc_khz);
    >> +			tsc_delta = tsc_delta / USEC_PER_SEC;
    >> +			offset += tsc_delta;
    >> +			pr_debug("kvm: adjusted tsc offset by %llu\n", tsc_delta);
    >> +		}
    >> +		ns = kvm->arch.last_tsc_nsec;
    >> +	}
    >> +	kvm->arch.last_tsc_nsec = ns;
    >> +	kvm->arch.last_tsc_write = data;
    >> +	kvm->arch.last_tsc_offset = offset;
    >
    > We'd have a false alarm here during a reset within 5 seconds of boot.
    > Does it matter? Easy to work around by forgetting the state during
    > reset.
    >

    Not forgetting, but ignoring: a reset within 5 seconds will not reset the
    TSC, which is normally fine. The problem is that one CPU could reset
    within the 5-second window and another slightly after it. Forgetting the
    state during reset is a good solution.
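
    For illustration, "forgetting the state during reset" could be as small as
    clearing the saved match state under the same lock, so a post-reset TSC
    write starts a new baseline instead of matching a stale one (a sketch only;
    the helper name and its reset-path call site are assumptions, not part of
    this patch):

        /* Hypothetical helper, invoked from the VM reset path. */
        static void kvm_clear_tsc_match_state(struct kvm *kvm)
        {
                spin_lock(&kvm->arch.tsc_write_lock);
                /* With last_tsc_nsec cleared, "elapsed" becomes the full host
                 * uptime, so the 5-second match window effectively never
                 * triggers for the first write after reset. */
                kvm->arch.last_tsc_nsec = 0;
                kvm->arch.last_tsc_write = 0;
                kvm->arch.last_tsc_offset = 0;
                spin_unlock(&kvm->arch.tsc_write_lock);
        }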

