Subject: Re: [PATCH v2] x86: Reduce clock calibration time during slave cpu startup

* Matthew Garrett <mjg@redhat.com> wrote:

> On Fri, Aug 05, 2011 at 11:38:36PM +0200, Ingo Molnar wrote:
>
> > Well, it still uses heuristics: it assumes frequency is the same
> > when the cpuid data tells us that two CPUs are on the same
> > socket, right?
>
> If we only assume that when we have a constant TSC then it's a
> pretty safe assumption - the delay loop will be calibrated against
> the TSC, and the TSC will be constant across the package regardless
> of what frequency the cores are actually running at.

The delay loop might be calibrated against the TSC, but the amount of
real delay we get when we loop 100,000 times will be frequency
dependent.
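
(A minimal sketch of the point, in plain C rather than kernel code: the
calibrated iteration count is fixed, so the wall-clock time the loop
burns shrinks or grows with whatever frequency the core happens to be
running at.)

	/*
	 * Busy-wait by spinning a fixed number of iterations.  The
	 * count is chosen at calibration time; if the core later runs
	 * at half the calibration frequency, the same count takes
	 * roughly twice the wall-clock time.
	 */
	static void loop_delay(unsigned long loops)
	{
		while (loops--)
			__asm__ __volatile__("" ::: "memory");
	}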

What we probably want is the most conservative udelay() calibration:
an lpj value measured at the highest possible frequency - this way the
real delay can only ever be longer than requested, so hardware
components can never be overclocked by a driver.
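
(As a rough worked example with made-up numbers: if lpj is measured
while the core runs at its 3 GHz maximum, udelay(100) spins for however
many iterations take 100 usecs at 3 GHz; if the core is actually
running at 1 GHz at that point, the same iteration count takes
~300 usecs - longer than asked for, never shorter.)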

Or does udelay() scale with the current frequency of the CPU?
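
(For reference, a minimal userspace-style sketch of the other model - a
delay that spins on the TSC itself rather than on a loop count.  Purely
illustrative, not the kernel's udelay() implementation: on constant_tsc
hardware the TSC ticks at a fixed rate, so such a spin takes the same
wall-clock time no matter what frequency the core is running at.)

	#include <x86intrin.h>

	/* Spin until the requested number of TSC ticks has elapsed. */
	static void tsc_delay(unsigned long long ticks)
	{
		unsigned long long start = __rdtsc();

		while (__rdtsc() - start < ticks)
			__asm__ __volatile__("" ::: "memory");
	}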

Thanks,

Ingo

