Date:    Thu, 8 Dec 2011 05:43:03 +0100
From:    Ingo Molnar <>
Subject: Re: [PATCH v4 0/7] x86: BSP or CPU0 online/offline
* Luck, Tony <tony.luck@intel.com> wrote:
> > The question is, how realistically does this report true CPU
> > troubles, statistically? The on-die cache might have the
> > highest transistor count, but it's not under nearly the same
> > thermal stress as functional units.
> >
> > If 90% of all hard CPU failures can be predicted that way
> > then it's probably useful. If it's only 20%, then not so
> > much.
>
> Intel doesn't release error rates - so I can't help with data
> here.
Well, precise data won't be needed - but we need *something* indicative to justify the feature - faith alone won't be enough.
Is there any third party research on this? I remember that Google released hard drive failure stats a few years ago, maybe there's some approximate data about CPU "soft" failure rates.

Even anecdotal data and speculation/estimation would be a start - it could be contradicted later on by more precise data, once people start using the "generic CPU hot-unplug" feature. (Which is what this feature should really be named, instead of the 'BSP unplug' name.)
> > Also, it's still all theoretical until there's systems out
> > there where the CPU socket is physically hotpluggable. If
> > there's such plans in the works then sure, theory becomes
> > reality and then it's all useful - and then we can do these
> > patches (and more).
>
> No - physical removal of the cpu is not a requirement for this
> to be useful. [...]
Indeed, you are right, i stand corrected there.
Okay, i'm convinced, i guess we can do this.
> [...]
>
> Physical removal of the cpu is a problem for Linux since
> Nehalem (when memory controller moved on-die). Take away the
> cpu, and you lose access to the memory connected to that
> socket - and we don't have general solutions for memory
> removal.
It's technically possible, but it's not the easiest of features - also i suspect Linus would object to naively breaking the semi-linear kernel mapping we have today ;-)
But if someone implements that in a sane way, using at least 2MB granular mappings [or maybe MAX_ORDER granular mappings], which preserves the use of 2MB TLB entries, and uses a quick hash table for __pa() and __va(), i would definitely take a look at how ugly it ends up being. Our hibernation code already gives us a generic way to quiesce all DMA activity on the system, so most of the building blocks are in place.
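[Editor's note: a minimal, purely illustrative sketch of the lookup side of such a hash-table-based __va()-style translation, at 2MB granularity. This is not kernel code and not part of the patch series; all names (pv_entry, pv_insert, pv_lookup_va, PV_GRANULE, ...) are made up for this example, and the real __pa() direction would need a second table keyed by virtual address.]

/*
 * Sketch: an open-addressed hash table mapping 2MB-aligned physical
 * granules to their virtual mappings, so a __va()-style lookup no
 * longer assumes a single linear offset.
 */
#include <stdint.h>
#include <stdio.h>

#define PV_GRANULE	(2ULL * 1024 * 1024)	/* 2MB granularity */
#define PV_HASH_BITS	10
#define PV_HASH_SIZE	(1UL << PV_HASH_BITS)

struct pv_entry {
	uint64_t phys;		/* 2MB-aligned physical address */
	uint64_t virt;		/* 2MB-aligned virtual address  */
	int used;
};

static struct pv_entry pv_hash[PV_HASH_SIZE];

static unsigned long pv_hash_idx(uint64_t phys)
{
	/* hash on the 2MB granule number, not the raw address */
	return (unsigned long)(phys / PV_GRANULE) & (PV_HASH_SIZE - 1);
}

static int pv_insert(uint64_t phys, uint64_t virt)
{
	unsigned long i = pv_hash_idx(phys), probes;

	for (probes = 0; probes < PV_HASH_SIZE; probes++) {
		struct pv_entry *e = &pv_hash[(i + probes) & (PV_HASH_SIZE - 1)];

		if (!e->used) {
			e->phys = phys & ~(PV_GRANULE - 1);
			e->virt = virt & ~(PV_GRANULE - 1);
			e->used = 1;
			return 0;
		}
	}
	return -1;	/* table full */
}

/* __va()-style lookup: physical -> virtual, 0 if not mapped */
static uint64_t pv_lookup_va(uint64_t phys)
{
	unsigned long i = pv_hash_idx(phys), probes;
	uint64_t base = phys & ~(PV_GRANULE - 1);

	for (probes = 0; probes < PV_HASH_SIZE; probes++) {
		struct pv_entry *e = &pv_hash[(i + probes) & (PV_HASH_SIZE - 1)];

		if (e->used && e->phys == base)
			return e->virt + (phys & (PV_GRANULE - 1));
	}
	return 0;
}

int main(void)
{
	/* map one 2MB granule, then look up an address inside it */
	pv_insert(0x100000000ULL, 0xffff880100000000ULL);
	printf("va = 0x%llx\n",
	       (unsigned long long)pv_lookup_va(0x100012345ULL));
	return 0;
}

[The point of the 2MB granularity is that unplugging a socket's memory only punches holes at large-page boundaries, so the remaining direct mapping can keep using 2MB TLB entries; the hash table absorbs the no-longer-linear translation.]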
Thanks,
Ingo