Subject: Re: Hardware Error Kernel Mini-Summit

Hi Eric,

> I'm not ready to believe the average person that is running linux
> is too stupid to understand the difference between a hardware
> error and a software error.

Experience disagrees with you (maybe not the average person, but at
least a significant portion of users don't understand the difference).

Also, again, there are other reasons for it today.

>
> > But there's more to it now:
> >
> >> If your system isn't broken correctable errors are rare. People look
> >
> > Actually the more memory you have the more common they are.
> > And the trend is to more and more memory.
>
> The error rate should not be fixed per bit but should be roughly fixed
> per DIMM. If the error rate over time is fixed per bit we are in deep
> trouble.

Error rates of good DIMMs scale roughly with the number of transistors.
That's not the only influence, but it's a major one.
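To make the scaling point concrete, here's a back-of-the-envelope
sketch (the FIT number is purely an assumption for illustration, not a
measurement; real rates vary a lot between DIMMs and environments):

#include <stdio.h>

int main(void)
{
	/* assumed corrected-error rate per gigabit in FIT
	   (failures per 10^9 device hours); illustrative only */
	double fit_per_gbit = 1000.0;
	double hours_per_year = 24.0 * 365.0;

	for (int gb = 64; gb <= 1024; gb *= 4) {
		double gbits = gb * 8.0;
		double per_year = gbits * fit_per_gbit * hours_per_year / 1e9;
		printf("%4d GB -> ~%.1f corrected errors/year\n", gb, per_year);
	}
	return 0;
}

Whatever the exact per-bit rate is, doubling the installed memory
roughly doubles the number of events you should expect to see.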

> > Really to do anything useful with them you need trends
> > and automatic actions (like predictive page offlining)
>
> Not at all, and I don't have a clue where you start thinking
> predictive page offlining makes the least bit of sense. Broken
> or even weak bits are rarely the common reason for ECC errors.

There are various studies that disagree with you on that.

>
> > A log isn't really a good format for that
>
> A log is a fine format for realizing you have a problem. A

A low steady rate of corrected errors on a large system is expected.
In fact, if you look at the memory error log of a large system
(towards terabytes of RAM), it nearly always has some memory-related
events.

In this case a log is not really useful. What you need are sensible
thresholds and a good summary.
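As a rough sketch of what I mean (a userspace pseudo-daemon, not the
real mcelog code; the names and the 10-errors-per-24h threshold are
made up for illustration): count corrected errors per DIMM, report
only when a window threshold is crossed, and keep totals for a
summary.

#include <stdio.h>
#include <time.h>

#define MAX_DIMMS	64
#define CE_THRESHOLD	10		/* corrected errors ... */
#define CE_WINDOW	(24 * 3600)	/* ... per 24h before we complain */

struct dimm_state {
	unsigned long total;		/* lifetime count for the summary */
	unsigned long in_window;	/* count inside the current window */
	time_t window_start;
};

static struct dimm_state dimms[MAX_DIMMS];

/* called once per corrected error event instead of logging it */
static void ce_account(int dimm, time_t now)
{
	struct dimm_state *d = &dimms[dimm];

	d->total++;
	if (now - d->window_start > CE_WINDOW) {
		d->window_start = now;
		d->in_window = 0;
	}
	if (++d->in_window == CE_THRESHOLD)
		printf("DIMM %d: %d corrected errors in 24h, schedule replacement\n",
		       dimm, CE_THRESHOLD);
}

/* periodic one-line-per-DIMM summary instead of one line per event */
static void ce_summary(void)
{
	int i;

	for (i = 0; i < MAX_DIMMS; i++)
		if (dimms[i].total)
			printf("DIMM %d: %lu corrected errors total\n",
			       i, dimms[i].total);
}

int main(void)
{
	int i;

	/* fake a flood of errors on one DIMM: one threshold message and
	   one summary line come out, not 1000 log lines */
	for (i = 0; i < 1000; i++)
		ce_account(3, time(NULL));
	ce_summary();
	return 0;
}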

> - Errors that occur frequently. That is broken hardware of one time or
> another. I want to know about that so I can schedule down time to replace
> my memory before I get an uncorrected ECC error. Errors of this kind
> are likely happening frequently enough as to impact performance.

Same issue here: if something is truly broken, it floods
you with errors.

For one, this costs a lot of time to process, and it does not actually
tell you anything useful, because most errors in a flood are similar.

Basically you don't care whether you have 100 or 1000 errors,
and you definitely don't want all of them filling up
your disk and using up your CPU.

Again, a threshold with an action is much more useful here.
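
For the "action" part, one possible sketch (assuming a kernel with
CONFIG_MEMORY_FAILURE and the
/sys/devices/system/memory/soft_offline_page interface; error handling
trimmed for brevity) is to offline a page once its per-page
corrected-error count crosses the threshold:

#include <stdio.h>
#include <stdint.h>

/* Ask the kernel to migrate the page's contents elsewhere and stop
 * using the page frame at the given physical address. */
static int soft_offline(uint64_t phys_addr)
{
	FILE *f = fopen("/sys/devices/system/memory/soft_offline_page", "w");

	if (!f)
		return -1;
	fprintf(f, "%#llx\n", (unsigned long long)phys_addr);
	return fclose(f);
}

int main(void)
{
	/* placeholder address; in a real daemon this comes from the
	   corrected-error record once the per-page threshold triggers */
	return soft_offline(0x12340000ULL) ? 1 : 0;
}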

-Andi
--
ak@linux.intel.com -- Speaking for myself only.

