Date:    2010-05-24
From:    Russ Anderson <rja@sgi.com>
Subject: Re: Hardware Error Kernel Mini-Summit

    On Wed, May 19, 2010 at 11:03:24AM +0200, Andi Kleen wrote:
    > Hi Eric,
    >
    > > I'm not ready to believe the average person that is running linux
    > > is too stupid to understand the difference between a hardware
    > > error and a software error.
    >
    > Experience disagrees with you (not sure about the average user,
    > but at least there's a significant portion)
    >
    > Also again today there are other reasons for it.

    I agree with Andi. While there is a wide range of users, the
    vast majority know little about the hardware they are running
    on. Even in commercial settings, where users/admins are better
    educated, there is little time to do detailed error analysis.

    The more errors are detected/analyzed/corrected/recovered, the
    better it is for everyone.


    > > > Really to do anything useful with them you need trends
    > > > and automatic actions (like predictive page offlining)
    > >
    > > Not at all, and I don't have a clue where you start thinking
    > > predictive page offlining makes the least bit of sense. Broken
    > > or even weak bits are rarely the common reason for ECC errors.
    >
    > There are various studies that disagree with you on that.

    Having the infrastructure to automatically off-line pages
    is a good thing. The details of where to set the predictive
    threshold will likely be hardware-specific (different DIMM
    types fail at different rates), so it needs to be adjustable.
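
    For illustration only (nothing from this thread, and assuming the
    hwpoison soft-offline support and its sysfs file are present on
    the system), the off-lining action itself can already be driven
    from user space; the policy deciding when to invoke it is the
    hardware-specific part:

    /*
     * Sketch: soft-offline one page by physical address, the way a
     * predictive-offlining daemon might once a per-page corrected-error
     * threshold is exceeded.  Assumes CONFIG_MEMORY_FAILURE provides
     * /sys/devices/system/memory/soft_offline_page.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define SOFT_OFFLINE_FILE "/sys/devices/system/memory/soft_offline_page"

    static int soft_offline_page(unsigned long long paddr)
    {
        FILE *f = fopen(SOFT_OFFLINE_FILE, "w");

        if (!f) {
            perror(SOFT_OFFLINE_FILE);
            return -1;
        }
        /* The kernel takes the physical address of the page to offline. */
        fprintf(f, "0x%llx\n", paddr);
        return fclose(f);
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <physical-address>\n", argv[0]);
            return EXIT_FAILURE;
        }
        return soft_offline_page(strtoull(argv[1], NULL, 0)) ? 1 : 0;
    }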

    > > > A log isn't really a good format for that
    > >
    > > A log is a fine format for realizing you have a problem. A
    >
    > A low steady rate of corrected errors on a large system
    > is expected. In fact, if you look at the memory error log
    > of a large system (towards TBs) it nearly always has some
    > memory-related events.

    Yes, there are certainly examples of that.

    > In this case a log is not really useful. What you need
    > is useful thresholds and a good summary.

    The larger the system, the more important a good summary is.
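
    As a rough sketch of what such a summary could look like (assuming
    the platform's EDAC driver exposes the per-csrow counters under
    /sys/devices/system/edac/mc/), rolling up ce_count per csrow is
    already far more readable than the raw event stream:

    /*
     * Sketch: summarize corrected-error counts per EDAC csrow instead of
     * reading every individual log entry.  Assumes the (driver-dependent)
     * sysfs layout /sys/devices/system/edac/mc/mc*/csrow*/ce_count.
     */
    #include <glob.h>
    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        glob_t g;
        unsigned long total = 0;

        if (glob("/sys/devices/system/edac/mc/mc*/csrow*/ce_count",
                 0, NULL, &g) != 0) {
            fprintf(stderr, "no EDAC csrow counters found\n");
            return 1;
        }
        for (size_t i = 0; i < g.gl_pathc; i++) {
            FILE *f = fopen(g.gl_pathv[i], "r");
            unsigned long ce = 0;

            if (f && fscanf(f, "%lu", &ce) == 1)
                printf("%-55s %lu\n", g.gl_pathv[i], ce);
            if (f)
                fclose(f);
            total += ce;
        }
        printf("total corrected errors: %lu\n", total);
        globfree(&g);
        return 0;
    }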

    > > - Errors that occur frequently. That is broken hardware of one kind or
    > > another. I want to know about that so I can schedule down time to replace
    > > my memory before I get an uncorrected ECC error. Errors of this kind
    > > are likely happening frequently enough to impact performance.
    >
    > Same issue here: if something is truly broken it floods
    > you with errors.
    >
    > First this costs a lot of time to process and it does not
    > actually tell you anything useful because most errors in a flood
    > are similar.
    >
    > Basically you don't care if you have 100 or 1000 errors,
    > and you definitely don't want all of the errors filling up
    > your disk and using up your CPU.
    >
    > Again a threshold with an action is much more useful here.

    Yes, good points.
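
    As a purely illustrative sketch (the threshold, window, and per-DIMM
    bucket here are made-up numbers, not a proposed kernel interface),
    a count-and-threshold scheme turns a flood of similar corrected
    errors into one action per window instead of thousands of log lines:

    /*
     * Sketch: count-and-threshold instead of logging every corrected error.
     * The error source, threshold, and window are illustrative only; a real
     * implementation would hook into the platform's error reporting path.
     */
    #include <stdio.h>
    #include <time.h>

    #define CE_THRESHOLD    100     /* act after this many errors ... */
    #define CE_WINDOW_SECS  3600    /* ... within this window */

    struct ce_bucket {
        unsigned long count;        /* corrected errors seen in this window */
        time_t window_start;        /* start of the current counting window */
    };

    /*
     * Returns 1 exactly once per window when the threshold is crossed, so a
     * flood of similar errors triggers one action, not one log line each.
     */
    static int ce_record(struct ce_bucket *b)
    {
        time_t now = time(NULL);

        if (now - b->window_start >= CE_WINDOW_SECS) {
            b->window_start = now;
            b->count = 0;
        }
        return ++b->count == CE_THRESHOLD;
    }

    int main(void)
    {
        struct ce_bucket dimm0 = { .count = 0, .window_start = time(NULL) };

        /* Simulate a flood of 1000 corrected errors on one DIMM. */
        for (int i = 0; i < 1000; i++) {
            if (ce_record(&dimm0))
                printf("threshold hit: schedule page offline / DIMM replacement\n");
        }
        return 0;
    }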

    --
    Russ Anderson, OS RAS/Partitioning Project Lead
    SGI - Silicon Graphics Inc rja@sgi.com

