Subject: Re: Aerospace and linux
On Thu, Jun 10, 2010 at 12:38:10PM -0600, Brian Gordon wrote:
> > It's also a serious consideration for standard servers.
> Yes. Good point.
>
> > On server class systems with ECC memory hardware does that.
>
> > Normally server class hardware handles this and the kernel then reports
> > memory errors (e.g. through mcelog or through EDAC)
>
> Agreed. EDAC is a good and sane solution and most companies do this.

Sorry, but do you mean ECC?

IMHO EDAC is not a good solution for error reporting (though I'm biased
because I work on a better one).

> Some do not due to naivety or cost reduction. EDAC doesn't cover
> processor registers and I have fairly good solutions on how to deal
> with that in tiny "home-grown" tasking systems.

mcelog covers OS-visible processor registers on x86 systems.

If your hardware doesn't support it, it's hard to do in the general
case, although special cases are always possible.
>
> > Lower end systems which are optimized for cost generally ignore the
> > problem though and any flipped bit in memory will result
> > in a crash (if you're lucky) or silent data corruption (if you're unlucky)
>
> Right! And this is the area that I am interested in. Some people
> insist on lowering the cost of the hardware without considering these
> issues. One thing I want to do is to be as diligent as possible (even
> in these low cost situations) and do the best job I can in spite of
> the low cost hardware.

AFAIK there's no support for this in a standard Linux kernel.

That is, some architectures do scrubbing in software,
but the basic ECC implementation is still in hardware.
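
Purely as an illustration, not code from any kernel, a minimal
userspace sketch of the scrubbing idea: read every cache line of a
region so that ECC hardware, if present, gets a chance to detect and
correct a latent single-bit error before a second one makes the word
uncorrectable. Without ECC a pass like this detects nothing.

#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64

/* Touch every cache line of a region; the volatile pointer keeps the
 * compiler from optimizing the reads away. Detection itself is the
 * ECC hardware's job, this only makes sure each line gets read. */
void scrub_region(const void *base, size_t len)
{
        const volatile uint8_t *p = base;
        size_t off;

        for (off = 0; off < len; off += CACHE_LINE)
                (void)p[off];
}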

In general I suspect you'll need some application-specific
strategy if your hardware doesn't help you here.

Having good hardware definitely helps; software is generally
not happy if it cannot trust its memory enough.

It's a bit like a human with no reliable air supply.

That is, the existing memory error handling mechanisms (like hwpoison)
assume events are reliably detected and relatively rare.
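
hwpoison does come with a software injection hook that is useful for
testing such recovery paths. A sketch, assuming a kernel with
CONFIG_MEMORY_FAILURE and a process with CAP_SYS_ADMIN:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_HWPOISON
#define MADV_HWPOISON 100        /* may be missing from older headers */
#endif

int main(void)
{
        long pagesize = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;
        memset(p, 0xaa, pagesize);

        /* Pretend the hardware reported an uncorrected error here;
         * further references are handled like a real memory failure
         * (typically SIGBUS). */
        if (madvise(p, pagesize, MADV_HWPOISON) < 0)
                perror("madvise(MADV_HWPOISON)");

        return 0;
}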

>
> So, some pages of RAM are going to be read-only and the data in those
> pages came from some source (file system?). Can anyone describe a
> high-level strategy to occasionally provide some coverage of this data?

Just for block data there's some support for checksumming,
e.g. block integrity (needs special support in the device)
or file systems (e.g. btrfs).

However they all normally assume memory is reliable and
are more focused on errors coming from storage.
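
To give a feel for what that checksumming amounts to, here is a sketch
of per-block verification with a bitwise CRC-32C, the Castagnoli
polynomial btrfs uses (btrfs' real implementation is different and
much faster; the 4K block size is just an assumption):

#include <stddef.h>
#include <stdint.h>

/* Minimal bitwise CRC-32C; fine for illustration, too slow for
 * real use. */
static uint32_t crc32c(const void *data, size_t len)
{
        const uint8_t *p = data;
        uint32_t crc = 0xffffffff;
        size_t i;
        int bit;

        for (i = 0; i < len; i++) {
                crc ^= p[i];
                for (bit = 0; bit < 8; bit++)
                        crc = (crc >> 1) ^ ((crc & 1) ? 0x82f63b78 : 0);
        }
        return ~crc;
}

/* Verify a 4K block against its stored checksum; zero means
 * corruption somewhere between medium and memory. */
int block_ok(const void *block, uint32_t stored)
{
        return crc32c(block, 4096) == stored;
}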

>
> So far I have thought about page descriptors adding an MD5 hash
> whenever they are read-only and first being "loaded/mapped?" and then
> a background daemon could occasionally verify.

In theory btrfs or block integrity could probably be extended
to regularly re-check the page cache. It would not be trivial.

But to really catch errors before use you would need to recheck on
every access, and that's hard (or rather extremely slow) in some cases
(e.g. mmap).
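
A userspace approximation of your daemon idea, names made up for
illustration and using the crc32c() from the previous sketch instead
of MD5 (real page cache coverage would have to live in the kernel):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

uint32_t crc32c(const void *data, size_t len);  /* as sketched above */

struct guarded_region {
        const void *base;        /* read-only mapping to watch */
        size_t len;
        uint32_t sum;            /* checksum taken at setup time */
};

/* Background verifier: re-hash the region forever and complain on
 * any mismatch. The 60s interval is arbitrary. */
static void *verify_thread(void *arg)
{
        struct guarded_region *r = arg;

        for (;;) {
                sleep(60);
                if (crc32c(r->base, r->len) != r->sum)
                        fprintf(stderr, "bit flip in r/o region\n");
        }
        return NULL;
}

/* Checksum a region once when it becomes read-only, then start the
 * verifier on it. */
void guard_region(struct guarded_region *r, const void *base, size_t len)
{
        pthread_t tid;

        r->base = base;
        r->len = len;
        r->sum = crc32c(base, len);
        pthread_create(&tid, NULL, verify_thread, r);
}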

And this still wouldn't help with r/w memory. On most
workloads r/o (that is, clean) memory is only a small fraction of the
active memory.

-Andi

--
ak@linux.intel.com -- Speaking for myself only.

