 
Subject: Re: [PROPOSAL] Coping with random bit errors
H. Peter Anvin writes:
> By author: Richard Gooch <rgooch@atnf.CSIRO.AU>
> >
> > While the above scheme is not as robust as proper ECC memory, it has
> > the distinct advantage of being cheap (free:-) and should provide some
> > level of protection against random bit errors. I shudder to think what
> > other bit errors have crept into my source tree which don't prevent
> > compiling :-(
> > Anyway, I'd like to get some reaction from those who know more about
> > the page cache implementation as to what they think of this idea?
> >
>
> Hardly free... you're spending memory and *lots* of CPU cycles, which
> really drags down your performance/price ratio :(

I did say to do it only if the system is otherwise idle: say if no-one
has used the CPU for two time slices, then check a single page. When
looking for random bit errors, there is no need to race through
memory, since they don't happen very often. This would not noticeably
impact performance.
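
To make the pacing concrete, here is a minimal sketch in plain C.
Every name in it (on_timer_tick(), scrub_one_page(), idle_slices) is
invented for illustration, not a real kernel interface; it just models
"check one page only after two idle time slices":

/* Hypothetical pacing logic: verify a single clean page only after
 * the CPU has been idle for two consecutive time slices. All names
 * here are made up for illustration; this is not real kernel code. */

void scrub_one_page(void);          /* verify one clean page */

static unsigned int idle_slices;    /* slices since last CPU use */

void on_timer_tick(int cpu_was_idle)
{
        if (!cpu_was_idle) {
                idle_slices = 0;    /* someone used the CPU: back off */
                return;
        }
        if (++idle_slices >= 2) {
                scrub_one_page();   /* system looks idle: check one page */
                idle_slices = 0;    /* then wait for idleness again */
        }
}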

> The main problem with it, from a technical standpoint, is that unlike
> ECC all you know is that a page was corrupted, so you have to throw it
> out. If it was dirty, or in use, what do you do?

Nothing. This scheme is *only* for clean pages in the page cache (i.e.
those that should be identical to what is on disc). For people with lots
of RAM, most of which is taken up by page cache, this scheme should
work quite well.
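
For what it's worth, the verify-and-drop step itself might look
something like the sketch below. Again the names (struct cached_page,
drop_clean_page()) and the toy checksum are invented for illustration;
a real implementation would hang this off the page cache proper. The
key point is that a mismatch on a clean page is recoverable: just
evict the page and let the next access re-read the good copy from
disc.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical record for a clean page: the checksum was stored
 * when the page was last known good (e.g. just after it was read
 * in from disc). */
struct cached_page {
        const uint8_t *data;
        size_t len;
        uint32_t checksum;
};

void drop_clean_page(struct cached_page *pg);   /* hypothetical eviction */

/* Toy rotate-and-xor checksum; a real one might be a CRC. */
static uint32_t page_checksum(const uint8_t *data, size_t len)
{
        uint32_t sum = 0;
        size_t i;

        for (i = 0; i < len; i++)
                sum = ((sum << 1) | (sum >> 31)) ^ data[i];
        return sum;
}

/* Verify one clean page; on mismatch just drop it, so the next
 * access re-reads the good copy from disc. Dirty pages are never
 * checked this way, since there is no good copy to fall back on. */
void scrub_one_page_checksum(struct cached_page *pg)
{
        if (page_checksum(pg->data, pg->len) != pg->checksum)
                drop_clean_page(pg);
}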

Regards,

Richard....
