From: braam <braam@clusterfs.com>
Subject: RE: [PATCH/RFC] Lustre VFS patch
Date: Tue, 25 May 2004

Hi Lars, 

> -----Original Message-----
> From: Lars Marowsky-Bree [mailto:lmb@suse.de]
> Sent: Tuesday, May 25, 2004 6:53 PM
> To: braam; 'Jens Axboe'
> Cc: torvalds@osdl.org; akpm@osdl.org;
> linux-kernel@vger.kernel.org; 'Phil Schwan'
> Subject: Re: [PATCH/RFC] Lustre VFS patch
>
> On 2004-05-25T16:21:29,
> braam <braam@clusterfs.com> said:
>
> > I think, to answer your question: ...
> > > > If we were to return errors (which, I agree, _seems_ much more
> > > > sane, and we _did_ try that for a while!), then there is a good
> > > > chance, namely immediately when something is flushed to disk,
> > > > that the system will detect the errors and not continue to
> > > > execute transactions, making consistent testing of our replay
> > > > mechanisms impossible.
> > So: we can use the flags, but we cannot return the errors.
>
> Maybe I am missing something here, but is this testing not
> somewhat unrealistic then? In the general case, the system in
> production _will_ report an error and not silently throw away
> the writes.

I would not say "unrealistic": it is a harsh way to generate, systematically
and consistently, failure patterns that otherwise only show up when you win
races with the flushing daemons.

Semantically, what we add is a new flag:
IGNORE_IO_ERRORS
The ioctl in our patch has the same effect as setting IGNORE_IO_ERRORS |
RDONLY.

Is it really terrible to have that flag?
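
To make the intended semantics concrete, here is a toy sketch of what
honouring such a flag in a write-completion path could look like. The
struct, the flag values, and the helper name are all made up for the
sketch; they are not the actual patch:

#include <stdio.h>

#define RDONLY           0x1    /* assumed flag values, illustrative */
#define IGNORE_IO_ERRORS 0x2

struct sb_state {
        unsigned flags;
        unsigned errors_seen;
};

/* With IGNORE_IO_ERRORS set, an I/O error is counted but the write is
 * treated as successful, so the filesystem keeps executing transactions
 * and the replay machinery can be exercised deterministically. */
static int complete_write(struct sb_state *sb, int io_err)
{
        if (io_err && (sb->flags & IGNORE_IO_ERRORS)) {
                sb->errors_seen++;      /* note the error... */
                return 0;               /* ...but pretend success */
        }
        return io_err;
}

int main(void)
{
        struct sb_state sb = { .flags = IGNORE_IO_ERRORS | RDONLY };
        int ret = complete_write(&sb, -5 /* simulated EIO */);

        printf("write -> %d (errors seen: %u)\n", ret, sb.errors_seen);
        return 0;
}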

> > Some people find it very convenient to have this available, but if
> > the opinion is that it is better to let development teams manage
> > their own testing infrastructure, that is acceptable to me.
>
> Yes, this is very "convenient" and actually, "some people"
> think it is absolutely mandatory that the kernel which is
> used for production sites is 1:1 bit-wise identical to the
> one used for load & stress testing, otherwise the testing is
> void to a certain degree...
>
> Maybe you could fix this in the test harness / Lustre itself
> instead and silently discard the writes internally if told so
> via an (internal) option, instead of needing a change deeper
> down in the IO layer, or use a DM target which can give you
> all the failure scenarios you need?
>
> In particular the last one - a fault-injection DM target -
> seems like a very valuable tool for testing in general, but
> the Lustre-internal approach may be easier in the long run.

Yes, a (virtual) block device can do this easily. If Jens can accept the
new flag, that is easiest; if not, we will hack up a DM target in due course.
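
For illustration, here is a toy userspace model of the deterministic fault
injection such a DM target would provide: every Nth write fails with EIO.
All names and the failure policy are invented for the sketch; a real target
would implement this in the device-mapper map path:

#include <stdio.h>
#include <stddef.h>

struct faulty_dev {
        unsigned interval;      /* fail every Nth write */
        unsigned count;
};

static int faulty_write(struct faulty_dev *d, const void *buf, size_t len)
{
        (void)buf;
        (void)len;
        if (++d->count % d->interval == 0)
                return -5;      /* simulate EIO from the media */
        return 0;               /* write "reaches" the device */
}

int main(void)
{
        struct faulty_dev d = { .interval = 4, .count = 0 };
        char sector[512] = { 0 };

        for (int i = 0; i < 8; i++)
                printf("write %d -> %d\n", i,
                       faulty_write(&d, sector, sizeof sector));
        return 0;
}

The point is the determinism: unlike racing the flushing daemons, the Nth
write always fails, so a failure scenario can be reproduced exactly.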

- Peter -

