Date: Thu, 30 May 1996 01:41:41 -0700
From: "Leonard N. Zubkoff" <>
Subject: Re: pre2.0.9 (was Re: CD-ROM Access Crashes)
Date: Thu, 30 May 1996 11:25:28 +0300 (EET DST)
From: Linus Torvalds <torvalds@cs.Helsinki.FI>
On Wed, 29 May 1996, Leonard N. Zubkoff wrote:
> I have verified that dd'ing a CD-R no longer kills my system nor does
> reading a bad block. I was able to read all the good files on my test CD,
> with the bad ones getting I/O errors. It took 1.5 hours to read it all,
> but there were no resets and the system worked fine both during and after
> the test. However, it does look like there are repeated requests for the
> same block, as in this excerpt:
If you just "dd" the raw device, you'll be using the old buffer cache for the blocks. Or did you dd the files from a mounted CD?
I know. Both.
The "dd" command was from the raw device using a CD-R. That was one of the bugs I reported.
The "reading all the files" was from a mounted CD, which uses the page cache. That was the other bug I discovered: it shows up when a bad spot is hit. My test CD has large black filled circles made with a Sharpie pen.
Anyway, in both cases it's entirely ok to get multiple reads for bad blocks. In fact, the page cache _always_ tries to re-read a block at least twice - it re-tries the operation that failed before it returns an error message.
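As a rough user-space illustration of that retry behaviour (a sketch only; the function name, the pread()-based approach, and the single-retry policy here are stand-ins, not the actual page cache code):

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Retry a failed sector read once before passing the error on,
 * mirroring the page cache behaviour described above. */
static ssize_t read_sector_retry(int fd, void *buf, size_t len, off_t off)
{
        ssize_t n = pread(fd, buf, len, off);   /* first attempt */
        if (n < 0)
                n = pread(fd, buf, len, off);   /* one retry */
        return n;       /* negative only if both attempts failed */
}

int main(int argc, char **argv)
{
        char sector[2048];      /* one CD-ROM data sector */
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <device>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (read_sector_retry(fd, sector, sizeof(sector), 0) < 0)
                perror("read");         /* both attempts failed */
        close(fd);
        return 0;
}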
OK.
[snip snip]
> What's definitely not implemented as yet is for a SCSI command that fails
> with a MEDIUM ERROR to be processed as a partial success and a partial
> failure. The entire command is treated as having failed.
This is bad for performance, and it can result in strange behaviour: if the IO request contained requests from two different file reads that were merged at the IO level, they both fail even though the error was potentially in just one of the files. However, if there are IO errors you shouldn't really consider your filesystem reliable anyway, so I don't think this is critical (and the re-try might actually sort this case out correctly too).
Agreed. It's been this way forever. You may recall that Andries was working on a fix and sent some code over a month ago, but never completed it. There were several flaws in its handling of bounce buffers and differing sector sizes, and I am working on a correct version.
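A minimal sketch of the bookkeeping such a fix needs, assuming the MEDIUM ERROR sense data identifies the failing sector (the struct and function here are hypothetical illustrations, not the actual SCSI layer types):

#include <stdio.h>

/* Hypothetical descriptor for a run of sectors merged from one or
 * more file reads into a single SCSI command. */
struct io_request {
        unsigned long start_sector;
        unsigned long nr_sectors;
};

/* Given the failing sector from the sense data, return how many
 * leading sectors of the request actually succeeded.  Those can be
 * completed normally; only the tail needs to be failed. */
static unsigned long sectors_ok(const struct io_request *req,
                                unsigned long bad_sector)
{
        if (bad_sector <= req->start_sector)
                return 0;                       /* failed at the start */
        if (bad_sector >= req->start_sector + req->nr_sectors)
                return req->nr_sectors;         /* error past this request */
        return bad_sector - req->start_sector;  /* partial success */
}

int main(void)
{
        struct io_request req = { 100, 16 };    /* sectors 100-115 */

        /* A MEDIUM ERROR at sector 108 means sectors 100-107 were
         * read successfully and only 108-115 should fail. */
        printf("good sectors: %lu of %lu\n",
               sectors_ok(&req, 108), req.nr_sectors);
        return 0;
}

With this split, a merged request that straddles a bad spot only fails the file read that actually covered the bad sector, instead of failing both.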
> In addition, it will still signal
> an I/O error when a bad sector is encountered, even if we're really at the
> logical end of the CD-R.
Not really a problem, except if the read-ahead code then results in part of the _good_ sectors also being marked bad (due to the previous code). We should probably disable read-ahead in the old buffer cache in this case (or just fix the nontrivial problem with partial failures).
You must have missed some of the analysis I sent yesterday. The underlying problem with reading raw device CD-Rs (and perhaps some CD-ROMs as well) is that we base how much to read on the capacity of the device as returned by a READ CAPACITY command. Unfortunately, that value is imprecise; it may be up to 75 sectors greater than the last readable sector. Therefore, we need to handle the last readable sector correctly, and then translate a MEDIUM ERROR in that final window into an end of file. I'm working on it.
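A sketch of the end-of-media test this implies, again assuming the sense data gives the failing sector; the 75-sector window matches the worst case described above (one second of CD frames), and the constant and function names are illustrative only:

#include <stdbool.h>
#include <stdio.h>

#define CAPACITY_SLOP 75        /* worst-case READ CAPACITY overshoot */

/* A MEDIUM ERROR within the final slop window below the reported
 * capacity is treated as end of file rather than a hard I/O error. */
static bool medium_error_is_eof(unsigned long bad_sector,
                                unsigned long reported_capacity)
{
        return bad_sector + CAPACITY_SLOP >= reported_capacity;
}

int main(void)
{
        /* The drive reports 333000 sectors but the disc really ends
         * at 332950: the error falls inside the window, so EOF. */
        printf("%s\n",
               medium_error_is_eof(332950, 333000) ? "EOF" : "I/O error");
        return 0;
}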
Leonard