From: John Bradford <>
Subject: Re: Are linux-fs's drive-fault-tolerant by concept?
Date: Mon, 21 Apr 2003 11:50:34 +0100 (BST)
> I say: we already have the needed information inside every fs, why not
> use it? No space wasted, no double information.
Bad block remapping in software would be useful only on relatively obscure devices which don't already do it. I consider it counterproductive, and a bad thing to do, on devices that already do it themselves.
Therefore, it is an ideal candidate for being done in a separate layer.
Your argument seems to be that if a write error is encountered, the simplest thing to do is to try another block, and mark the first one as unusable.
On devices which have a fixed physical area for each logical block, and where no write caching is done, your approach might be the best one.
However, modern operating systems often cache a lot of data to write out when the disk is idle. The filesystem will typically allocate this data to various blocks immediately. A write failure 30 seconds later would mean that the filesystem then has to change that allocation from the bad block to a new block. It's possible that there might not even be a new block available by that time. On disks which do their own bad block management, if one block re-allocation fails because there is no space left, it's quite possible that all future re-allocations will fail as well.
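To illustrate, here is a rough user-space sketch of what the flush path
would have to do under that scheme. This is not real kernel code, and
every name in it is invented for illustration only:

/*
 * The block is allocated at write() time, but the device error only
 * surfaces when the cache is flushed later, forcing a re-allocation
 * that may find no free block left.
 */
#include <stdio.h>
#include <stdbool.h>

#define NBLOCKS 8

static bool block_bad[NBLOCKS];   /* fs-level bad block table    */
static bool block_used[NBLOCKS];  /* fs-level allocation bitmap  */

/* Pretend device: block 0 has developed a defect. */
static int dev_write(int blk) { return blk == 0 ? -1 : 0; }

static int alloc_block(void)
{
        int i;
        for (i = 0; i < NBLOCKS; i++)
                if (!block_used[i] && !block_bad[i]) {
                        block_used[i] = true;
                        return i;
                }
        return -1;                /* filesystem is full */
}

/* Runs at flush time, perhaps 30 seconds after the allocation. */
static int flush_block(int blk)
{
        while (dev_write(blk) != 0) {
                block_bad[blk] = true;    /* never allocate it again  */
                block_used[blk] = false;
                blk = alloc_block();      /* change the allocation... */
                if (blk < 0)
                        return -1;        /* ...if a block is left    */
        }
        return 0;
}

int main(void)
{
        int blk = alloc_block();          /* allocation happens now   */
        printf("allocated block %d\n", blk);
        printf("flush %s\n", flush_block(blk) == 0 ? "ok" : "failed");
        return 0;
}

Note that when flush_block() returns -1, the write() that queued the
data has long since returned success, so there is no good place left
to report the failure.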
On devices which never report back bad blocks to the operating system, the space for the bad block table is wasted.
John.