    Subject: Re: [PATCH] remove 2TB block device limit
    On Friday 17 May 2002 00:54, Andreas Dilger wrote:
    > A minor question is whether to cap it at 65536 blocks/group or 65528?
    > (The number of blocks per group must be a multiple of 8).
    > The current layout is such that you will _always_ have at least 3
    > blocks in use for each group. However, if we implement Ted's
    > "metagroup" layout (which puts all of a group's bitmaps/itable blocks
    > in the first group of its block of group descriptors) then there could
    > be cases where a group has no blocks in use, and the free count will
    > overflow.
    > Having the upper limit at 65536 is aesthetically pleasing, and it aligns
    > nicely with LVM (which allocates chunks in power-of-two sizes), but may
    > preclude changing such a filesystem to the metagroup layout without a
    > larger effort on the resizer's part. I'll go with 65528 I guess.

    I like 65536 as well, but your slightly lower limit is easy to relax
    later if the metagroup design changes, and relaxing it would not
    require a compatibility flag, while tightening it would be a major
    pain.
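
    For the record, the overflow Andreas describes lives in the on-disk
    group descriptor, whose per-group free-blocks count is only 16 bits
    wide. A sketch of the classic layout (compare include/linux/ext2_fs.h):

        struct ext2_group_desc {
                __u32   bg_block_bitmap;        /* block # of block bitmap */
                __u32   bg_inode_bitmap;        /* block # of inode bitmap */
                __u32   bg_inode_table;         /* first inode table block */
                __u16   bg_free_blocks_count;   /* free blocks in group */
                __u16   bg_free_inodes_count;   /* free inodes in group */
                __u16   bg_used_dirs_count;     /* directories in group */
                __u16   bg_pad;
                __u32   bg_reserved[3];
        };

    A completely empty 65536-block group would need bg_free_blocks_count
    == 65536, one more than a __u16 can hold, while 65528 is the largest
    multiple of 8 (the block bitmap is byte-granular, hence that rule)
    that keeps the count representable even for an empty group.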

    > Note that going to a metagroup layout would also grow the distance
    > between a group's itable and its data blocks quadratically with
    > blocksize (the number of group descriptors that fit into a block
    > grows with blocksize, and so does the number of blocks per group),
    > but at least it is not cubic growth. That said, the metagroup
    > layout is probably only useful for cases where you _know_ you want
    > huge files (in the multi-GB range) and locality of data blocks to
    > the inode's own block is irrelevant.
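
    To put rough numbers on that quadratic growth, here is a
    back-of-the-envelope sketch, assuming the 32-byte group descriptor
    above and one bitmap block per group (the helper is hypothetical,
    for illustration only):

        /* Worst-case span of one metagroup, in blocks; ignores the
         * 65528 blocks/group cap discussed above. */
        unsigned long metagroup_span(unsigned long blocksize)
        {
                unsigned long groups = blocksize / 32; /* descriptors per block */
                unsigned long blocks = 8 * blocksize;  /* bits in one bitmap block */

                return groups * blocks;                /* = blocksize^2 / 4 */
        }

    With 1KB blocks that is 32 groups * 8192 blocks = 256K blocks per
    metagroup; with 4KB blocks, 128 groups * 32768 blocks = 4M blocks.
    Doubling the blocksize quadruples how far an itable in the first
    group can sit from data in the last, hence quadratic rather than
    cubic growth.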
