From: Neil Brown <>
Date: Tue, 28 Jun 2005 11:02:32 +1000
Subject: Re: dirty md raid5 slab bio leak
On Monday June 27, list-lkml@net4u.de wrote:
> Hi.
>
> The machine:
>
> amd64, 2G mem, 4 SATA disks on a SiI 3114 [SATALink/SATARaid] Serial ATA
> Controller, configured as md raid5/raid1, using the cfq scheduler, running
> 2.6.12-rc6 (application: postgresql)
>
> The story:
>
> This morning the machine was very slow; a first check showed that the
> machine was swapping and all disk i/o was very slow.
>
> Looking further, slabtop shows:
>
>  Active / Total Objects (% used)    : 19821561 / 19828316 (100.0%)
>  Active / Total Slabs (% used)      : 369737 / 369739 (100.0%)
>  Active / Total Caches (% used)     : 80 / 120 (66.7%)
>  Active / Total Size (% used)       : 1415795.50K / 1416586.19K (99.9%)
>  Minimum / Average / Maximum Object : 0.02K / 0.07K / 128.00K
>
>     OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>  9865575  9864690  99%    0.02K  43847      225   175388K  biovec-1
>  9865254  9864654  99%    0.12K 318234       31  1272936K  bio
raid5 never allocates from these slabs, so this cannot be a raid5 problem.
raid1 does, and you mention in the intro that raid1 might be involved, but
there are no further details.

Could you say a little more about your setup? How is each of raid1 and
raid5 used? Was there any error on a drive involved in raid1?
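To make the reference counting concrete, here is a simplified sketch (my own
illustration, not the actual drivers/md/raid1.c code) of the clone-per-mirror
write pattern, assuming the 2.6-era bio API (bio_clone, bio_put, and the
three-argument bi_end_io).  Each clone is allocated from the bio/biovec slabs,
and each one needs exactly one matching bio_put in the completion path; a
missed bio_put is exactly the kind of bug that would show up as the slab
growth above.

/*
 * Simplified illustration only -- not the real raid1 code.  Assumes the
 * 2.6-era block layer API (bio_clone, generic_make_request, three-argument
 * bi_end_io completion handler).
 */
#include <linux/bio.h>
#include <linux/blkdev.h>

static int mirror_end_write(struct bio *bio, unsigned int bytes_done, int error)
{
	if (bio->bi_size)	/* partial completion: more to come */
		return 1;

	/* ... per-mirror bookkeeping (marking this write done, etc) ... */

	bio_put(bio);		/* drop the clone; forgetting this leaks into
				 * the "bio" and "biovec-1" slabs */
	return 0;
}

static void mirror_write(struct bio *master, struct block_device *mirror[], int nmirrors)
{
	int i;

	for (i = 0; i < nmirrors; i++) {
		/* one clone per mirror, allocated from the bio/biovec slabs */
		struct bio *clone = bio_clone(master, GFP_NOIO);

		clone->bi_bdev = mirror[i];
		clone->bi_end_io = mirror_end_write;
		generic_make_request(clone);
	}
}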
I just checked my test machine (running 2.6.12-rc3-mm3) and it had a really large number of bio and biovec-1 objects in use too (and it's been sitting fairly idle for several days).
I've quickly reviewed the raid1 code and I cannot see a bio leak (though that doesn't mean there isn't one..)
If anyone else has a large 'bio' slab, please report the configuration (kernel, is md in use, etc).
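For anyone who wants to report numbers: a quick user-space sketch of mine (not
part of md) that just picks the relevant lines out of /proc/slabinfo, assuming
the usual layout where the first three fields of a cache line are the cache
name, active objects and total objects (the exact header varies a little
between kernel versions):

/*
 * Print the bio* slab caches from /proc/slabinfo.
 * Assumes fields: name active_objs num_objs ...  (header lines are skipped
 * automatically because their second field is not numeric).
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512];

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		char name[64];
		unsigned long active, total;

		if (sscanf(line, "%63s %lu %lu", name, &active, &total) == 3 &&
		    strncmp(name, "bio", 3) == 0)
			printf("%-16s active=%lu total=%lu\n",
			       name, active, total);
	}
	fclose(f);
	return 0;
}

slabtop output like that in the original report is equally fine, of course.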
NeilBrown