Date: Wed, 10 Jan 2024 17:58:15 +0000
From: Catalin Marinas <>
Subject: Re: [PATCH v3 0/2] iommu/iova: Make the rcache depot properly flexible
On Wed, Jan 10, 2024 at 12:48:06PM +0000, Robin Murphy wrote:
> On 2024-01-09 5:21 pm, Ido Schimmel wrote:
> > On Mon, Jan 08, 2024 at 05:35:26PM +0000, Robin Murphy wrote:
> > > Hmm, we've got what looks to be a set of magazines forming a plausible depot
> > > list (or at least the tail end of one):
> > >
> > > ffff8881411f9000 -> ffff8881261c1000
> > >
> > > ffff8881261c1000 -> ffff88812be26400
> > >
> > > ffff88812be26400 -> ffff8188392ec000
> > >
> > > ffff8188392ec000 -> ffff8881a5301000
> > >
> > > ffff8881a5301000 -> NULL
> > >
> > > which I guess has somehow become detached from its rcache->depot without
> > > being freed properly? However I'm struggling to see any conceivable way that
> > > could happen which wouldn't already be more severely broken in other ways as
> > > well (i.e. either general memory corruption or someone somehow still trying
> > > to use the IOVA domain while it's being torn down).
> >
> > The machine is running a debug kernel that among other things has KASAN
> > enabled, but there are no traces in the kernel log so there is no memory
> > corruption that I'm aware of.
> >
> > > Out of curiosity, does reverting just patch #2 alone make a difference?
> >
> > Will try and let you know.
> >
> > > And is your workload doing anything "interesting" in relation to IOVA
> > > domain lifetimes, like creating and destroying SR-IOV virtual
> > > functions, changing IOMMU domain types via sysfs, or using that
> > > horrible vdpa thing, or are you seeing this purely from regular driver
> > > DMA API usage?
> >
> > The machine is running networking related tests, but it is not using
> > SR-IOV, VMs or VDPA so there shouldn't be anything "interesting" as far
> > as IOMMU is concerned.
> >
> > The two networking drivers on the machine are "igb" for the management
> > port and "mlxsw" for the data ports (the machine is a physical switch).
> > I believe the DMA API usage in the latter is quite basic and I don't
> > recall any DMA related problems with this driver since it was first
> > accepted upstream in 2015.
>
> Thanks for the clarifications, that seems to rule out all the most
> confusingly impossible scenarios, at least.
>
> The best explanation I've managed to come up with is a false-positive race
> dependent on the order in which kmemleak scans the relevant objects. Say we
> have the list as depot -> A -> B -> C; the rcache object is scanned and sees
> the pointer to magazine A, but then A is popped *before* kmemleak scans it,
> such that when it is then scanned, its "next" pointer has already been
> wiped, thus kmemleak never observes any reference to B, so it appears that B
> and (transitively) C are "leaked".
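To make the suspected interleaving concrete, here is a simplified sketch of
a depot pop on a singly linked magazine list. This is not the actual
drivers/iommu/iova.c code; the union layout is modelled on the patch under
discussion, and IOVA_MAG_SIZE stands in for the real constant:

/* Simplified sketch, not the actual drivers/iommu/iova.c code. */
#define IOVA_MAG_SIZE	127

struct iova_magazine {
	union {
		unsigned long size;		/* used once off the depot */
		struct iova_magazine *next;	/* used while on the depot */
	};
	unsigned long pfns[IOVA_MAG_SIZE];
};

struct iova_rcache {
	struct iova_magazine *depot;
};

static struct iova_magazine *depot_pop(struct iova_rcache *rcache)
{
	struct iova_magazine *mag = rcache->depot;	/* mag == A */

	rcache->depot = mag->next;			/* depot -> B */
	mag->size = IOVA_MAG_SIZE;			/* wipes A->next */
	return mag;
}

So if a scan visits the rcache just before the pop (recording the reference
to A) and visits A just after it, A no longer contains a pointer to B, and B
and everything behind it look unreferenced for that scan even though the
depot list itself is intact.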
Transient false positives are possible, especially as the code doesn't use a doubly linked list (for those, kmemleak's checksumming detects the prev/next change and defers the reporting until the object becomes stable). That said, if a new scan is forced (echo scan > /sys/kernel/debug/kmemleak), are the same objects still listed as leaks? If yes, they may not be transient.
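For context, the checksum deferral works roughly like this. This is a
conceptual sketch of the idea only, not the real mm/kmemleak.c code, and the
names are illustrative:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Conceptual sketch: an object with no references found is reported as
 * a leak only if its contents did not change between two consecutive
 * scans.
 */
struct tracked_object {
	const void *ptr;	/* start of the tracked allocation */
	size_t size;		/* number of bytes to checksum */
	uint32_t checksum;	/* value recorded at the previous scan */
};

static bool stable_since_last_scan(struct tracked_object *obj,
				   uint32_t (*csum)(const void *, size_t))
{
	uint32_t old = obj->checksum;

	obj->checksum = csum(obj->ptr, obj->size);
	return obj->checksum == old;
}

/*
 * Scan-time rule: report obj as unreferenced only when no pointer to it
 * was found AND stable_since_last_scan(obj, ...) returns true; otherwise
 * defer the report to the next scan.
 */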
If it is indeed transient, I think a better fix than kmemleak_not_leak() is to add a new API, something like kmemleak_mark_transient(), which resets the checksum and skips reporting the object for one scan.
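Something along these lines, as a hypothetical sketch: kmemleak_mark_transient()
does not exist at this point, find_and_get_object() and put_object() are
existing mm/kmemleak.c internals, and locking plus the usual no-op stub for
!CONFIG_DEBUG_KMEMLEAK are omitted:

/* Hypothetical sketch of the proposed helper for mm/kmemleak.c. */
void kmemleak_mark_transient(const void *ptr)
{
	struct kmemleak_object *object;

	object = find_and_get_object((unsigned long)ptr, 0);
	if (!object)
		return;

	/*
	 * Resetting the checksum makes the next scan treat the object as
	 * still changing, so an "unreferenced" report is deferred by one
	 * scan; a genuinely live object should be reachable again by then.
	 */
	object->checksum = 0;
	put_object(object);
}

The IOVA code could then, for example, mark the magazine that becomes
temporarily unreachable around a depot pop, instead of permanently excluding
it from leak detection with kmemleak_not_leak().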
--
Catalin