Date:    Mon, 2 Nov 2015 23:24:42 -0800
Subject: Re: [PATCH v3 02/15] dax: increase granularity of dax_clear_blocks() operations
From:    Dan Williams <>
On Mon, Nov 2, 2015 at 9:52 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Mon, Nov 02, 2015 at 09:31:11PM -0800, Dan Williams wrote:
>> On Mon, Nov 2, 2015 at 8:48 PM, Dave Chinner <david@fromorbit.com> wrote:
>> > On Mon, Nov 02, 2015 at 07:27:26PM -0800, Dan Williams wrote:
>> >> On Mon, Nov 2, 2015 at 4:51 PM, Dave Chinner <david@fromorbit.com> wrote:
>> >> > On Sun, Nov 01, 2015 at 11:29:53PM -0500, Dan Williams wrote:
>> >> > The zeroing (and the data, for that matter) doesn't need to be
>> >> > committed to persistent store until the allocation is written and
>> >> > committed to the journal - that will happen with a REQ_FLUSH|REQ_FUA
>> >> > write, so it makes sense to deploy the big hammer and delay the
>> >> > blocking CPU cache flushes until the last possible moment in cases
>> >> > like this.
>> >>
>> >> In pmem terms that would be a non-temporal memset plus a delayed
>> >> wmb_pmem at REQ_FLUSH time. Better to write around the cache than
>> >> loop over the dirty data issuing flushes after the fact. We'll bump
>> >> the priority of the non-temporal memset implementation.
>> >
>> > Why is it better to do two synchronous physical writes to memory
>> > within a couple of microseconds of CPU time rather than writing them
>> > through the cache and, in most cases, only doing one physical write
>> > to memory in a separate context that expects to wait for a flush
>> > to complete?
>>
>> With a switch to non-temporal writes they wouldn't be synchronous,
>> although it's doubtful that the subsequent writes after zeroing would
>> also hit the store buffer.
>>
>> If we had a method to flush by physical cache way rather than by
>> virtual address then it would indeed be better to save up for one
>> final flush, but when we need to resort to looping through all the
>> virtual addresses that might have been touched, it gets expensive.
>
> msync() is for flushing userspace mmap address ranges back to
> physical memory. fsync() is for flushing kernel addresses (i.e. as
> returned by bdev_direct_access()) back to physical memory.
> msync() calls ->fsync() as part of its operation; fsync() does not
> care about whether mmap has been sync'd first or not.
>
> i.e. we don't care about random dirty userspace virtual mappings in
> fsync() - if you have them then you need to call msync() first. So
> we shouldn't ever have to walk virtual addresses in fsync -
> just the kaddr returned by bdev_direct_access() is all that fsync
> needs to flush...
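To make the trade-off above concrete, here is a minimal user-space
sketch of the two zeroing strategies being compared, written with
SSE2 intrinsics. It is illustrative only: the in-kernel primitives
differ (wmb_pmem() plays the role of the final sfence), and the
function names below are made up for this example.

#include <emmintrin.h>  /* _mm_stream_si128, _mm_clflush, _mm_sfence */
#include <stddef.h>
#include <string.h>

#define CACHELINE 64

/* Strategy A: cached stores, then loop flushing each dirty line. */
static void zero_cached_then_flush(void *dst, size_t len)
{
	memset(dst, 0, len);            /* data lands in the CPU cache */
	for (char *p = dst; p < (char *)dst + len; p += CACHELINE)
		_mm_clflush(p);         /* write back/evict each line */
	_mm_sfence();                   /* order the flushes */
}

/* Strategy B: non-temporal stores that write around the cache. */
static void zero_nontemporal(void *dst, size_t len)
{
	__m128i zero = _mm_setzero_si128();

	/* assumes dst is 16-byte aligned and len a multiple of 16 */
	for (char *p = dst; p < (char *)dst + len; p += 16)
		_mm_stream_si128((__m128i *)p, zero);
	/*
	 * No per-line flush loop; one fence - wmb_pmem() at REQ_FLUSH
	 * time in the kernel - drains the write-combining buffers.
	 * This is the "delay the big hammer" option.
	 */
	_mm_sfence();
}

Strategy B never walks the range a second time, which is Dan's point
about flushing by virtual address being expensive; Dave's counter is
that cached writes are usually absorbed and written back once, later,
in a context that is already waiting on a flush.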
Neither Ross' solution nor mine uses userspace addresses. Which comment of mine were you reacting to?
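For completeness, the fsync-side flush Dave describes - covering only
the kernel mapping returned by bdev_direct_access(), never userspace
vaddrs - would look roughly like the sketch below. The helper names
match the ~v4.3 x86 kernel (clflush_cache_range() from
<asm/cacheflush.h>, wmb_pmem() from <linux/pmem.h>), but treat this as
an assumption-laden illustration, not the merged code.

/* flush one extent of a DAX file at fsync time (illustrative) */
static void dax_flush_extent(void *kaddr, unsigned int size)
{
	/* write back every cache line covering [kaddr, kaddr + size) */
	clflush_cache_range(kaddr, size);
	/* make the writeback durable on the pmem media */
	wmb_pmem();
}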