Date: Mon, 16 Nov 2015
From: Jan Kara <jack@suse.com>
Subject: Re: [PATCH v2 03/11] pmem: enable REQ_FUA/REQ_FLUSH handling
    On Mon 16-11-15 14:37:14, Jan Kara wrote:
    > On Fri 13-11-15 18:32:40, Dan Williams wrote:
    > > On Fri, Nov 13, 2015 at 4:43 PM, Andreas Dilger <adilger@dilger.ca> wrote:
    > > > On Nov 13, 2015, at 5:20 PM, Dan Williams <dan.j.williams@intel.com> wrote:
    > > >>
    > > >> On Fri, Nov 13, 2015 at 4:06 PM, Ross Zwisler
    > > >> <ross.zwisler@linux.intel.com> wrote:
    > > >>> Currently the PMEM driver doesn't accept REQ_FLUSH or REQ_FUA bios. These
    > > >>> are sent down via blkdev_issue_flush() in response to an fsync() or msync()
    > > >>> and are used by filesystems to order their metadata, among other things.
    > > >>>
    > > >>> When we get an msync() or fsync(), it is the responsibility of the
    > > >>> DAX code to flush all dirty pages to media. The PMEM driver then
    > > >>> just has to issue a wmb_pmem() in response to the REQ_FLUSH to
    > > >>> ensure that all the flushed data has been durably stored on the
    > > >>> media before we return.
    > > >>>
    > > >>> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
    > > >>
    > > >> Hmm, I'm not seeing why we need this patch. If the actual flushing of
    > > >> the cache is done by the core, why does the driver need to support
    > > >> REQ_FLUSH? Especially since it's just a couple of instructions. REQ_FUA
    > > >> only makes sense if individual writes can bypass the "drive" cache,
    > > >> but no I/O submitted to the driver proper is ever cached; we always
    > > >> flush it through to media.
    > > >
    > > > If the upper level filesystem gets an error when submitting a flush
    > > > request, then it assumes the underlying hardware is broken and cannot
    > > > be as aggressive in IO submission, but instead has to wait for in-flight
    > > > IO to complete.
    > >
    > > Upper level filesystems won't get errors when the driver does not
    > > support flush. Those requests are completed cleanly in
    > > generic_make_request_checks(). Yes, the fs still needs to wait for
    > > outstanding I/O to complete, but in the case of pmem all I/O is
    > > synchronous. There's never anything to wait for when flushing at the
    > > pmem driver level.
    > >
    > > > Since FUA/FLUSH is basically a no-op for pmem devices,
    > > > it doesn't make sense _not_ to support this functionality.
    > >
    > > It seems to be a no-op either way. Given that DAX may leave dirty data
    > > destined for the device in the CPU cache, which a REQ_FLUSH request
    > > will not touch, it's better to leave it all to the mm core to handle.
    > > I.e. it doesn't make sense to call into the driver just for two
    > > instructions (sfence + pcommit) when the mm core is taking on the
    > > cache flushing. Either handle it all in the mm or in the driver, not
    > > a mixture.
    >
    > So I think REQ_FLUSH requests *must* end up doing sfence + pcommit,
    > because e.g. journal writes going through the block layer, or writes
    > done through dax_do_io(), must be on permanent storage once the
    > REQ_FLUSH request finishes, and the way the driver does IO doesn't
    > guarantee this, does it?

    Hmm, looking into how dax_do_io() works and what drivers/nvdimm/pmem.c
    does, I'm indeed wrong: they both do wmb_pmem() after each write, and
    that seems to include sfence + pcommit. Sorry for the confusion.
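
    To make that concrete, the store + commit pattern both paths follow
    looks roughly like this (an abridged sketch, not the literal kernel
    code; the two helpers are the ones declared in include/linux/pmem.h as
    of this kernel, while the wrapper function is invented for
    illustration):

        /* The per-write pattern, heavily abridged: */
        static void pmem_write_sketch(void __pmem *dst, const void *src,
                                      size_t len)
        {
                /* Non-temporal stores; the data bypasses the CPU cache. */
                memcpy_to_pmem(dst, src, len);
                /* wmb() + pcommit_sfence(): sfence; pcommit; sfence. */
                wmb_pmem();
        }

    So every single write already pays for the fence + commit, not just
    the REQ_FLUSH requests.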

    But a question: wouldn't it be better to do sfence + pcommit only in
    response to a REQ_FLUSH request and not after each write? I'm not sure
    how expensive these instructions are, but in theory it could be a
    performance win, couldn't it? For filesystems this is enough with
    respect to persistence guarantees...
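
    Something like the following, purely hypothetical sketch (the function
    name and structure are invented for illustration, loosely modelled on
    the 4.4-era pmem_make_request(); dax_do_io() would need the same
    treatment):

        /*
         * Hypothetical: drop the per-write wmb_pmem() and drain to media
         * only when the block layer asks for it via REQ_FLUSH/REQ_FUA.
         */
        static void pmem_make_request_sketch(struct request_queue *q,
                                             struct bio *bio)
        {
                struct pmem_device *pmem = q->queuedata;
                struct bio_vec bvec;
                struct bvec_iter iter;

                /* Copy the data through, but do not fence or commit. */
                bio_for_each_segment(bvec, bio, iter)
                        pmem_do_bvec(pmem, bvec.bv_page, bvec.bv_len,
                                     bvec.bv_offset, bio_data_dir(bio),
                                     iter.bi_sector);

                /* Only flush requests pay for sfence + pcommit. */
                if (bio->bi_rw & (REQ_FLUSH | REQ_FUA))
                        wmb_pmem();

                bio_endio(bio);
        }

    Whether the saved pcommits are actually measurable is exactly what I
    don't know, but it would match what filesystems expect from REQ_FLUSH.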

    Honza
    --
    Jan Kara <jack@suse.com>
    SUSE Labs, CR

