Subject: [PATCH 4.2 059/258] pmem: add proper fencing to pmem_rw_page()
4.2-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ross Zwisler <ross.zwisler@linux.intel.com>

commit ba8fe0f85e15d047686caf8a42463b592c63c98c upstream.

pmem_rw_page() needs to call wmb_pmem() on writes to make sure that the
newly written data is durable. This flow was added to pmem_rw_bytes()
and pmem_make_request() with this commit:

commit 61031952f4c8 ("arch, x86: pmem api for ensuring durability of
persistent memory updates")

...the pmem_rw_page() path was missed.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
drivers/nvdimm/pmem.c | 2 ++
1 file changed, 2 insertions(+)

--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -86,6 +86,8 @@ static int pmem_rw_page(struct block_dev
 	struct pmem_device *pmem = bdev->bd_disk->private_data;
 
 	pmem_do_bvec(pmem, page, PAGE_CACHE_SIZE, 0, rw, sector);
+	if (rw & WRITE)
+		wmb_pmem();
 	page_endio(page, rw & WRITE, 0);
 
 	return 0;

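[Reviewer note, not part of the upstream commit: the durability pattern being
completed here is the one introduced by commit 61031952f4c8, sketched roughly
below against the 4.2-era helpers in include/linux/pmem.h. The wrapper name
pmem_write_durable() is purely illustrative.]

/*
 * Illustrative sketch only: copy into persistent memory with the
 * non-temporal helper, then fence so the stores are durable.  This is
 * the flow pmem_rw_page() was missing on the write path.
 */
#include <linux/pmem.h>

static void pmem_write_durable(void __pmem *dst, const void *src, size_t len)
{
	memcpy_to_pmem(dst, src, len);	/* non-temporal copy into pmem */
	wmb_pmem();			/* fence: written data is now durable */
}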

