Subject: Re: [PATCH v10 3/9] pagemap,pmem: Introduce ->memory_failure()
On Thu, Jan 27, 2022 at 4:41 AM Shiyang Ruan <ruansy.fnst@fujitsu.com> wrote:
>
> When a memory failure occurs, we call this function, which is
> implemented by each kind of device. For the fsdax case, the pmem
> device driver implements it. The pmem device driver will find the
> filesystem in which the corrupted page is located.
>
> With dax_holder notify support, we are able to notify the memory
> failure from the pmem driver to the upper layers. If something is not
> supported in the notify routine, memory_failure() will fall back to
> the generic handler.
>
> Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
> drivers/nvdimm/pmem.c | 16 ++++++++++++++++
> include/linux/memremap.h | 12 ++++++++++++
> mm/memory-failure.c | 14 ++++++++++++++
> 3 files changed, 42 insertions(+)
>
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index 58d95242a836..0a6e8698d086 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -366,6 +366,20 @@ static void pmem_release_disk(void *__pmem)
> blk_cleanup_disk(pmem->disk);
> }
>
> +static int pmem_pagemap_memory_failure(struct dev_pagemap *pgmap,
> + unsigned long pfn, u64 len, int mf_flags)
> +{
> + struct pmem_device *pmem =
> + container_of(pgmap, struct pmem_device, pgmap);
> + u64 offset = PFN_PHYS(pfn) - pmem->phys_addr - pmem->data_offset;
> +
> + return dax_holder_notify_failure(pmem->dax_dev, offset, len, mf_flags);
> +}
> +
> +static const struct dev_pagemap_ops fsdax_pagemap_ops = {
> + .memory_failure = pmem_pagemap_memory_failure,
> +};
> +
> static int pmem_attach_disk(struct device *dev,
> struct nd_namespace_common *ndns)
> {
> @@ -427,6 +441,7 @@ static int pmem_attach_disk(struct device *dev,
> pmem->pfn_flags = PFN_DEV;
> if (is_nd_pfn(dev)) {
> pmem->pgmap.type = MEMORY_DEVICE_FS_DAX;
> + pmem->pgmap.ops = &fsdax_pagemap_ops;
> addr = devm_memremap_pages(dev, &pmem->pgmap);
> pfn_sb = nd_pfn->pfn_sb;
> pmem->data_offset = le64_to_cpu(pfn_sb->dataoff);
> @@ -440,6 +455,7 @@ static int pmem_attach_disk(struct device *dev,
> pmem->pgmap.range.end = res->end;
> pmem->pgmap.nr_range = 1;
> pmem->pgmap.type = MEMORY_DEVICE_FS_DAX;
> + pmem->pgmap.ops = &fsdax_pagemap_ops;
> addr = devm_memremap_pages(dev, &pmem->pgmap);
> pmem->pfn_flags |= PFN_MAP;
> bb_range = pmem->pgmap.range;
> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> index 1fafcc38acba..f739318b496f 100644
> --- a/include/linux/memremap.h
> +++ b/include/linux/memremap.h
> @@ -77,6 +77,18 @@ struct dev_pagemap_ops {
> * the page back to a CPU accessible page.
> */
> vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf);
> +
> + /*
> + * Handle the memory failure that happens on a range of pfns. Notify
> + * the processes that are using these pfns, and try to recover the
> + * data on them if necessary. The mf_flags is finally passed to the
> + * recovery function through the whole notify routine.
> + *
> + * When this is not implemented, or it returns -EOPNOTSUPP, the caller
> + * will fall back to a common handler called mf_generic_kill_procs().
> + */
> + int (*memory_failure)(struct dev_pagemap *pgmap, unsigned long pfn,
> + u64 len, int mf_flags);

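The mm/memory-failure.c hunk is in the diffstat above but not quoted; a
rough sketch of the caller side, given the fallback behavior described in
the commit message and the memremap.h comment, would look like the below.
Only mf_generic_kill_procs() is named in the patch itself; the surrounding
function shape and the PAGE_SIZE length are assumptions for illustration.

	/* Sketch only: not the actual mm/memory-failure.c hunk. */
	static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
					      struct dev_pagemap *pgmap)
	{
		int rc;

		/*
		 * Let the driver handle the failure if it implements the
		 * new op; for pmem this ends up in
		 * pmem_pagemap_memory_failure(), which converts the pfn to
		 * a byte offset and notifies the dax holder.
		 */
		if (pgmap->ops && pgmap->ops->memory_failure) {
			rc = pgmap->ops->memory_failure(pgmap, pfn,
							PAGE_SIZE, flags);
			/*
			 * Any return value other than -EOPNOTSUPP means the
			 * handler took care of the failure itself.
			 */
			if (rc != -EOPNOTSUPP)
				return rc;
		}

		/* Fall back to the generic handler. */
		return mf_generic_kill_procs(pfn, flags, pgmap);
	}
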
I think it is odd to have the start address be in terms of pfns and
the length be in terms of bytes. I would either change @len to
@nr_pages, or change @pfn to @phys and make it a phys_addr_t.
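
That is, the prototype would become one of the following (just a sketch
of the two alternatives being suggested, not code from the patch):

	/* start and length both in units of pages */
	int (*memory_failure)(struct dev_pagemap *pgmap, unsigned long pfn,
			      unsigned long nr_pages, int mf_flags);

	/* start and length both in bytes */
	int (*memory_failure)(struct dev_pagemap *pgmap, phys_addr_t phys,
			      u64 len, int mf_flags);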

Otherwise you can add,

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
