From: Ryan Roberts <>
Subject: [PATCH] FIXUP: mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
Date: Tue, 9 Apr 2024 12:18:40 +0100
Hi Andrew,
Could you please squash this into commit "mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()", which is already in mm-unstable?
It fixes a build warning on parisc [1] due to their implementation of __swp_entry_to_pte() not correctly putting the macro args in parentheses. But it turns out that a bunch of other arches are also faulty in this regard.
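For anyone who wants a standalone illustration of the failure mode, here is a minimal sketch. The SCALE_BAD/SCALE_GOOD macros are made up for the example (this is not the parisc code): once a caller passes an expression such as swp_offset(entry) + 1, a bare macro argument can silently change the arithmetic, or at minimum trip -Wparentheses:

#include <stdio.h>

/*
 * Hypothetical macros, invented for this example; not the parisc
 * definitions. SCALE_BAD splices its argument in without parentheses,
 * SCALE_GOOD wraps it as macros should.
 */
#define SCALE_BAD(off)	(off * 8)	/* argument used bare */
#define SCALE_GOOD(off)	((off) * 8)	/* argument parenthesized */

int main(void)
{
	unsigned long off = 5;

	/* Expands to (off + 1 * 8); '*' binds before '+', so this is 13. */
	printf("bad:  %lu\n", SCALE_BAD(off + 1));

	/* Expands to ((off + 1) * 8), giving the intended 48. */
	printf("good: %lu\n", SCALE_GOOD(off + 1));

	return 0;
}

Wrapping the expression at the call site, as the hunk below does with (swp_offset(entry) + 1), sidesteps the problem without having to touch every arch's macros.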
I'm also adding an extra statement to the documentation for pte_next_swp_offset() as suggested by David.
[1] https://lore.kernel.org/all/202404091749.ScNPJ8j4-lkp@intel.com/
Thanks, Ryan
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/internal.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 9d3250b4a08a..22152e0c8494 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -202,7 +202,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 
 /**
  * pte_next_swp_offset - Increment the swap entry offset field of a swap pte.
- * @pte: The initial pte state; is_swap_pte(pte) must be true.
+ * @pte: The initial pte state; is_swap_pte(pte) must be true and
+ *	 non_swap_entry() must be false.
  *
  * Increments the swap offset, while maintaining all other fields, including
  * swap type, and any swp pte bits. The resulting pte is returned.
@@ -211,7 +212,7 @@ static inline pte_t pte_next_swp_offset(pte_t pte)
 {
 	swp_entry_t entry = pte_to_swp_entry(pte);
 	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
-						   swp_offset(entry) + 1));
+						   (swp_offset(entry) + 1)));
 
 	if (pte_swp_soft_dirty(pte))
 		new = pte_swp_mksoft_dirty(new);
-- 
2.25.1
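As an aside, for anyone wondering how the helper gets used: pte_next_swp_offset() computes the pte we *expect* at the next entry when scanning a run of contiguous swap entries. Roughly like this (a simplified sketch in the spirit of the series' swap_pte_batch(); the function name and body here are illustrative, and the real helper handles more cases):

/*
 * Illustrative only: count how many consecutive ptes encode
 * consecutive swap offsets, starting at start_ptep. Because
 * pte_next_swp_offset() preserves the swap type and swp pte bits
 * (e.g. soft-dirty), a single pte_same() comparison checks
 * everything at once.
 */
static int count_contiguous_swap_ptes(pte_t *start_ptep, int max_nr)
{
	pte_t expected = pte_next_swp_offset(ptep_get(start_ptep));
	pte_t *ptep = start_ptep + 1;
	int nr = 1;

	while (nr < max_nr) {
		pte_t pte = ptep_get(ptep);

		if (!pte_same(pte, expected))
			break;

		expected = pte_next_swp_offset(expected);
		ptep++;
		nr++;
	}

	return nr;
}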