Date:	Mon, 8 Apr 2024 17:13:29 +0200
Subject:	Re: [PATCH v6 2/6] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
From:	David Hildenbrand <>
On 08.04.24 15:27, Ryan Roberts wrote:
> On 08/04/2024 13:47, Ryan Roberts wrote:
>> On 08/04/2024 13:07, Ryan Roberts wrote:
>>> [...]
>>>>
>>>> [...]
>>>>
>>>>> +
>>>>> +/**
>>>>> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
>>>>> + * @start_ptep: Page table pointer for the first entry.
>>>>> + * @max_nr: The maximum number of table entries to consider.
>>>>> + * @entry: Swap entry recovered from the first table entry.
>>>>> + *
>>>>> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
>>>>> + * containing swap entries all with consecutive offsets and targeting the same
>>>>> + * swap type.
>>>>> + *
>>>>
>>>> Likely you should document that any swp pte bits are ignored?
>>>
>>> Now that I understand what swp pte bits are, I think the simplest thing is to
>>> just make this function always consider the pte bits by using pte_same() as you
>>> suggest below? I don't think there is ever a case for ignoring the swp pte bits?
>>> And then I don't need to do anything special for uffd-wp either (below you
>>> suggested not doing batching when the VMA has uffd enabled).
>>>
>>> Any concerns?
>>>
>>>>
>>>>> + * max_nr must be at least one and must be limited by the caller so scanning
>>>>> + * cannot exceed a single page table.
>>>>> + *
>>>>> + * Return: the number of table entries in the batch.
>>>>> + */
>>>>> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
>>>>> +				 swp_entry_t entry)
>>>>> +{
>>>>> +	const pte_t *end_ptep = start_ptep + max_nr;
>>>>> +	unsigned long expected_offset = swp_offset(entry) + 1;
>>>>> +	unsigned int expected_type = swp_type(entry);
>>>>> +	pte_t *ptep = start_ptep + 1;
>>>>> +
>>>>> +	VM_WARN_ON(max_nr < 1);
>>>>> +	VM_WARN_ON(non_swap_entry(entry));
>>>>> +
>>>>> +	while (ptep < end_ptep) {
>>>>> +		pte_t pte = ptep_get(ptep);
>>>>> +
>>>>> +		if (pte_none(pte) || pte_present(pte))
>>>>> +			break;
>>>>> +
>>>>> +		entry = pte_to_swp_entry(pte);
>>>>> +
>>>>> +		if (non_swap_entry(entry) ||
>>>>> +		    swp_type(entry) != expected_type ||
>>>>> +		    swp_offset(entry) != expected_offset)
>>>>> +			break;
>>>>> +
>>>>> +		expected_offset++;
>>>>> +		ptep++;
>>>>> +	}
>>>>> +
>>>>> +	return ptep - start_ptep;
>>>>> +}
>>>>
>>>> Looks very clean :)
>>>>
>>>> I was wondering whether we could similarly construct the expected swp PTE and
>>>> only check pte_same.
>>>>
>>>> expected_pte = __swp_entry_to_pte(__swp_entry(expected_type, expected_offset));
>>>
>>> So planning to do this.
>>
>> Of course this clears all the swp pte bits in expected_pte. So we need to do
>> something a bit more complex.
>>
>> If we can safely assume all offset bits are contiguous in every per-arch
>> representation then we can do:
>
> Looks like at least csky and hexagon store the offset in discontiguous regions.
> So it will have to be the second approach if we want to avoid anything
> arch-specific. I'll assume that for now; we can always specialize
> pte_next_swp_offset() per-arch in the future if needed.
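>
> Something like the below is what I have in mind for the generic variant (an
> untested sketch only; the exact signature of swap_pte_batch() may still
> change, and which pte_swp_*() bits need carrying over may differ per arch):
>
> /*
>  * Re-encode the swp pte with the offset incremented by one, copying over
>  * the swp pte bits so that a later pte_same() still compares them.
>  */
> static inline pte_t pte_next_swp_offset(pte_t pte)
> {
> 	swp_entry_t entry = pte_to_swp_entry(pte);
> 	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
> 						   swp_offset(entry) + 1));
>
> 	if (pte_swp_soft_dirty(pte))
> 		new = pte_swp_mksoft_dirty(new);
> 	if (pte_swp_exclusive(pte))
> 		new = pte_swp_mkexclusive(new);
> 	if (pte_swp_uffd_wp(pte))
> 		new = pte_swp_mkuffd_wp(new);
>
> 	return new;
> }
>
> With that, the scan loop in swap_pte_batch() reduces to a single pte_same()
> check per entry:
>
> 	while (ptep < end_ptep) {
> 		pte_t pte = ptep_get(ptep);
>
> 		if (!pte_same(pte, expected_pte))
> 			break;
>
> 		expected_pte = pte_next_swp_offset(expected_pte);
> 		ptep++;
> 	}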
Sounds good. Just have a generic variant as you proposed, and add the per-arch one if really required later.
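
For the record, on an arch where the offset bits are contiguous, the
specialized variant could be as simple as a plain add on the pte value --
something like this (untested sketch; __SWP_OFFSET_SHIFT stands in for
whatever the arch calls its offset shift):

	static inline pte_t pte_next_swp_offset(pte_t pte)
	{
		/*
		 * Assumes contiguous offset bits and that the offset cannot
		 * overflow out of its field within a single batch.
		 */
		return __pte(pte_val(pte) + (1UL << __SWP_OFFSET_SHIFT));
	}

But as said, no need for that until someone can actually measure a difference.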
--
Cheers,
David / dhildenb