Subject: Re: [PATCH V3] powerpc/mm: Fix Multi hit ERAT cause by recent THP update
Balbir Singh <bsingharora@gmail.com> writes:

> On Tue, 2016-02-09 at 06:50 +0530, Aneesh Kumar K.V wrote:
>> 
>> Also make sure we wait for the irq-disabled sections on other cpus to
>> finish before flipping a huge pte entry to a regular pmd entry. Code
>> paths like find_linux_pte_or_hugepte depend on irq disabling to get
>> a stable pte_t pointer. A parallel thp split needs to make sure we
>> don't convert a huge pmd entry to a regular pmd entry without waiting
>> for those irq-disabled sections to finish.
>>
>> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/include/asm/book3s/64/pgtable.h |  4 ++++
>>  arch/powerpc/mm/pgtable_64.c                 | 35 ++++++++++++++++++++++++++++++++++-
>>  include/asm-generic/pgtable.h                |  8 +++++++
>>  mm/huge_memory.c                             |  1 +
>>  4 files changed, 47 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index 8d1c41d28318..ac07a30a7934 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -281,6 +281,10 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
>>  extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>>       pmd_t *pmdp);
>>  
>> +#define __HAVE_ARCH_PMDP_HUGE_SPLIT_PREPARE
>> +extern void pmdp_huge_split_prepare(struct vm_area_struct *vma,
>> +     unsigned long address, pmd_t *pmdp);
>> +
>>  #define pmd_move_must_withdraw pmd_move_must_withdraw
>>  struct spinlock;
>>  static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
>> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
>> index 3124a20d0fab..c8a00da39969 100644
>> --- a/arch/powerpc/mm/pgtable_64.c
>> +++ b/arch/powerpc/mm/pgtable_64.c
>> @@ -646,6 +646,30 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>>   return pgtable;
>>  }
>>  
>> +void pmdp_huge_split_prepare(struct vm_area_struct *vma,
>> +      unsigned long address, pmd_t *pmdp)
>> +{
>> + VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>> +
>> +#ifdef CONFIG_DEBUG_VM
>> + BUG_ON(REGION_ID(address) != USER_REGION_ID);
>> +#endif
>> + /*
>> +  * We can't mark the pmd none here, because that would cause a race
>> +  * against exit_mmap. We need to keep the pmd marked TRANS HUGE while
>> +  * we split, but at the same time we want the rest of the ppc64 code
>> +  * not to insert a hash pte for this entry, because we will be
>> +  * modifying the deposited pgtable in the caller of this function.
>> +  * Hence clear _PAGE_USER, so that fault handling moves to a higher
>> +  * level function, which serializes against the ptl.
>> +  * We also need to flush the existing hash pte entries here, even
>> +  * though the translation is still valid, because we will withdraw
>> +  * the pgtable_t after this.
>> +  */
>> + pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_USER, 0);
>
> Can this break any checks for _PAGE_USER from other paths?


It should not; that is the same condition we use for autonuma.
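
To spell that out: with NUMA balancing, a ppc64 pte that is present but has
_PAGE_USER cleared is treated as prot-none, so the access faults into the
generic handler and serializes on the ptl instead of being serviced by the
low-level hash fault path. Roughly (a sketch along the lines of the 4.5-era
book3s/64 headers, not code from this patch):

        /*
         * Sketch of the ppc64 pte_protnone() check used by automatic NUMA
         * balancing: _PAGE_PRESENT set but _PAGE_USER clear is read as
         * "no user access", so the fault is resolved by the generic code
         * under the page table lock.
         */
        static inline int pte_protnone(pte_t pte)
        {
                return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_USER)) ==
                        _PAGE_PRESENT;
        }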

>
>> +}
>> +
>> +
>>  /*
>>   * set a new huge pmd. We should not be called for updating
>>   * an existing pmd entry. That should go via pmd_hugepage_update.
>> @@ -663,10 +687,19 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>>   return set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
>>  }
>>  
>> +/*
>> + * We use this to invalidate a pmdp entry before switching from a
>> + * huge pmd entry to a regular pmd entry.
>> + */
>>  void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>>        pmd_t *pmdp)
>>  {
>> - pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, 0);
>> + pmd_hugepage_update(vma->vm_mm, address, pmdp, ~0UL, 0);
>> + /*
>> +  * This ensures that generic code that relies on IRQ disabling
>> +  * to prevent a parallel THP split works as expected.
>> +  */
>> + kick_all_cpus_sync();
>
> Seems expensive. Anyway, I think the right thing to do is something like the
> following, or a wrapper for it:
>
> on_each_cpu_mask(mm_cpumask(vma->vm_mm), do_nothing, NULL, 1);
>
> do_nothing is not exported, but that can be fixed :)
>

We can't depend on mm_cpumask here; a parallel find_linux_pte_or_hugepte
can happen outside that. I did have a variant of kick_all_cpus_sync that
ignored idle cpus, but that needs more verification:

http://article.gmane.org/gmane.linux.ports.ppc.embedded/81105
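
For context, kick_all_cpus_sync() essentially runs an empty function on all
other cpus and waits for completion; a cpu can only take that IPI once it
re-enables interrupts, so returning from it means every irq-disabled walker
that started before pmdp_invalidate() has finished. Below is a sketch of that,
plus a hypothetical mm_cpumask-limited variant along the lines suggested above
(kick_mm_cpus_sync is an illustrative name, not an existing kernel function):

        #include <linux/smp.h>    /* smp_call_function(), on_each_cpu_mask() */
        #include <linux/sched.h>  /* mm_cpumask() in this era */

        static void do_nothing(void *unused)
        {
        }

        /*
         * kernel/smp.c style: IPI all other cpus and wait. A cpu cannot
         * service the IPI while it sits in an irq-disabled section, so
         * returning from here means those sections have completed.
         */
        void kick_all_cpus_sync(void)
        {
                /* Make sure the pmd update is visible before kicking cpus */
                smp_mb();
                smp_call_function(do_nothing, NULL, 1);
        }

        /*
         * Hypothetical narrower variant: only kick cpus that have run this
         * mm. Not sufficient here, because a parallel
         * find_linux_pte_or_hugepte() walker is not guaranteed to run on a
         * cpu present in mm_cpumask.
         */
        static void kick_mm_cpus_sync(struct mm_struct *mm)
        {
                smp_mb();
                on_each_cpu_mask(mm_cpumask(mm), do_nothing, NULL, 1);
        }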

-aneesh
