    Subject: Re: [PATCH v3 2/5] arch/x86: Refactor tlbflush and l1d flush
    Date: 17 Apr 2020
    Balbir Singh <sblbir@amazon.com> writes:
    > +void populate_tlb_with_flush_pages(void *l1d_flush_pages);
    > +void flush_l1d_cache_sw(void *l1d_flush_pages);
    > +int flush_l1d_cache_hw(void);

    l1d_flush_populate_pages();
    l1d_flush_sw()
    l1d_flush_hw()

    Hmm?
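
    i.e. keeping the signatures from the patch, just with the shorter
    names:

    	void l1d_flush_populate_pages(void *l1d_flush_pages);
    	void l1d_flush_sw(void *l1d_flush_pages);
    	int l1d_flush_hw(void);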

    > +void populate_tlb_with_flush_pages(void *l1d_flush_pages)
    > +{
    > +	int size = PAGE_SIZE << L1D_CACHE_ORDER;
    > +
    > +	asm volatile(
    > +		/* First ensure the pages are in the TLB */
    > +		"xorl %%eax, %%eax\n"
    > +		".Lpopulate_tlb:\n\t"
    > +		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
    > +		"addl $4096, %%eax\n\t"
    > +		"cmpl %%eax, %[size]\n\t"
    > +		"jne .Lpopulate_tlb\n\t"
    > +		"xorl %%eax, %%eax\n\t"
    > +		"cpuid\n\t"
    > +		:: [flush_pages] "r" (l1d_flush_pages),
    > +		   [size] "r" (size)
    > +		: "eax", "ebx", "ecx", "edx");
    > +}
    > +EXPORT_SYMBOL_GPL(populate_tlb_with_flush_pages);

    I probably missed the fine print in the change log explaining why this
    is separate from the SW flush function.
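
    For reference, the asm above boils down to touching one byte in every
    4K page of the flush buffer so the mappings are resident in the TLB,
    followed by CPUID(0) as a serializing instruction, i.e. roughly this
    C loop (sketch only, not the actual code):

    	unsigned char *p = l1d_flush_pages;
    	int i, size = PAGE_SIZE << L1D_CACHE_ORDER;

    	/* Touch one byte per 4K page to populate the TLB entries */
    	for (i = 0; i < size; i += PAGE_SIZE)
    		READ_ONCE(p[i]);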

    > +int flush_l1d_cache_hw(void)
    > +{
    > +	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
    > +		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
    > +		return 0;
    > +	}
    > +	return -ENOTSUPP;
    > +}
    > +EXPORT_SYMBOL_GPL(flush_l1d_cache_hw);

    along with the explanation of why this needs to be two functions.

    > -	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
    > -		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
    > +	if (flush_l1d_cache_hw() == 0)
    >  		return;
    > -	}

    if (!l1d_flush_hw())
    	return;
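
    followed by something like

    	l1d_flush_populate_pages(vmx_l1d_flush_pages);
    	l1d_flush_sw(vmx_l1d_flush_pages);

    for the software fallback - the same call sequence as in the hunk
    below, just with the shorter names.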

    > -	asm volatile(
    > -		/* First ensure the pages are in the TLB */
    > -		"xorl %%eax, %%eax\n"
    > -		".Lpopulate_tlb:\n\t"
    > -		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
    > -		"addl $4096, %%eax\n\t"
    > -		"cmpl %%eax, %[size]\n\t"
    > -		"jne .Lpopulate_tlb\n\t"
    > -		"xorl %%eax, %%eax\n\t"
    > -		"cpuid\n\t"
    > -		/* Now fill the cache */
    > -		"xorl %%eax, %%eax\n"
    > -		".Lfill_cache:\n"
    > -		"movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
    > -		"addl $64, %%eax\n\t"
    > -		"cmpl %%eax, %[size]\n\t"
    > -		"jne .Lfill_cache\n\t"
    > -		"lfence\n"
    > -		:: [flush_pages] "r" (vmx_l1d_flush_pages),
    > -		   [size] "r" (size)
    > -		: "eax", "ebx", "ecx", "edx");
    > +	preempt_disable();
    > +	populate_tlb_with_flush_pages(vmx_l1d_flush_pages);
    > +	flush_l1d_cache_sw(vmx_l1d_flush_pages);
    > +	preempt_enable();

    The preempt_disable/enable was not there before, right? Why do we need
    that now? If this is a fix, then that should be a separate patch.
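
    (For completeness: the fill part of the removed asm - the 64 byte
    stride loads over the flush pages followed by the lfence - is
    presumably what flush_l1d_cache_sw() now carries; in C terms roughly:

    	unsigned char *p = vmx_l1d_flush_pages;
    	int i, size = PAGE_SIZE << L1D_CACHE_ORDER;

    	/* Read one byte per cache line to pull the pages into L1D */
    	for (i = 0; i < size; i += 64)
    		READ_ONCE(p[i]);
    	/* the lfence in the asm orders the loads */

    sketch only, not the actual implementation.)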

    Thanks,

    tglx
