From: Ville Syrjälä <>
Subject: Re: [PATCHv2] mm/slub: fix lockups on PREEMPT && !SMP kernels
Date: Tue, 24 Mar 2015 14:17:53 +0000 (UTC)
Mark Rutland <mark.rutland@arm.com> writes:
> Commit 9aabf810a67cd97e ("mm/slub: optimize alloc/free fastpath by
> removing preemption on/off") introduced an occasional hang for kernels
> built with CONFIG_PREEMPT && !CONFIG_SMP.
>
> The problem is the following loop the patch introduced to
> slab_alloc_node and slab_free:
>
> do {
> 	tid = this_cpu_read(s->cpu_slab->tid);
> 	c = raw_cpu_ptr(s->cpu_slab);
> } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
>
> GCC 4.9 has been observed to hoist the load of c and c->tid above the
> loop for !SMP kernels (as in this case raw_cpu_ptr(x) is compile-time
> constant and does not force a reload). On arm64 the generated assembly
> looks like:
>
> ffffffc00016d3c4:	f9400404	ldr	x4, [x0,#8]
> ffffffc00016d3c8:	f9400401	ldr	x1, [x0,#8]
> ffffffc00016d3cc:	eb04003f	cmp	x1, x4
> ffffffc00016d3d0:	54ffffc1	b.ne	ffffffc00016d3c8 <slab_alloc_node.constprop.82+0x30>
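To make the failure mode concrete outside the percpu machinery, here is a
hypothetical standalone sketch (made-up names, not kernel code). On !SMP the
per-cpu pointer is just a compile-time-constant address and nothing marks the
tid as volatile, so the compiler is allowed to perform the tid read only once,
before the loop, and then compare that stale value against a re-read c->tid:

struct cpu_slab {
	void **freelist;
	unsigned long tid;
};

static struct cpu_slab the_slab;	/* stands in for s->cpu_slab on !SMP */

/* Roughly the snapshot loop at the top of the alloc/free fastpaths. */
void snapshot_tid(unsigned long *out_tid, struct cpu_slab **out_c)
{
	unsigned long tid;
	struct cpu_slab *c;

	do {
		tid = the_slab.tid;	/* like this_cpu_read(s->cpu_slab->tid) */
		c = &the_slab;		/* like raw_cpu_ptr(s->cpu_slab)        */
		/*
		 * If an interrupt bumps the_slab.tid after the (possibly
		 * hoisted) tid load, the stale tid never equals the re-read
		 * c->tid again and the loop never terminates.
		 */
	} while (tid != c->tid);

	*out_tid = tid;
	*out_c = c;
}

Whether the load actually gets hoisted depends on the compiler version and
optimization level.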
Just FYI, I've hit this problem on x86 as well. I think I hit it with gcc 4.7 and 4.8, and maybe also with 4.6 earlier.
Here's the diff in asm after applying the fix:
  240a:	0f 85 ce 00 00 00	jne    24de <kmem_cache_free+0x10e>
  2410:	89 75 f0		mov    %esi,-0x10(%ebp)
  2413:	8b 07			mov    (%edi),%eax
- 2415:	8b 50 04		mov    0x4(%eax),%edx
- 2418:	8b 48 04		mov    0x4(%eax),%ecx
- 241b:	39 d1			cmp    %edx,%ecx
- 241d:	75 f9			jne    2418 <kmem_cache_free+0x48>
+ 2415:	8b 48 04		mov    0x4(%eax),%ecx
+ 2418:	8b 50 04		mov    0x4(%eax),%edx
+ 241b:	39 ca			cmp    %ecx,%edx
+ 241d:	75 f6			jne    2415 <kmem_cache_free+0x45>
  241f:	8b 4d f0		mov    -0x10(%ebp),%ecx
  2422:	39 48 08		cmp    %ecx,0x8(%eax)
  2425:	0f 85 9a 00 00 00	jne    24c5 <kmem_cache_free+0xf5>
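At the C level, what the fix has to achieve is that the compiler can no longer
cache the tid/c->tid reads across loop iterations. The patch body isn't quoted
in this message; a minimal sketch of the loop, assuming the fix annotates the
c->tid access with READ_ONCE(), would be:

	/*
	 * Sketch only: assuming the fix wraps the c->tid read in READ_ONCE(),
	 * that access becomes volatile, so the compiler must reload it on
	 * every pass through the loop rather than reusing a value loaded once
	 * outside the loop.
	 */
	do {
		tid = this_cpu_read(s->cpu_slab->tid);
		c = raw_cpu_ptr(s->cpu_slab);
	} while (IS_ENABLED(CONFIG_PREEMPT) &&
		 unlikely(tid != READ_ONCE(c->tid)));

That is consistent with the diff above, where the backward branch now re-runs
both loads each iteration instead of only the second one.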
As 3.19+ is broken now, this should go into stable.

Cc: stable@vger.kernel.org