Subject: Re: [GIT PULL] Detaching mounts on unlink for 3.15-rc1
On Wed, Apr 09, 2014 at 10:32:14AM -0700, Eric W. Biederman wrote:

> For resolving a deeply nested symlink that hits the limit of 8 nested
> symlinks, I find 4688 bytes left on the stack, which means we use
> roughly 3504 bytes of stack when stat()ing a deeply nested symlink.
>
> For umount I had a little trouble measuring, as the work done by umount
> was typically not the largest stack consumer, but for a small ext4
> filesystem I found 5152 bytes left on the stack after the umount
> operation completed, i.e. umount used roughly 3040 bytes.

A bit less - we have a non-empty stack footprint from sys_umount() itself.
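
As a rough illustration of the kind of numbers quoted above (a hypothetical
sketch, not from this thread, and not necessarily how these figures were
obtained), one way to see how much of the 8K x86_64 thread stack is still
free at a given point is to compare the address of a local variable against
end_of_stack():

/*
 * Hypothetical debug helper (not from this thread): report roughly how
 * many bytes of the current task's kernel stack are still unused at the
 * call site.  Assumes the downward-growing x86_64 stack of this era,
 * with thread_info sitting at the bottom of the THREAD_SIZE allocation.
 */
#include <linux/sched.h>
#include <linux/printk.h>

static void report_stack_left(const char *where)
{
	unsigned long sp = (unsigned long)&sp;	/* address of a local ~ current stack pointer */
	unsigned long bottom = (unsigned long)end_of_stack(current);

	/* With CONFIG_DEBUG_STACK_USAGE, stack_not_used(current) gives the
	 * high-water mark rather than this point-in-time value. */
	pr_info("%s: roughly %lu bytes of stack left\n", where, sp - bottom);
}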

> 3504 + 3040 = 6544 bytes of stack used, or 1648 bytes of stack left
> unused (out of 8192). That certainly isn't a lot of margin, but it is
> not overflowing the kernel stack either.
>
> Is there a case that you see where umount uses a lot more kernel stack?
> Is your concern an architecture other than x86_64 with different
> limitations?

For starters, put that ext4 on top of dm-raid or dm-multipath. That alone
will very likely push you over the top.

Keep in mind, BTW, that you do not have the full 8K to play with - there's
struct thread_info that should not be stepped upon. It's not particularly
large (IIRC, restart_block is the largest piece in the amd64 one), but it
eats about 100 bytes.
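
For reference (not part of the original mail), this is the layout being
referred to: in kernels of this era the kernel stack and thread_info share a
single THREAD_SIZE allocation, roughly as declared in include/linux/sched.h,
so a stack that grows too deep walks straight into thread_info:

union thread_union {
	struct thread_info thread_info;			/* at the low end; ~100 bytes on amd64, per above */
	unsigned long stack[THREAD_SIZE/sizeof(long)];	/* the stack grows down toward it */
};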

I'd probably use renameat(2) in testing - i.e. trigger the shite when
resolving a deeply nested symlink in renameat() arguments. That brings an
extra struct nameidata into the game - another 152 bytes chewed off the
stack.
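
A minimal userspace sketch of the kind of test being suggested (the paths
and the way the chain is built are made up here, not Al's actual
reproducer): create a symlink chain just under the 8-level nesting limit
quoted above, then rename through it so the chain is resolved from inside
renameat(2):

/*
 * Hypothetical test sketch: resolve a deeply nested symlink chain from
 * within renameat(2), so the extra struct nameidata in the rename path
 * is on the stack while the chain is being followed.  All names are
 * made up; run it from inside the filesystem under test.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	char prev[32], cur[32], oldpath[64], newpath[64];
	int i;

	mkdir("dir", 0755);			/* real directory at the end of the chain */
	close(open("dir/victim", O_CREAT | O_WRONLY, 0644));

	snprintf(prev, sizeof(prev), "dir");
	for (i = 0; i < 7; i++) {		/* 7 nested links, just under the limit of 8 */
		snprintf(cur, sizeof(cur), "link%d", i);
		if (symlink(prev, cur) < 0)
			perror("symlink");
		snprintf(prev, sizeof(prev), "%s", cur);
	}

	/* Both rename arguments are resolved through the whole chain. */
	snprintf(oldpath, sizeof(oldpath), "%s/victim", prev);
	snprintf(newpath, sizeof(newpath), "%s/renamed", prev);
	if (renameat(AT_FDCWD, oldpath, AT_FDCWD, newpath) < 0)
		perror("renameat");
	return 0;
}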

