Subject: Re: [PATCHv2 3/5] x86/mm: fix native mmap() in compat bins and vice-versa
On Mon, Jan 16, 2017 at 4:33 AM, Dmitry Safonov <dsafonov@virtuozzo.com> wrote:
> Fix 32-bit compat_sys_mmap() mapping a VMA above 4GB in 64-bit binaries,
> and 64-bit sys_mmap() mapping a VMA only below 4GB in 32-bit binaries.
> Change arch_get_unmapped_area{,_topdown}() to recompute mmap_base in
> those cases and to pass the corresponding high/low limits to
> vm_unmapped_area(). Recomputing mmap_base may make compat sys_mmap()
> in 64-bit binaries slightly slower than a native call, which reuses
> the mmap_base already known from exec time - but since that case used
> to return a buggy address, it was apparently unused, so no ABI that is
> actually in use sees a performance regression.
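In rough terms, the change amounts to picking the vm_unmapped_area()
limits from the syscall's ABI rather than from the binary's bitness.
A minimal sketch of that selection; in_compat_syscall() is the existing
helper from <linux/compat.h>, the function name here is made up:

/* Illustrative sketch only, not the patch's actual diff.
 * IA32_PAGE_OFFSET and TASK_SIZE_MAX come from <asm/processor.h>. */
static unsigned long mmap_address_limit(void)
{
	/* A 32-bit mmap() must stay below 4GB even when the task is
	 * a 64-bit binary; a 64-bit mmap() may use the whole range
	 * even when the task started as a 32-bit binary. */
	if (in_compat_syscall())
		return IA32_PAGE_OFFSET;
	return TASK_SIZE_MAX;
}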

This looks plausibly correct but rather weird -- why does this code
need to distinguish between all four cases (pure 32-bit, pure 64-bit,
64-bit mmap layout doing 32-bit call, 32-bit layout doing 64-bit
call)?

> This can be optimized in the future by introducing
> mmap_compat_{,legacy}_base in mm_struct.

Hmm. Would it make sense to do it this way from the beginning?
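For what it's worth, the suggested caching would look roughly like
this (a sketch built from the field names in the quoted text, not
merged code):

struct mm_struct {
	/* ... existing fields ... */
	unsigned long mmap_base;		/* top-down base, native ABI */
	unsigned long mmap_legacy_base;		/* bottom-up base, native ABI */
	/* proposed: precomputed bases for the non-native ABI */
	unsigned long mmap_compat_base;
	unsigned long mmap_compat_legacy_base;
	/* ... */
};

With both pairs computed once at exec time, arch_get_unmapped_area()
could simply pick a pair by syscall ABI instead of recomputing.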

If adding an in_32bit_syscall() helper would help, then by all means
please do so.
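A minimal sketch of such a helper, assuming it is built on the
existing in_compat_syscall() (the name and definition here are only a
suggestion):

/* Sketch: true when the current syscall uses the 32-bit ABI, whether
 * because the kernel itself is 32-bit or because this is a compat
 * syscall on a 64-bit kernel. */
static inline bool in_32bit_syscall(void)
{
	return IS_ENABLED(CONFIG_X86_32) || in_compat_syscall();
}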

--Andy
