Subject: Re: [tip:x86/asm] x86-64, copy_user: Remove zero byte check before copy user buffer.
On 11/16/2013 10:44 PM, Linus Torvalds wrote:
> So this doesn't do the 32-bit truncation in the error path of the
> generic string copy. Oversight?
>
> Linus

I looked at the code again, and it turns out to be a false alarm.

We *do* do 32-bit truncation in every path, still:

> ENTRY(copy_user_generic_string)
> CFI_STARTPROC
> ASM_STAC
> cmpl $8,%edx
> jb 2f /* less than 8 bytes, go to byte copy loop */

-> If we jump here, we will truncate at 2:

> ALIGN_DESTINATION
> movl %edx,%ecx

-> If we don't take the jb 2f, we fall through to:

> shrl $3,%ecx

32-bit truncation here...

> andl $7,%edx
> 1: rep
> movsq
> 2: movl %edx,%ecx

32-bit truncation here...

> 3: rep
> movsb
> xorl %eax,%eax
> ASM_CLAC
> ret
>
> .section .fixup,"ax"
> 11: lea (%rdx,%rcx,8),%rcx
> 12: movl %ecx,%edx /* ecx is zerorest also */

-> Even if %rdx+%rcx*8 were > 2^32, we would still truncate at 12: -- not
that it matters, since both arguments are prototyped as "unsigned", and the
C compiler is therefore supposed to guarantee that the upper 32 bits are
ignored.
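To make the arithmetic concrete, here is a rough C-level model of the path
above. This is only a sketch: the function name is made up, memcpy stands in
for the rep movs instructions, and ALIGN_DESTINATION and the .fixup-based
fault handling are omitted.

	#include <stddef.h>
	#include <string.h>

	/*
	 * Model of the length arithmetic in copy_user_generic_string.
	 * "len" is a 32-bit "unsigned" at the prototype level, so every
	 * movl in the assembly operates on a value that already fits in
	 * 32 bits.
	 */
	unsigned long copy_user_string_model(void *to, const void *from,
					     unsigned len)
	{
		unsigned quads = len >> 3;	/* shrl $3,%ecx */
		unsigned rest  = len & 7;	/* andl $7,%edx */

		memcpy(to, from, (size_t)quads * 8);	/* 1: rep movsq */
		memcpy((char *)to + (size_t)quads * 8,	/* 3: rep movsb */
		       (const char *)from + (size_t)quads * 8, rest);

		/*
		 * Fixup arithmetic (labels 11/12): on a fault, the bytes
		 * left are rest + quads_left*8 <= len < 2^32, so the movl
		 * at 12: cannot drop significant bits.
		 */
		return 0;			/* xorl %eax,%eax */
	}

Since quads*8 + rest == len by construction, none of the intermediate values
can ever exceed the original 32-bit length.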

So I think Fenghua's patch is fine as-is.

-hpa


