Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
From: Robin Murphy
Date: 2021-07-06

On 2021-07-06 18:50, Catalin Marinas wrote:
> On Mon, Jun 28, 2021 at 05:22:30PM +0100, Robin Murphy wrote:
>> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
>> index 043da90f5dd7..cfb598ae4812 100644
>> --- a/arch/arm64/lib/copy_to_user.S
>> +++ b/arch/arm64/lib/copy_to_user.S
>> @@ -32,7 +32,7 @@
>>  	.endm
>>  
>>  	.macro strh1 reg, ptr, val
>> -	user_ldst 9998f, sttrh, \reg, \ptr, \val
>> +	user_ldst 9997f, sttrh, \reg, \ptr, \val
>>  	.endm
>>  
>>  	.macro ldr1 reg, ptr, val
>> @@ -40,7 +40,7 @@
>>  	.endm
>>  
>>  	.macro str1 reg, ptr, val
>> -	user_ldst 9998f, sttr, \reg, \ptr, \val
>> +	user_ldst 9997f, sttr, \reg, \ptr, \val
>>  	.endm
>>  
>>  	.macro ldp1 reg1, reg2, ptr, val
>> @@ -48,12 +48,14 @@
>>  	.endm
>>  
>>  	.macro stp1 reg1, reg2, ptr, val
>> -	user_stp 9998f, \reg1, \reg2, \ptr, \val
>> +	user_stp 9997f, \reg1, \reg2, \ptr, \val
>>  	.endm
>>  
>>  end	.req	x5
>> +srcin	.req	x15
>>  SYM_FUNC_START(__arch_copy_to_user)
>>  	add	end, x0, x2
>> +	mov	srcin, x1
>>  #include "copy_template.S"
>>  	mov	x0, #0
>>  	ret
>> @@ -62,6 +64,12 @@ EXPORT_SYMBOL(__arch_copy_to_user)
>>  
>>  	.section .fixup,"ax"
>>  	.align	2
>> +9997:	cmp	dst, dstin
>> +	b.ne	9998f
>> +	// Before being absolutely sure we couldn't copy anything, try harder
>> +	ldrb	tmp1w, [srcin]
>> +USER(9998f, sttrb tmp1w, [dstin])
>> +	add	dst, dstin, #1
>>  9998:	sub	x0, end, dst			// bytes not copied
>>  	ret
>>  
>>  	.previous
>
> I think it's worth doing the copy_to_user() fallback in a loop until it
> faults or hits the end of the buffer. This would solve the problem we
> currently have with writing more bytes than actually reported. The
> copy_from_user() is not necessary; a byte would suffice.
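
To make that concrete, the suggested fallback might look roughly like
this in C (a minimal sketch only; fallback_byte_copy is a hypothetical
name and this is not code from the thread, though put_user() is the
real kernel primitive):

#include <linux/uaccess.h>

/*
 * Retry one user byte at a time after the fast path has faulted, so the
 * reported "bytes not copied" matches what actually reached userspace.
 */
static unsigned long fallback_byte_copy(char __user *to, const char *from,
					unsigned long n)
{
	unsigned long copied = 0;

	while (copied < n) {
		/* put_user() returns -EFAULT once a byte cannot be stored */
		if (put_user(from[copied], to + copied))
			break;
		copied++;
	}
	return n - copied;	/* bytes not copied */
}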

The thing is, we don't really have that problem since the set_fs cleanup
removed the IMP-DEF STP behaviour from the picture; even with the current
mess we could know perfectly well which of the two STTRs faulted if we
just put a little more effort in. At worst, it could be as simple as this:

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index ccedf548dac9..7513758bab3a 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -74,8 +74,9 @@ alternative_else_nop_endif
 
 	.macro user_stp l, reg1, reg2, addr, post_inc
 8888:		sttr	\reg1, [\addr];
-8889:		sttr	\reg2, [\addr, #8];
-		add	\addr, \addr, \post_inc;
+		add	\addr, \addr, \post_inc / 2;
+8889:		sttr	\reg2, [\addr];
+		add	\addr, \addr, \post_inc / 2;
 
 	_asm_extable	8888b,\l;
 	_asm_extable	8889b,\l;

But yuck... If you think the potential under-reporting is worth fixing
right now, rather than just letting it disappear in a future rewrite,
then I'd still rather do it by passing the actual fault address to the
current copy_to_user fixup. A retry loop could still technically
under-report if the page disappears (or its tag changes) between faulting
on the second word of a pair and retrying from the first, so we'd want to
pin the initial fault down to a single access anyway. All the loop would
achieve after that is potentially filling in an extra 1-7 bytes right up
to the offending page/tag boundary for the sake of being nice, which I
remain unconvinced is worth the bother :)
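
Spelled out, the fault-address arithmetic amounts to this (a sketch
assuming stores complete in ascending address order; fault_addr is
hypothetical, since the current fixup is never handed it):

/*
 * If the fixup knew exactly which access faulted, the uncopied count
 * would follow directly: everything below fault_addr was stored.
 * fault_addr is hypothetical; the current arm64 fixup never sees it.
 */
static unsigned long bytes_not_copied(unsigned long dstin, unsigned long end,
				      unsigned long fault_addr)
{
	if (fault_addr < dstin)		/* clamp, just in case */
		fault_addr = dstin;
	return end - fault_addr;	/* exact, no under-reporting */
}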

Cheers,
Robin.
