Subject: Re: [PATCH v3 1/3] powerpc/mm: prepare kernel for KAsan on PPC32
From: Christophe Leroy <christophe.leroy@c-s.fr>


On 15/01/2019 at 18:10, Dmitry Vyukov wrote:
> On Tue, Jan 15, 2019 at 6:06 PM Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>
>>
>>
>> On 1/15/19 2:14 PM, Dmitry Vyukov wrote:
>>> On Tue, Jan 15, 2019 at 8:27 AM Christophe Leroy
>>> <christophe.leroy@c-s.fr> wrote:
>>>> On 01/14/2019 09:34 AM, Dmitry Vyukov wrote:
>>>>> On Sat, Jan 12, 2019 at 12:16 PM Christophe Leroy
>>>>> <christophe.leroy@c-s.fr> wrote:
>>>>> >
>>>>> > In kernel/cputable.c, explicitly use memcpy() in order
>>>>> > to allow GCC to replace it with __memcpy() when KASAN is
>>>>> > selected.
>>>>> >
>>>>> > Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
>>>>> > enabled"), memset() can be used before activation of the cache,
>>>>> > so no need to use memset_io() for zeroing the BSS.
>>>>> >
>>>>> > Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
>>>>> > ---
>>>>> >  arch/powerpc/kernel/cputable.c | 4 ++--
>>>>> >  arch/powerpc/kernel/setup_32.c | 6 ++----
>>>>> >  2 files changed, 4 insertions(+), 6 deletions(-)
>>>>> >
>>>>> > diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
>>>>> > index 1eab54bc6ee9..84814c8d1bcb 100644
>>>>> > --- a/arch/powerpc/kernel/cputable.c
>>>>> > +++ b/arch/powerpc/kernel/cputable.c
>>>>> > @@ -2147,7 +2147,7 @@ void __init set_cur_cpu_spec(struct cpu_spec *s)
>>>>> >  	struct cpu_spec *t = &the_cpu_spec;
>>>>> >
>>>>> >  	t = PTRRELOC(t);
>>>>> > -	*t = *s;
>>>>> > +	memcpy(t, s, sizeof(*t));
>>>>>
>>>>> Hi Christophe,
>>>>>
>>>>> I understand why you are doing this, but it looks a bit fragile and
>>>>> non-scalable. It may stop working with the next compiler version, a
>>>>> compiler version different from yours, clang, etc.
>>>>
>>>> My feeling is that this change makes it more solid.
>>>>
>>>> My understanding is that when you do *t = *s, the compiler can use
>>>> whatever way it wants to do the copy.
>>>> When you do memcpy(), you ensure it will be done that way and not another
>>>> way, don't you?
>>>
>>> It makes this single line more deterministic wrt code-gen (though,
>>> strictly speaking, the compiler can turn memcpy back into inline
>>> instructions, since it knows memcpy semantics anyway).
>>> But the problem I meant is that the set of places that are subject to
>>> this problem is not deterministic. So if we go with this solution,
>>> after this change it's in the status "works on your machine" and we
>>> either need to commit to not using struct copies and zeroing
>>> throughout kernel code or potentially have a long tail of other
>>> similar cases, and since they can be triggered by another compiler
>>> version, we may need to backport these changes to previous releases
>>> too. Whereas if we go with compiler flags, it would prevent the
>>> problem in all current and future places and with other past/future
>>> versions of compilers.
>>>
>>
>> The patch will work for any compiler. The point of this patch is to make
>> memcpy() visible to the preprocessor which will replace it with __memcpy().
>
> For this single line, yes. But it does not mean that KASAN will work.
>
>> After preprocessor's work, compiler will see just __memcpy() call here.
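
For reference, the mapping Andrey describes lives in the arch string
header. Below is a minimal sketch of the idea, modelled on what x86
already does under KASAN; the exact header, guards and prototypes used
for PPC32 in this series are illustrative here, not the actual patch:

#include <linux/types.h>

/*
 * Sketch only: when KASAN is enabled but this file is built without
 * instrumentation (__SANITIZE_ADDRESS__ not defined by the compiler),
 * map the plain names to the uninstrumented implementations, so early
 * boot code ends up calling __memcpy()/__memset()/__memmove() directly.
 */
void *__memcpy(void *to, const void *from, size_t n);
void *__memmove(void *to, const void *from, size_t n);
void *__memset(void *s, int c, size_t n);

#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
#define memcpy(dst, src, len)	__memcpy(dst, src, len)
#define memmove(dst, src, len)	__memmove(dst, src, len)
#define memset(s, c, n)		__memset(s, c, n)
#endif

With a plain struct assignment like *t = *s there is no memcpy() token
for the preprocessor to rewrite, which is why the patch spells the copy
out as memcpy() explicitly.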

This problem can affect any arch, I believe. Maybe the 'solution' would
be to run a generic script, similar to
arch/powerpc/kernel/prom_init_check.sh, on all objects compiled with
KASAN_SANITIZE_object.o := n, to verify that they don't include any
reference to memcpy(), memset() or memmove()?
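
A rough sketch of what such a check could look like (hypothetical
script, only to illustrate the idea; how it gets wired into kbuild and
the list of objects passed to it are assumptions):

#!/bin/sh
# Sketch: given the object files that are built with
# KASAN_SANITIZE_foo.o := n, fail if any of them still carries an
# undefined reference to the instrumented string helpers.
NM="${NM:-nm}"

ret=0
for obj in "$@"; do
	bad=$("$NM" -u "$obj" | grep -wE 'memcpy|memset|memmove')
	if [ -n "$bad" ]; then
		echo "Error: $obj references instrumented string functions:" >&2
		echo "$bad" >&2
		ret=1
	fi
done
exit $ret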

Christophe
