Subject: Re: [PATCH 20/35] arm64: mte: Add in-kernel MTE helpers
On Fri, Aug 14, 2020 at 07:27:02PM +0200, Andrey Konovalov wrote:
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 1c99fcadb58c..733be1cb5c95 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -5,14 +5,19 @@
> #ifndef __ASM_MTE_H
> #define __ASM_MTE_H
>
> -#define MTE_GRANULE_SIZE UL(16)
> +#include <asm/mte_asm.h>

So the reason for this move is to include it in asm/cache.h. Fine by
me but...

> #define MTE_GRANULE_MASK (~(MTE_GRANULE_SIZE - 1))
> #define MTE_TAG_SHIFT 56
> #define MTE_TAG_SIZE 4
> +#define MTE_TAG_MASK GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
> +#define MTE_TAG_MAX (MTE_TAG_MASK >> MTE_TAG_SHIFT)

... I'd rather move all these definitions into a file with a more
meaningful name like mte-def.h. The _asm suffix implies the file is
meant for inclusion in .S files, which isn't the case.

> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index eb39504e390a..e2d708b4583d 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -72,6 +74,47 @@ int memcmp_pages(struct page *page1, struct page *page2)
> return ret;
> }
>
> +u8 mte_get_mem_tag(void *addr)
> +{
> + if (system_supports_mte())
> + addr = mte_assign_valid_ptr_tag(addr);

The name mte_assign_valid_ptr_tag() is slightly misleading: all it does
is read the allocation tag from memory.

I also think this should be inline asm, possibly using alternatives.
It's just an LDG instruction (and it saves us from having to invent a
better function name).
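
Roughly something like the below, as an illustrative sketch rather than
the actual patch (the helper name and the explicit ".arch_extension
memtag" directive are my assumptions here; mte_get_ptr_tag() is the
existing helper from the patch):

	static inline u8 mte_read_mem_tag(void *addr)
	{
		/* LDG reads the allocation tag and inserts it into addr's tag bits */
		if (system_supports_mte())
			asm volatile(".arch_extension memtag\n"
				     "ldg %0, [%0]"
				     : "+r" (addr));

		return 0xF0 | mte_get_ptr_tag(addr);
	}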

> +
> + return 0xF0 | mte_get_ptr_tag(addr);
> +}
> +
> +u8 mte_get_random_tag(void)
> +{
> + u8 tag = 0xF;
> +
> + if (system_supports_mte())
> + tag = mte_get_ptr_tag(mte_assign_random_ptr_tag(NULL));

Another alternative inline asm with an IRG instruction.
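
For illustration, again just a sketch with made-up naming:

	static inline u8 mte_random_tag(void)
	{
		u8 tag = 0xF;
		void *addr = NULL;

		if (system_supports_mte()) {
			/* IRG inserts a random tag into the pointer in addr */
			asm volatile(".arch_extension memtag\n"
				     "irg %0, %0"
				     : "+r" (addr));
			tag = mte_get_ptr_tag(addr);
		}

		return 0xF0 | tag;
	}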

> +
> + return 0xF0 | tag;
> +}
> +
> +void * __must_check mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
> +{
> + void *ptr = addr;
> +
> + if ((!system_supports_mte()) || (size == 0))
> + return addr;
> +
> + tag = 0xF0 | (tag & 0xF);
> + ptr = (void *)__tag_set(ptr, tag);
> + size = ALIGN(size, MTE_GRANULE_SIZE);

I think aligning the size is dangerous. Can we instead turn it into a
WARN_ON() if the size is not already aligned? At a quick look, the
callers of kasan_{un,}poison_memory() already align the size.
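
Something along these lines, purely to illustrate the idea (replacing
the ALIGN() above):

	if (WARN_ON(!IS_ALIGNED(size, MTE_GRANULE_SIZE)))
		return addr;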

> +
> + mte_assign_mem_tag_range(ptr, size);
> +
> + /*
> + * mte_assign_mem_tag_range() can be invoked in a multi-threaded
> + * context, ensure that tags are written in memory before the
> + * reference is used.
> + */
> + smp_wmb();
> +
> + return ptr;

I'm not sure I understand the barrier here. It ensures the relative
ordering of memory (or tag) accesses on a CPU as observed by other CPUs.
While the first access here is setting the tag, I can't see what other
access on _this_ CPU it is ordered with.

> +}
> +
> static void update_sctlr_el1_tcf0(u64 tcf0)
> {
> /* ISB required for the kernel uaccess routines */
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 03ca6d8b8670..8c743540e32c 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -149,3 +149,44 @@ SYM_FUNC_START(mte_restore_page_tags)
>
> ret
> SYM_FUNC_END(mte_restore_page_tags)
> +
> +/*
> + * Assign pointer tag based on the allocation tag
> + * x0 - source pointer
> + * Returns:
> + * x0 - pointer with the correct tag to access memory
> + */
> +SYM_FUNC_START(mte_assign_valid_ptr_tag)
> + ldg x0, [x0]
> + ret
> +SYM_FUNC_END(mte_assign_valid_ptr_tag)
> +
> +/*
> + * Assign random pointer tag
> + * x0 - source pointer
> + * Returns:
> + * x0 - pointer with a random tag
> + */
> +SYM_FUNC_START(mte_assign_random_ptr_tag)
> + irg x0, x0
> + ret
> +SYM_FUNC_END(mte_assign_random_ptr_tag)

As I said above, these two can be inline asm.

> +
> +/*
> + * Assign allocation tags for a region of memory based on the pointer tag
> + * x0 - source pointer
> + * x1 - size
> + *
> + * Note: size is expected to be MTE_GRANULE_SIZE aligned
> + */
> +SYM_FUNC_START(mte_assign_mem_tag_range)
> + /* if (src == NULL) return; */
> + cbz x0, 2f
> + /* if (size == 0) return; */

You could skip the cbz here and just document that the size must be
non-zero and aligned; the caller already takes care of these checks
(see the sketch after the quoted function below).
> + cbz x1, 2f
> +1: stg x0, [x0]
> + add x0, x0, #MTE_GRANULE_SIZE
> + sub x1, x1, #MTE_GRANULE_SIZE
> + cbnz x1, 1b
> +2: ret
> +SYM_FUNC_END(mte_assign_mem_tag_range)
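
To make the point about the entry checks concrete, here is a C-level
sketch of the resulting contract (illustrative only, the helper name is
made up): the caller guarantees a non-zero, MTE_GRANULE_SIZE-aligned
size, so the STG loop needs no checks of its own:

	static void mte_tag_range(void *tagged_addr, size_t size)
	{
		void *end = tagged_addr + size;

		do {
			/* STG stores tagged_addr's tag to the granule it points at */
			asm volatile(".arch_extension memtag\n"
				     "stg %0, [%0]"
				     : : "r" (tagged_addr) : "memory");
			tagged_addr += MTE_GRANULE_SIZE;
		} while (tagged_addr != end);
	}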

--
Catalin
