From: Gavin Shan <>
Subject: [PATCH] arm64: tlb: Fix TLBI RANGE operand
Date: Wed, 3 Apr 2024 16:49:29 +1000
KVM/arm64 relies on the TLBI RANGE feature to flush TLBs when the dirty bitmap is collected by the VMM and the corresponding PTEs need to be write-protected again. Unfortunately, the operand passed to the TLBI RANGE instruction isn't correctly sorted out by commit d1d3aa98b1d4 ("arm64: tlb: Use the TLBI RANGE feature in arm64"). This leads to a crash on the destination VM after live migration because some of the dirty pages are missed.
For example, I have a VM where 8GB of memory is assigned, starting from 0x40000000 (1GB). Note that the host has 4KB as the base page size. All TLBs for the VM can be covered by one TLBI RANGE operation. However, I receive 0xffff708000040000 as the operand, which is wrong; the correct one should be 0x00007f8000040000. From the wrong operand, we have 3 and 1 for SCALE (bits[45:44]) and NUM (bits[43:39]), so only 1GB instead of 8GB of memory is covered.
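For reference, SCALE and NUM can be pulled out of the two operands above with a couple of shifts. A standalone userspace sketch, for illustration only and not part of the patch:

#include <stdio.h>
#include <stdint.h>

/* TLBI RANGE operand layout: SCALE is bits[45:44], NUM is bits[43:39] */
static void decode(uint64_t op)
{
	unsigned int scale = (op >> 44) & 0x3;
	unsigned int num = (op >> 39) & 0x1f;

	printf("op=0x%016llx SCALE=%u NUM=%u\n",
	       (unsigned long long)op, scale, num);
}

int main(void)
{
	decode(0xffff708000040000ULL);	/* wrong:   SCALE=3, NUM=1  */
	decode(0x00007f8000040000ULL);	/* correct: SCALE=3, NUM=31 */
	return 0;
}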
Fix the macro __TLBI_RANGE_NUM() so that the correct NUM and TLBI RANGE operand are provided.
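To see the difference on the 8GB example above (0x200000 4KB pages at SCALE#3), here is a standalone sketch of the macro before and after the change. Illustration only, not part of the patch; the (int) casts are added for the demo since the kernel assigns the result to an int:

#include <stdio.h>

#define TLBI_RANGE_MASK		0x1fUL

/* before the fix: "- 1" is applied after the mask */
#define OLD_TLBI_RANGE_NUM(pages, scale) \
	((int)((((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1))

/* after the fix: "- 1" is applied before the mask */
#define NEW_TLBI_RANGE_NUM(pages, scale) \
	((int)((((pages) >> (5 * (scale) + 1)) - 1) & TLBI_RANGE_MASK))

int main(void)
{
	unsigned long pages = 0x200000;	/* 8GB of 4KB pages */

	printf("old: %d\n", OLD_TLBI_RANGE_NUM(pages, 3));	/* -1, scale rejected */
	printf("new: %d\n", NEW_TLBI_RANGE_NUM(pages, 3));	/* 31, one TLBI covers it */
	return 0;
}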
Fixes: d1d3aa98b1d4 ("arm64: tlb: Use the TLBI RANGE feature in arm64")
Cc: stable@kernel.org # v5.10+
Reported-by: Yihuang Yu <yihyu@redhat.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 arch/arm64/include/asm/tlbflush.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 3b0e8248e1a4..07c4fb4b82b4 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -166,7 +166,7 @@ static inline unsigned long get_trans_granule(void)
  */
 #define TLBI_RANGE_MASK		GENMASK_ULL(4, 0)
 #define __TLBI_RANGE_NUM(pages, scale)	\
-	((((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
+	((((pages) >> (5 * (scale) + 1)) - 1) & TLBI_RANGE_MASK)
 
 /*
  * TLB Invalidation
-- 
2.44.0