Subject: Re: [RFC PATCH 2/3] KVM: arm64: Fix handling of merging tables into a block entry

On 2020/12/1 21:46, Will Deacon wrote:
> On Tue, Dec 01, 2020 at 10:30:41AM +0800, wangyanan (Y) wrote:
>> On 2020/12/1 0:01, Will Deacon wrote:
>>> On Mon, Nov 30, 2020 at 11:24:19PM +0800, wangyanan (Y) wrote:
>>>> On 2020/11/30 21:34, Will Deacon wrote:
>>>>> On Mon, Nov 30, 2020 at 08:18:46PM +0800, Yanan Wang wrote:
>>>>>> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
>>>>>> index 696b6aa83faf..fec8dc9f2baa 100644
>>>>>> --- a/arch/arm64/kvm/hyp/pgtable.c
>>>>>> +++ b/arch/arm64/kvm/hyp/pgtable.c
>>>>>> @@ -500,6 +500,9 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
>>>>>>  	return 0;
>>>>>>  }
>>>>>>  
>>>>>> +static void stage2_flush_dcache(void *addr, u64 size);
>>>>>> +static bool stage2_pte_cacheable(kvm_pte_t pte);
>>>>>> +
>>>>>>  static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>>>>>>  				struct stage2_map_data *data)
>>>>>>  {
>>>>>> @@ -507,9 +510,17 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
>>>>>>  	struct page *page = virt_to_page(ptep);
>>>>>>  
>>>>>>  	if (data->anchor) {
>>>>>> -		if (kvm_pte_valid(pte))
>>>>>> +		if (kvm_pte_valid(pte)) {
>>>>>> +			kvm_set_invalid_pte(ptep);
>>>>>> +			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu,
>>>>>> +				     addr, level);
>>>>>>  			put_page(page);
>>>>> This doesn't make sense to me: the page-table pages we're walking when the
>>>>> anchor is set are not accessible to the hardware walker because we unhooked
>>>>> the entire sub-table in stage2_map_walk_table_pre(), which has the necessary
>>>>> TLB invalidation.
>>>>>
>>>>> Are you seeing a problem in practice here?
>>>> Yes, I am indeed seeing a problem in practice.
>>>>
>>>> When the migration was cancelled, a TLB conflict abort was found in the
>>>> guest.
>>>>
>>>> This problem was fixed before the rework of the page-table code; you can
>>>> have a look at the following two links:
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3c3736cd32bf5197aed1410ae826d2d254a5b277
>>>>
>>>> https://lists.cs.columbia.edu/pipermail/kvmarm/2019-March/035031.html
>>> Ok, let's go through this, because I still don't see the bug. Please correct
>>> me if you spot any mistakes:
>>>
>>> 1. We have a block mapping for X => Y
>>> 2. Dirty logging is enabled, so the block mapping is write-protected and
>>>    ends up being split into page mappings
>>> 3. Dirty logging is disabled due to a failed migration.
>>>
>>> --- At this point, I think we agree that the state of the MMU is alright ---
>>>
>>> 4. We take a stage-2 fault and want to reinstall the block mapping:
>>>
>>>    a. kvm_pgtable_stage2_map() is invoked to install the block mapping
>>>    b. stage2_map_walk_table_pre() finds a table where we would like to
>>>       install the block:
>>>
>>>       i.   The anchor is set to point at this entry
>>>       ii.  The entry is made invalid
>>>       iii. We invalidate the TLB for the input address. This is
>>>            TLBI IPAS2SE1IS without level hint and then TLBI VMALLE1IS.
>>>
>>>    *** At this point, the page-table pointed to by the old table entry
>>>        is not reachable to the hardware walker ***
>>>
>>>    c. stage2_map_walk_leaf() is called for each leaf entry in the
>>>       now-unreachable subtree, dropping page-references for each valid
>>>       entry it finds.
>>>    d. stage2_map_walk_table_post() is eventually called for the entry
>>>       which we cleared back in b.ii, so we install the new block mapping.
>>>
>>> You are proposing to add additional TLB invalidation to (c), but I don't
>>> think that is necessary, thanks to the invalidation already performed in
>>> b.iii. What am I missing here?
>> The point is that the TLBI at b.iii is not enough. There are many page
>> mappings that we need to merge into a block mapping.
>>
>> At b.iii we invalidate the TLB for the input address without a level hint,
>> but that operation only flushes the TLB entry for one page mapping. There
>> are still TLB entries for the other page mappings in the cache, and the
>> MMU hardware walker can still hit these entries next time.
> Ah, yes, I see. Thanks. I hadn't considered the case where there are table
> entries beneath the anchor. So how about the diff below?
>
> Will
>
> --->8

Hi, I think it's inappropriate to put the TLBI for all the leaf entries in
stage2_map_walk_table_post(), because *ptep must be an upper table entry by
the time we enter stage2_map_walk_table_post().

We should instead perform the TLBI for every leaf entry (not the table
entry) at the last lookup level, just as I am proposing with the additional
TLBI in stage2_map_walk_leaf(). A sketch of what I mean is below.
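
To make that concrete, here is the leaf handler with the per-leaf
invalidation from my patch, trimmed to just the anchor path (the rest of
the function is unchanged and elided):

static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
				struct stage2_map_data *data)
{
	kvm_pte_t pte = *ptep;
	struct page *page = virt_to_page(ptep);

	if (data->anchor) {
		if (kvm_pte_valid(pte)) {
			/*
			 * The TLB may still hold an entry for this page
			 * mapping, so invalidate it here, with the level
			 * hint, while addr and level are still at hand.
			 */
			kvm_set_invalid_pte(ptep);
			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu,
				     addr, level);
			put_page(page);
		}

		return 0;
	}
	...
}

This way every leaf entry under the anchor gets a level-hinted TLBI before
its reference is dropped, and the _post handler only has to free the table
pages.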

Thanks.


Yanan

>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 0271b4a3b9fe..12526d8c7ae4 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -493,7 +493,7 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
>  		return 0;
>  
>  	kvm_set_invalid_pte(ptep);
> -	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, 0);
> +	/* TLB invalidation is deferred until the _post handler */
>  	data->anchor = ptep;
>  	return 0;
>  }
> @@ -547,11 +547,21 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
>  				      struct stage2_map_data *data)
>  {
>  	int ret = 0;
> +	kvm_pte_t pte = *ptep;
>  
>  	if (!data->anchor)
>  		return 0;
>  
> -	free_page((unsigned long)kvm_pte_follow(*ptep));
> +	kvm_set_invalid_pte(ptep);
> +
> +	/*
> +	 * Invalidate the whole stage-2, as we may have numerous leaf
> +	 * entries below us which would otherwise need invalidating
> +	 * individually.
> +	 */
> +	kvm_call_hyp(__kvm_tlb_flush_vmid, data->mmu);
> +
> +	free_page((unsigned long)kvm_pte_follow(pte));
>  	put_page(virt_to_page(ptep));
>  
>  	if (data->anchor == ptep) {
> .
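
To illustrate the ordering concern with a concrete example (4K granule,
merging 512 page mappings back into one 2M block; the numbers are just for
illustration), the walker's callback sequence is roughly:

	stage2_map_walk_table_pre()   -> level-2 table entry: anchor set,
	                                 entry made invalid
	stage2_map_walk_leaf()        -> level-3 entry #0   (addr/level in hand)
	...
	stage2_map_walk_leaf()        -> level-3 entry #511 (addr/level in hand)
	stage2_map_walk_table_post()  -> level-2 table entry again: *ptep is
	                                 the table entry, the per-page
	                                 addresses and levels are gone

So the leaf handler is the last point where we still know the address and
level of each page mapping to pass to __kvm_tlb_flush_vmid_ipa.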
