Subject: Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when mapping memory
From: Julien Grall <julien.grall@arm.com>
Hi,

It has been reviewed by Boris, but I don't see the patch queued. Would it
be possible to queue it for 4.12?

Cheers,

On 01/06/17 21:41, Boris Ostrovsky wrote:
> On 06/01/2017 11:38 AM, Julien Grall wrote:
>> Hi Boris,
>>
>> On 01/06/17 16:16, Boris Ostrovsky wrote:
>>> On 06/01/2017 10:01 AM, Julien Grall wrote:
>>>> Hi Boris,
>>>>
>>>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>>>> Hi Boris,
>>>>>>
>>>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity" did not go far enough to support 64KB in mmap_batch_fn.
>>>>>>>>
>>>>>>>> The variable 'nr' is the number of 4KB chunks to map. However, when
>>>>>>>> Linux is using 64KB page granularity the array of pages
>>>>>>>> (vma->vm_private_data) contains one page per 64KB. Fix it by
>>>>>>>> incrementing st->index correctly.
>>>>>>>>
>>>>>>>> Furthermore, st->va is not correctly incremented as PAGE_SIZE !=
>>>>>>>> XEN_PAGE_SIZE.
>>>>>>>>
>>>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page granularity")
>>>>>>>> CC: stable@vger.kernel.org
>>>>>>>> Reported-by: Feng Kan <fkan@apm.com>
>>>>>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>>>>>> ---
>>>>>>>> drivers/xen/privcmd.c | 4 ++--
>>>>>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>>>> --- a/drivers/xen/privcmd.c
>>>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>>>>>>>>  				st->global_error = 1;
>>>>>>>>  		}
>>>>>>>>  	}
>>>>>>>> -	st->va += PAGE_SIZE * nr;
>>>>>>>> -	st->index += nr;
>>>>>>>> +	st->va += XEN_PAGE_SIZE * nr;
>>>>>>>> +	st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>>>
>>>>>>>>  	return 0;
>>>>>>>>  }
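
(A quick way to see the arithmetic the fix relies on: with 64KB Linux pages
and 4KB Xen pages there are 16 Xen-sized chunks per Linux page, so st->va
must advance in 4KB units while st->index advances in whole 64KB pages. The
standalone sketch below uses assumed constant values mirroring that
configuration; the struct and values are illustrative, not the kernel's
definitions.)

  #include <assert.h>
  #include <stdint.h>

  #define PAGE_SIZE        65536UL                      /* Linux page: 64KB */
  #define XEN_PAGE_SIZE     4096UL                      /* Xen page: 4KB */
  #define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)  /* 16 chunks per page */

  struct batch_state_sketch {
          uint64_t      va;     /* next virtual address to map */
          unsigned long index;  /* index into the array of 64KB pages */
  };

  int main(void)
  {
          struct batch_state_sketch st = { .va = 0x40000000UL, .index = 0 };
          int nr = 32;  /* this batch maps 32 4KB chunks */

          /* The fixed increments: 4KB per chunk, one page per 16 chunks. */
          st.va += XEN_PAGE_SIZE * nr;        /* 32 * 4KB = 128KB */
          st.index += nr / XEN_PFN_PER_PAGE;  /* 32 / 16  = 2 pages */

          assert(st.va == 0x40000000UL + 32 * 4096);
          assert(st.index == 2);
          return 0;
  }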
>>>>>>>
>>>>>>>
>>>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>>>
>>>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If so,
>>>>>> there is no use of PAGE_MASK, as the code has been converted to
>>>>>> support 64KB page granularity.
>>>>>>
>>>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>>>> don't really care, as x86 only uses 4KB page granularity.
>>>>>
>>>>>
>>>>> I meant right above the change that you made. Should it also be
>>>>> replaced with XEN_PAGE_MASK? (Sorry for being unclear.)
>>>>
>>>> Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be
>>>> page aligned. So I think we want to keep PAGE_MASK here.
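
(To illustrate why the coarser mask is safe here: st->va starts out aligned
to a 64KB Linux page and, per the discussion further down, intermediate
batches only ever advance it by whole Linux pages, so masking with PAGE_MASK
never discards a real offset. The constants and values below are assumptions
for a 64KB/4KB configuration, not the kernel's headers.)

  #include <assert.h>
  #include <stdint.h>

  #define PAGE_SIZE      65536UL
  #define XEN_PAGE_SIZE   4096UL
  #define PAGE_MASK      (~(PAGE_SIZE - 1))
  #define XEN_PAGE_MASK  (~(XEN_PAGE_SIZE - 1))

  int main(void)
  {
          uint64_t va = 0x40000000UL;  /* 64KB-aligned mapping start */

          /* Both masks leave a page-aligned address untouched; PAGE_MASK is
           * simply the stronger statement of the alignment being relied on. */
          assert((va & PAGE_MASK) == va);
          assert((va & XEN_PAGE_MASK) == va);
          return 0;
  }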
>>>
>>> Does this imply then that 'nr' 4K pages is an integral number of PAGE_SIZEs
>>> (i.e. (nr*XEN_PAGE_SIZE) % PAGE_SIZE == 0), and if yes, do we test
>>> this somewhere? I don't see it.
>>
>
> I now see that this should (obviously) stay as PAGE_MASK, so
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>
> but
>
>> nr might be smaller for the last batch. But all the intermediate batches
>> should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).
>
> how can we have nr not covering full PAGE_SIZEs? If you are using 64K
> pages, how can you map, say, only 4K (if nr==1)?
>
> -boris
>
>>
>> I think the BUILD_BUG_ON in privcmd_ioctl_mmap_batch ensures that all
>> the intermediate batches will always be an integral number of PAGE_SIZEs.
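
(A minimal sketch of the invariant referred to here, under the assumption
that the batching code hands mmap_batch_fn one Linux page worth of xen_pfn_t
entries at a time; the exact BUILD_BUG_ON condition in
privcmd_ioctl_mmap_batch may be phrased differently. With 64KB pages and an
8-byte xen_pfn_t that is 8192 entries per intermediate batch, a multiple of
the 16 chunks per page, so nr / XEN_PFN_PER_PAGE never truncates.)

  #include <assert.h>
  #include <stdint.h>

  #define PAGE_SIZE        65536UL
  #define XEN_PAGE_SIZE     4096UL
  #define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)

  typedef uint64_t xen_pfn_t;  /* assumed 8 bytes, as on 64-bit builds */

  int main(void)
  {
          /* Size of an intermediate batch: one Linux page of gfn entries. */
          unsigned long nr = PAGE_SIZE / sizeof(xen_pfn_t);  /* 8192 */

          /* The invariant discussed above: intermediate batches cover an
           * integral number of Linux pages, so st->index stays exact. */
          assert((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0);
          assert(nr % XEN_PFN_PER_PAGE == 0);
          return 0;
  }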
>>
>> Cheers,
>>
>

--
Julien Grall
