Subject  Re: [PATCH 08/18] xen/pvcalls: implement connect command
From     Boris Ostrovsky
Date     2017-05-18
On 05/18/2017 03:10 PM, Stefano Stabellini wrote:
> On Tue, 16 May 2017, Boris Ostrovsky wrote:
>>>>> + ret = xenbus_map_ring_valloc(dev, &req->u.connect.ref, 1, &page);
>>>>> + if (ret < 0) {
>>>>> + sock_release(map->sock);
>>>>> + kfree(map);
>>>>> + goto out;
>>>>> + }
>>>>> + map->ring = page;
>>>>> + map->ring_order = map->ring->ring_order;
>>>>> + /* first read the order, then map the data ring */
>>>>> + virt_rmb();
>>>> Not sure I understand what the barrier is for here. I don't think the
>>>> compiler will reorder ring_order access with the call.
>>> It's to avoid using the live version of ring_order to map the data ring
>>> pages (the other end could be changing that value at any time). We want
>>> to be sure that the compiler doesn't optimize out map->ring_order and
>>> use map->ring->ring_order instead.
>> Wouldn't WRITE_ONCE(map->ring_order, map->ring->ring_order) be the right
>> primitive then?
> It doesn't have to be atomic, because right after the assignment we
> check if map->ring_order is an appropriate value (see below).

WRITE_ONCE() is not about atomicity, it's about not letting the compiler
get too aggressive.
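
For example, something along these lines would make the single read of the
shared field explicit (just a sketch reusing the names from the quoted hunk,
not a tested replacement):

	/*
	 * Snapshot the guest-writable field exactly once; the compiler
	 * may not re-read map->ring->ring_order behind our back.
	 */
	WRITE_ONCE(map->ring_order, READ_ONCE(map->ring->ring_order));

	/* Order the snapshot against mapping the data ring below. */
	virt_rmb();

	if (map->ring_order > MAX_RING_ORDER) {
		ret = -EFAULT;
		goto out;
	}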

>
>
>> And also: if the other side changes ring size, what are we mapping then?
>> It's obsolete by now.
> If the grants are wrong, the mapping hypercalls will fail, the same way
> they do with any of the other PV frontends/backends today. That is not
> the problem we are trying to address with the barrier.
>
> The issue here is that, by changing map->ring->ring_order at runtime,
> the frontend could mount a denial of service by getting the backend into
> a busy loop. You can imagine that:
>
> for (i = 0; i < map->ring->ring_order; i++) {
>
> might not work as the backend expects if map->ring->ring_order can
> change at any time.
>
> One could say that the code is already written this way:
>
> for (i = 0; i < map->ring_order; i++) {
>
> So what's the problem? We have seen instances in the past of the
> compiler "optimizing" things so that the generated assembly actually did:
>
> for (i = 0; i < map->ring->ring_order; i++) {
>
> This is why I put a barrier there, to avoid such compiler
> "optimizations". Does it make sense?

Right, I understand all this. I thought you meant that changing
ring_order was part of normal operation (i.e. somewhat expected) and I
couldn't see how that would work.
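
In other words, the property the backend relies on is roughly this (a sketch
only, not code from the series; the helper and the names around the loop are
made up):

	static int map_data_ring(struct sock_mapping *map)
	{
		uint32_t order = READ_ONCE(map->ring->ring_order);
		int i;

		/* Validate before trusting the value as a loop bound. */
		if (order > MAX_RING_ORDER)
			return -EFAULT;

		/*
		 * 'order' is a private copy: even if the frontend keeps
		 * rewriting ring->ring_order, this loop still terminates.
		 */
		for (i = 0; i < (1 << order); i++)
			map_data_page(map, i);	/* hypothetical */

		return 0;
	}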

Thanks for taking time to write this down.

-boris

>
>
>>>>> + if (map->ring_order > MAX_RING_ORDER) {
>>>>> + ret = -EFAULT;
>>>>> + goto out;
>>>>> + }
>>>> If the barrier is indeed needed this check belongs before it.
>>> I don't think so, see above.
>>>
>>>
>>>>> + ret = xenbus_map_ring_valloc(dev, map->ring->ref,
>>>>> + (1 << map->ring_order), &page);
>>>>> + if (ret < 0) {
>>>>> + sock_release(map->sock);
>>>>> + xenbus_unmap_ring_vfree(dev, map->ring);
>>>>> + kfree(map);
>>>>> + goto out;
>>>>> + }
>>>>> + map->bytes = page;
>>>>>
