Subject: Re: [PATCH 4.19 21/32] vhost_net: fix possible infinite loop

On 2019/8/4 5:49 AM, Pavel Machek wrote:
> Hi!
>
>> This makes it possible to trigger an infinite while..continue loop
>> through the co-operation of two VMs like:
>>
>> 1) Malicious VM1 allocates a 1 byte rx buffer and tries to slow down the
>> vhost process as much as possible, e.g. by using indirect descriptors or
>> other means.
>> 2) Malicious VM2 generates packets to VM1 as fast as possible
>>
>> Fix this by checking against the weight at the end of the RX and TX
>> loops. This also eliminates other similar cases when:
>>
>> - userspace is consuming the packets in the meantime
>> - a theoretical TOCTOU attack where the guest moves the avail index back
>> and forth to hit the continue after vhost finds the guest just added new
>> buffers
>>
>> This addresses CVE-2019-3900.
>>
>> @@ -551,7 +551,7 @@ static void handle_tx_copy(struct vhost_
>> int err;
>> int sent_pkts = 0;
>>
>> - for (;;) {
>> + do {
>> bool busyloop_intr = false;
>>
>> head = get_tx_bufs(net, nvq, &msg, &out, &in, &len,
>> @@ -592,9 +592,7 @@ static void handle_tx_copy(struct vhost_
>> err, len);
>> if (++nvq->done_idx >= VHOST_NET_BATCH)
>> vhost_net_signal_used(nvq);
>> - if (vhost_exceeds_weight(vq, ++sent_pkts, total_len))
>> - break;
>> - }
>> + } while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
>>
>> vhost_net_signal_used(nvq);
>> }
> So this part does not really change anything, right?


No. If you look at the loop body, you can see that we used to use "continue"
inside the loop, which can bypass the weight check at the bottom of the old
for (;;) loop (see the standalone sketch after the snippet below):


        head = get_tx_bufs(net, nvq, &msg, &out, &in, &len,
                   &busyloop_intr);
        /* On error, stop handling until the next kick. */
        if (unlikely(head < 0))
            break;
        /* Nothing new?  Wait for eventfd to tell us they refilled. */
        if (head == vq->num) {
            if (unlikely(busyloop_intr)) {
                vhost_poll_queue(&vq->poll);
            } else if (unlikely(vhost_enable_notify(&net->dev,
                                vq))) {
                vhost_disable_notify(&net->dev, vq);
                continue;
            }
            break;
        }

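To see why the loop shape matters, here is a rough standalone sketch (plain
userspace C, not vhost code; exceeds_weight() and guest_refilled() are
hypothetical stand-ins for vhost_exceeds_weight() and the "guest refilled,
disable notification and continue" path). With for (;;), "continue" jumps
straight back to the top and never reaches a weight check placed at the
bottom of the body, so a guest that keeps refilling can keep us spinning.
With do { ... } while (!exceeds_weight(...)), "continue" still evaluates the
loop condition, so the number of iterations is bounded by the weight no
matter what the guest does:

/* build: cc -o weight_demo weight_demo.c && ./weight_demo */
#include <stdio.h>
#include <stdbool.h>

#define WEIGHT  8       /* stand-in for the vhost weight limit */

/* Hypothetical stand-in for vhost_exceeds_weight(). */
static bool exceeds_weight(int pkts)
{
        return pkts >= WEIGHT;
}

/* Simulates a guest that keeps adding buffers: true for the first 1000 polls. */
static bool guest_refilled(int *polls)
{
        return (*polls)++ < 1000;
}

/* Old shape: for (;;) with continue before the bottom weight check. */
static int old_loop(void)
{
        int sent_pkts = 0, polls = 0, iterations = 0;

        for (;;) {
                iterations++;
                if (guest_refilled(&polls))
                        continue;       /* bypasses the check below */
                if (exceeds_weight(++sent_pkts))
                        break;
        }
        return iterations;      /* ~1000 here, unbounded with a real guest */
}

/* New shape: do/while with the weight check in the loop condition. */
static int new_loop(void)
{
        int sent_pkts = 0, polls = 0, iterations = 0;

        do {
                iterations++;
                if (guest_refilled(&polls))
                        continue;       /* still reaches the while() condition */
        } while (!exceeds_weight(++sent_pkts));
        return iterations;      /* at most WEIGHT, whatever the guest does */
}

int main(void)
{
        printf("for (;;) shape:  %d iterations\n", old_loop());
        printf("do/while shape:  %d iterations\n", new_loop());
        return 0;
}

Note that the likely() around the !vhost_exceeds_weight() condition in the
patch only hints to the compiler that the common case is staying under the
weight; it does not change the bounding behaviour shown above.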

>
>> @@ -618,7 +616,7 @@ static void handle_tx_zerocopy(struct vh
>> bool zcopy_used;
>> int sent_pkts = 0;
>>
>> - for (;;) {
>> + do {
>> bool busyloop_intr;
>>
>> /* Release DMAs done buffers first */
>> @@ -693,10 +691,7 @@ static void handle_tx_zerocopy(struct vh
>> else
>> vhost_zerocopy_signal_used(net, vq);
>> vhost_net_tx_packet(net);
>> - if (unlikely(vhost_exceeds_weight(vq, ++sent_pkts,
>> - total_len)))
>> - break;
>> - }
>> + } while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
>> }
>>
>> /* Expects to be always run from workqueue - which acts as
> Neither does this. Equivalent code. Changelog says it fixes something
> for the transmit so... is that intentional?
>
> Pavel


Same as above, so yes, it's intentional.

Thanks
