Subject: Re: [RFC PATCH 00/17] virtual-bus
Avi Kivity wrote:
> Gregory Haskins wrote:
>> Avi Kivity wrote:
>>
>>> Gregory Haskins wrote:
>>>
>>>> Avi Kivity wrote:
>>>>
>>>>
>>>>> My 'prohibitively expensive' is true only if you exit every packet.
>>>>>
>>>>>
>>>>>
>>>> Understood, but yet you need to do this if you want something like
>>>> iSCSI
>>>> READ transactions to have as low-latency as possible.
>>>>
>>> Dunno, two microseconds is too much? The wire imposes much more.
>>>
>>>
>>
>> No, but that's not what we are talking about. You said signaling on
>> every packet is prohibitively expensive. I am saying signaling on every
>> packet is required for decent latency. So is it prohibitively expensive
>> or not?
>>
>
> We're heading dangerously into the word-game area. Let's not do that.
>
> If you have a high throughput workload with many packets per second
> then an exit per packet (whether to userspace or to the kernel) is
> expensive. So you do exit mitigation. Latency is not important since
> the packets are going to sit in the output queue anyway.

Agreed. virtio-net currently does this with batching. I do it with the
bidir NAPI approach (which effectively crosses the producer::consumer > 1
threshold to mitigate the signal path).
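
To make that concrete, here is a rough sketch of the mitigation idea.
The struct and helper names below are hypothetical and for illustration
only (this is not the actual virtio-net or venet code, and memory
barriers are omitted): the consumer drains the ring with notifications
disabled, so the producer only pays for a signal on the empty-to-non-empty
transition.

/* Hypothetical ring; illustration only, not the real virtio/venet code. */
struct my_ring {
	unsigned int head;            /* producer index */
	unsigned int tail;            /* consumer index */
	int notify_enabled;           /* does the consumer want a signal? */
};

/* Hypothetical helpers assumed to exist elsewhere. */
void enqueue_pkt(struct my_ring *r, void *pkt);
void *dequeue_pkt(struct my_ring *r);
void process_pkt(void *pkt);
void ring_notify(struct my_ring *r);   /* the expensive exit/interrupt */

/* Producer: only take the signal path on the empty->non-empty edge. */
static void ring_produce(struct my_ring *r, void *pkt)
{
	int was_empty = (r->head == r->tail);

	enqueue_pkt(r, pkt);

	if (was_empty && r->notify_enabled)
		ring_notify(r);
}

/* Consumer: drain with notifications off, re-arm only when empty. */
static void ring_consume(struct my_ring *r)
{
	for (;;) {
		r->notify_enabled = 0;
		while (r->head != r->tail)
			process_pkt(dequeue_pkt(r));

		r->notify_enabled = 1;
		/* re-check to close the race with a concurrent producer */
		if (r->head == r->tail)
			break;
	}
}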


>
> If you have a request-response workload with the wire idle and latency
> critical, then there's no problem having an exit per packet because
> (a) there aren't that many packets and (b) the guest isn't doing any
> batching, so guest overhead will swamp the hypervisor overhead.
Right, so the trick is to use an algorithm that adapts here. Batching
solves the first case, but not the second. The bidir NAPI approach solves
both, but it does assume you have ample host processing power to run the
algorithm concurrently. This may or may not be suitable for all
applications, I admit.
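
For contrast, plain producer-side batching looks roughly like this
(again hypothetical, reusing the my_ring sketch above). It mitigates
signals nicely under load, but a lone request/response packet has to
wait for the batch threshold or a flush timer, which is exactly the
second case:

#define BATCH 64
static unsigned int pending;   /* packets queued since the last signal */

/* Hypothetical batched producer: good for throughput, bad for a lone
 * latency-critical packet. */
static void ring_produce_batched(struct my_ring *r, void *pkt)
{
	enqueue_pkt(r, pkt);

	if (++pending >= BATCH) {
		pending = 0;
		ring_notify(r);
	}
	/* otherwise we wait for a flush timer to fire; that wait is the
	 * latency penalty in the request/response case */
}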

>
> If you have a low latency request-response workload mixed with a high
> throughput workload, then you aren't going to get low latency since
> your low latency packets will sit on the queue behind the high
> throughput packets. You can fix that with multiqueue and then you're
> back to one of the scenarios above.
Agreed, and that's ok. Now we are getting more into 802.1p-type MQ
issues anyway, if the application cares about it that much.
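
As a rough illustration of the multiqueue point (the queue count and
the mapping below are hypothetical): give each 802.1p priority band its
own tx queue, so a latency-critical packet never sits behind bulk
traffic in the same queue.

#define NUM_TX_QUEUES 4

/* Hypothetical 802.1p -> queue mapping: priorities 0-1 share queue 0,
 * 2-3 queue 1, 4-5 queue 2, 6-7 queue 3. */
static unsigned int select_txq(unsigned int prio_802_1p)
{
	return prio_802_1p / 2;
}

/* e.g. a voice-class packet (priority 6) lands on queue 3 while bulk
 * best-effort traffic (priority 0) stays on queue 0. */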

>
>> I think most would agree that adding 2us is not bad, but so far that is
>> an unproven theory that the IO path in question only adds 2us. And we
>> are not just looking at the rate at which we can enter and exit the
>> guest...we need the whole path...from the PIO kick to the dev_xmit() on
>> the egress hardware, to the ingress and rx-injection. This includes any
>> and all penalties associated with the path, even if they are imposed by
>> something like the design of tun-tap.
>>
>
> Correct, we need to look at the whole path. That's why the wishing
> well is clogged with my 'give me a better userspace interface' emails.
>
>> Right now it's way, way worse than 2us. In fact, at my last reading
>> this was more like 3060us (3125-65). So shorten that 3125 to 67 (while
>> maintaining line-rate) and I will be impressed. Heck, shorten it to
>> 80us and I will be impressed.
>>
>
> The 3060us thing is a timer, not cpu time.
Agreed, but it's still "state of the art" from an observer's perspective.
The reason why, though easily explainable, is inconsequential to most
people. FWIW, I have seen virtio-net do a much more respectable 350us
on an older version, so I know there is plenty of room for improvement.

> We aren't starting a JVM for each packet.
Heh...it kind of feels like that right now, so hopefully some
improvement will at least be the one thing that comes out of all this.

-Greg
