Subject: Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport
On 10/28/2010 02:50 PM, Ian Molton wrote:
> On 28/10/10 15:43, Anthony Liguori wrote:
>> On 10/28/2010 09:24 AM, Avi Kivity wrote:
>>> On 10/28/2010 01:54 PM, Ian Molton wrote:
>>
>>>> True, but then all that would prove is that I can write a spec to
>>>> match the code.
>>>
>>> It would also allow us to check that the spec matches the
>>> requirements. Those two steps are easier than checking that the code
>>> matches the requirements.
>
> There was no formal spec for this. The code was written to replace
> nasty undefined-instruction based data transport hacks in the (already
> existing) GL passthrough code.
>
>> I'm extremely sceptical of any GL passthrough proposal. There have
>> literally been half a dozen over the years and they never seem to leave
>> the proof-of-concept phase. My (limited) understanding is that it's a
>> fundamentally hard problem that no one has adequately solved yet.
>
> The code in this case was presented as a patch to qemu nearly 3
> years ago. I've taken the patches and refactored them to use virtio
> rather than an undefined instruction (which fails under KVM, unlike my
> approach).
>
> It's in use testing MeeGo images and seems to be fairly reliable. It
> can handle compositing window managers, games, video, etc. We're
> currently supporting OpenGL 1.4, including shaders.
>
>> A spec matters an awful lot less than an explanation of how the
>> problem is being solved in a robust fashion such that it can be reviewed
>> by people with a deeper understanding of the problem space.
>
> I'm not sure there is a way to prevent nefarious tasks from upsetting
> the host's OpenGL with carefully crafted strings of commands, short of
> inspecting every single command, which is insane.
>
> Really this needs to be done at a lower level by presenting a virtual
> GPU to the guest OS, but I am not in a position to code that right now.
>
> The code as it is is useful, and can always be superseded by a
> virtual GPU implementation in future.
>
> At least this breaks the chicken-and-egg cycle of people wanting GL
> support on virtual machines, but not writing stuff to take advantage
> of it because the support isn't there. It's also a neatly encapsulated
> solution - if you don't want people to have access to the passthrough,
> simply tell qemu not to present the virtio-gl device to the guest, via
> qemu's existing command-line options.
>
> If this code were invasive to qemu's core, I'd say 'no way', but it's
> just not. And as the GL device is versioned, we can keep using it even
> if the passthrough is replaced by a virtual GPU.

The virtio-gl implementation is basically duplicating virtio-serial. It
looks like it creates a totally separate window for the GL session.
In its current form, is there really any advantage to having the code in
QEMU? It could just as easily live outside of QEMU.

Regards,

Anthony Liguori
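
To make the virtio-serial point above concrete, here is a minimal sketch
of what the guest side of such a passthrough could look like if it rode
on an existing virtio-serial port rather than a dedicated virtio-gl
device. The port name "org.example.glpass", the socket path, and the
length-prefixed framing are illustrative assumptions, not details taken
from the patch under discussion.

/*
 * Host side, using QEMU's existing options (2010-era syntax):
 *   qemu ... -device virtio-serial-pci \
 *            -chardev socket,path=/tmp/glpass.sock,server,nowait,id=glchan \
 *            -device virtserialport,chardev=glchan,name=org.example.glpass
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* virtio-serial ports appear in the guest as /dev/virtio-ports/<name> */
    int fd = open("/dev/virtio-ports/org.example.glpass", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Illustrative framing: a length prefix followed by a blob of
     * serialized GL commands (here just a stand-in string). */
    const char payload[] = "serialized GL command stream";
    uint32_t len = sizeof(payload);

    if (write(fd, &len, sizeof(len)) != sizeof(len) ||
        write(fd, payload, len) != (ssize_t)len) {
        perror("write");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}

The host-side helper that drains /tmp/glpass.sock and replays the GL
commands would then live entirely outside QEMU, which is the trade-off
being weighed above.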
