Subject: Re: [PATCH 08/11] KVM: implement multiple address spaces
2015-05-20 09:07+0200, Paolo Bonzini:
> On 19/05/2015 20:28, Radim Krčmář wrote:
>>> The regular and SMM address spaces are not hierarchical. As soon as you
>>> put a PCI resource underneath SMRAM---which is exactly what happens for
>>> legacy VRAM at 0xa0000---they can be completely different. Note that
>>> QEMU can map legacy VRAM as a KVM memslot when using the VGA 320x200x256
>>> color mode (this mapping is not correct from the VGA point of view, but
>>> it cannot be changed in QEMU without breaking migration).
>>
>> How is a PCI resource under SMRAM accessed?
>> I thought that outside SMM, PCI resource under SMRAM is working
>> normally, but it will be overshadowed, and made inaccessible, in SMM.
>
> Yes, it is. (There is some chipset magic to make instruction fetches
> retrieve SMRAM and data fetches retrieve PCI resources. I guess you
> could use execute-only EPT permissions, but needless to say, we don't care).

Interesting, so that part of SMRAM is going to be useless for SMM data?
(Even worse, SMM will read and write to the PCI resource?)

>> I'm not sure if we mean the same hierarchy. I meant hierarchy in the
>> sense that one address space is considered before the other.
>> (Maybe layers would be a better word.)
>> SMM address space could have just one slot and be above regular, we'd
>> then decide how to handle overlapping.
>
> Ah, now I understand. That would be doable.
>
> But as they say, "All programming is an exercise in caching." In this
> case, the caching is done by userspace.

(It wouldn't be caching if we wanted a different result ;])
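
To make the layering concrete, I had roughly this in mind.  A minimal
sketch: kvm_memslots_layer(), SMM_LAYER and REGULAR_LAYER are made-up
names, only __gfn_to_memslot() is the existing helper:

  /* Try the SMM layer first and fall back to the regular slots on a
   * miss; non-SMM lookups never see the SMM layer at all. */
  static struct kvm_memory_slot *layered_gfn_to_memslot(struct kvm *kvm,
                                                        gfn_t gfn, bool smm)
  {
          struct kvm_memory_slot *slot = NULL;

          if (smm)
                  slot = __gfn_to_memslot(kvm_memslots_layer(kvm, SMM_LAYER),
                                          gfn);
          if (!slot)
                  slot = __gfn_to_memslot(kvm_memslots_layer(kvm, REGULAR_LAYER),
                                          gfn);
          return slot;
  }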

> QEMU implements the SMM address space exactly by overlaying SMRAM over
> normal memory:
| [...]
> The caching consists simply in resolving the overlaps beforehand, thus
> giving KVM the complete address space.
>
> Since slots do not change often, the simpler code is not worth the
> potentially more expensive KVM_SET_USER_MEMORY_REGION (it _is_ more
> expensive, if only because it has to be called twice per slot change).
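
Right, so the userspace side of that pre-resolution amounts to roughly
the following.  A simplified sketch, not QEMU's actual code, and the
as_id-in-the-high-16-bits slot encoding is my reading of this series:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static void set_slot(int vm_fd, int as_id, int id,
                       __u64 gpa, __u64 size, __u64 hva)
  {
          struct kvm_userspace_memory_region mr = {
                  .slot            = (as_id << 16) | id,
                  .guest_phys_addr = gpa,
                  .memory_size     = size,
                  .userspace_addr  = hva,
          };

          ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mr);
  }

  /* The 0xa0000 overlap is resolved before KVM ever sees it: the
   * regular address space (0) gets legacy VRAM, the SMM one (1) gets
   * SMRAM; both cover the 128K legacy window. */
  static void map_legacy_vram(int vm_fd, __u64 vram_hva, __u64 smram_hva)
  {
          set_slot(vm_fd, 0, 1, 0xa0000, 0x20000, vram_hva);
          set_slot(vm_fd, 1, 1, 0xa0000, 0x20000, smram_hva);
  }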

I am a bit worried about the explosion that would happen if we wanted,
for example, per-VCPU address spaces; SMM would double their number.

My main issue (orthogonal to layering) is that we don't provide a way
for userspace to tell us that some slots in different address spaces
are the same slot.  We're losing information that could be useful in
the future (right now I can only think of fewer dirty-log queries).
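
For example, with this interface the dirty log has to be fetched once
per address space even when both slots alias the same backing memory.
A sketch, assuming the same slot-id encoding and headers as above;
vm_fd, slot_id and bitmap are set up elsewhere:

  static void fetch_dirty_log_both(int vm_fd, int slot_id, void *bitmap)
  {
          struct kvm_dirty_log log = { .dirty_bitmap = bitmap };
          int as_id;

          for (as_id = 0; as_id < 2; as_id++) {
                  /* Same backing memory, but KVM doesn't know the two
                   * slots alias, so each address space is queried
                   * separately. */
                  log.slot = (as_id << 16) | slot_id;
                  ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
          }
  }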

What I like about your solution is that it fits the existing code
really well, is easily modified if needs change, and already exists.
All my ideas would require more code in the kernel, which really
doesn't seem worth the benefit to the SMM use case ...

I'm ok with this approach, thanks.

