Subject: Re: [PATCH 1/1] drm/ttm: Set memory as decrypted for ttm framebuffer mappings
From: Christian König
Date: 2018-08-24
On 24.08.2018 at 01:05, Jiandi An wrote:
>
>
> On 08/23/2018 01:47 AM, Christian König wrote:
>> On 22.08.2018 at 22:57, Jiandi An wrote:
>>>
>>> On 08/22/2018 02:09 PM, Christian König wrote:
>>>> On 22.08.2018 at 20:57, Jiandi An wrote:
>>>>> Framebuffer memory needs to be accessed decrypted.  Ensure the
>>>>> memory encryption mask is not set for the ttm framebuffer mappings.
>>>> NAK, the memory needs to be decrypted not only while the CPU
>>>> accesses it, but all the time.
>>>>
>>>> ttm_page_alloc.c and ttm_page_alloc_dma.c should already take care
>>>> of that while mapping the pages.
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>> The path where the issue comes in is booting a guest with AMD SEV
>>> enabled while using the virtio-vga device.
>>> The ttm->pages don't go through ttm_populate_and_map_pages().
>>
>> And that is the point where you need to jump in and fix the problem.
>>
>> When a driver implements the populate() and unpopulate() callbacks
>> itself, it needs to take care of all that handling.
> Thanks for the suggestion and guidance.  So you want me to register
> callbacks, something like virtio_gpu_ttm_populate() and
> virtio_gpu_ttm_unpopulate(), in virtio_gpu_bo_driver,

Yes, that is one possibility. The alternative is to hack that directly
into the ttm_tt_populate() and ttm_tt_unpopulate() functions.
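
Completely untested sketch of what I mean for the fallback path,
ignoring the driver callback dispatch (sev_active() as in your patch,
and assuming all of ttm->pages come from lowmem so that page_address()
is valid):

int ttm_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
{
        unsigned long i;
        int ret;

        if (ttm->state != tt_unpopulated)
                return 0;

        ret = ttm_pool_populate(ttm, ctx);

        /* Decrypt the direct mapping of each freshly allocated page. */
        if (!ret && sev_active())
                for (i = 0; i < ttm->num_pages; ++i)
                        set_memory_decrypted((unsigned long)
                                        page_address(ttm->pages[i]), 1);

        return ret;
}

void ttm_tt_unpopulate(struct ttm_tt *ttm)
{
        unsigned long i;

        if (ttm->state == tt_unpopulated)
                return;

        /* Re-encrypt before the pages go back to the page allocator. */
        if (sev_active())
                for (i = 0; i < ttm->num_pages; ++i)
                        set_memory_encrypted((unsigned long)
                                        page_address(ttm->pages[i]), 1);

        ttm_pool_unpopulate(ttm);
}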

> and perform mapping by calling ttm_bo_kmap()?  Then move setting
> memory decrypted/encrypted via
> set_memory_decrypted()/set_memory_encrypted() out of
> ttm_bo_kmap(), and do it within the virtio_gpu_ttm_populate() and
> virtio_gpu_ttm_unpopulate() callbacks?

No, don't call ttm_bo_kmap() here. I think I now understand where we
have a misunderstanding.

ttm_bo_kmap() just creates a vmap of the original pages to make them
look linear to kernel space.

What you need to do instead is to change the attributes of the original
page mapping, not the copy in the vmap.

In other words, you should call set_memory_decrypted() for each
individually allocated page.
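
I.e. roughly like this (untested), on the linear address of each page
rather than on the vmap address in map->virtual:

        unsigned long i;

        for (i = 0; i < ttm->num_pages; ++i)
                set_memory_decrypted((unsigned long)
                                page_address(ttm->pages[i]), 1);

with the matching set_memory_encrypted() loop again before the pages
are freed.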

>
> Then get rid of the separate call to virtio_gpu_vmap_fb() that the
> virtio_gpu driver does?
>
>>
>>>
>>> In the kernel path, the virtio_gpu driver calls
>>> virtio_gpu_alloc_object() and goes down to ttm to
>>> allocate pages in ttm_pool_populate().  The driver in the guest then
>>> does virtio_gpu_vmap_fb(), which goes
>>> down to ttm_bo_kmap_ttm() and does a vmap() of GVA to GPA for those
>>> pages.  If SEV is enabled,
>>> decryption should be set in this GVA to GPA mapping.
>>
>> That assumption is what is incorrect here.
>>
>> TTM can't work with encrypted pages, so you need to set the pages as
>> decrypted directly after calling ttm_pool_populate().
>>
>> And obviously set it to encrypted again before calling
>> ttm_pool_unpopulate().
>>
>> Probably best if we add that to ttm_tt_populate()/ttm_tt_unpopulate().
>
> Here when you say ttm_tt_populate() and ttm_tt_unpopulate(), you mean
> the virtio_gpu_ttm_populate() and virtio_gpu_ttm_unpopulate() that I
> register in virtio_gpu_bo_driver, right?

Either as driver-specific callbacks or directly in the fallback path of
ttm_tt_populate() and ttm_tt_unpopulate().
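
For the driver-specific variant it would look roughly like this (again
completely untested, using the callback names you proposed):

static int virtio_gpu_ttm_populate(struct ttm_tt *ttm,
                                   struct ttm_operation_ctx *ctx)
{
        unsigned long i;
        int ret;

        ret = ttm_pool_populate(ttm, ctx);
        if (ret)
                return ret;

        /* Make the pages decrypted before the host ever sees them. */
        if (sev_active())
                for (i = 0; i < ttm->num_pages; ++i)
                        set_memory_decrypted((unsigned long)
                                        page_address(ttm->pages[i]), 1);
        return 0;
}

static void virtio_gpu_ttm_unpopulate(struct ttm_tt *ttm)
{
        unsigned long i;

        /* Re-encrypt before returning the pages to the pool. */
        if (sev_active())
                for (i = 0; i < ttm->num_pages; ++i)
                        set_memory_encrypted((unsigned long)
                                        page_address(ttm->pages[i]), 1);

        ttm_pool_unpopulate(ttm);
}

and then hooked up in virtio_gpu_bo_driver:

        .ttm_tt_populate = virtio_gpu_ttm_populate,
        .ttm_tt_unpopulate = virtio_gpu_ttm_unpopulate,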

Regards,
Christian.

>
>>
>> Regards,
>> Christian.
>>
>>>    The guest then sends the object attach command to the host
>>> via virtio_gpu_object_attach(), which stuffs the pages' physical
>>> addresses (guest physical addresses)
>>> into a scatter list and sends them to the host QEMU.  The host QEMU
>>> maps those guest pages GPA to HPA and accesses them
>>> via HVA, while the guest writes to those pages via GVA.
>>>
>>> virtio_gpu_alloc_object()
>>>     virtio_gpu_object_create()
>>>           struct virtio_gpu_object bo kzalloc()
>>>           ttm_bo_init()
>>>           ...
>>>              ttm_tt_create()
>>>                 bo->ttm = bdev->driver->ttm_tt_create()
>>>                     virtio_gpu_ttm_tt_create()
>>>                     ...
>>>                        ttm_tt_populate()
>>>                           ttm_pool_populate()
>>>                              ttm_get_pages(ttm->pages, ttm->num_pages)
>>>
>>> virtio_gpu_vmap_fb()
>>>     virtio_gpu_object_kmap(obj, NULL)
>>>        ttm_bo_kmap
>>>           ttm_mem_io_reserve()
>>>              ttm_bo_kmap_ttm()
>>>                 vmap()
>>>              struct virtio_gpu_object bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
>>>
>>>
>>> virtio_gpu_object_attach() <<== Sending attach backing command
>>>     virtio_gpu_object_get_sg_table()
>>>        if (bo->tbo.ttm->state == tt_unpopulated)
>>>           virtio_gpu_object bo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
>>>              bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
>>>              sg_alloc_table_from_pages(bo->tbo.ttm->pages)  // Get from ttm->pages and put in sg
>>>                 __sg_alloc_table_from_pages()
>>>
>>>
>>> There is a separate userspace path, for example when the display
>>> manager kicks in:
>>> virtio_gpu_alloc_object() followed by virtio_gpu_object_attach() are
>>> called through
>>> the ioctl virtio_gpu_resource_create_ioctl().  The mapping of the
>>> pages created in this
>>> path is handled in the do_page_fault() path, in ttm_bo_vm_ops's
>>> ttm_bo_vm_fault() handler.
>>>
>>> do_page_fault()
>>>     __do_page_fault()
>>>        handle_mm_fault()
>>>           ttm_bo_vm_fault()
>>>              ttm_bo_reserve()
>>>                 __ttm_bo_reserve()
>>>
>>>              ttm_mem_io_reserve_vm()
>>>                 ttm_mem_io_reserve()
>>>                    bdev->driver->io_mem_reserve()
>>>                       virtio_gpu_ttm_io_mem_reserve()
>>>                          mem->bus.is_iomem = false
>>>                          mem->mem_type == TTM_PL_TT
>>>
>>>
>>> I originally handled setting the pages decrypted for the kernel
>>> path's GVA to GPA mapping in the guest
>>> in virtio_gpu_vmap_fb() at the virtio_gpu driver layer.  But for the
>>> userspace path
>>> virtio_gpu_vmap_fb() is not called.  The mapping is handled in the
>>> page-fault handler.  So I moved it
>>> to the ttm layer.
>>>
>>> What layer do you recommend as more appropriate for setting memory
>>> decrypted for the GVA to GPA
>>> mapping of those ttm->pages?
>>>
>>> Thanks
>>> Jiandi
>>>
>>>>> Signed-off-by: Jiandi An <jiandi.an@amd.com>
>>>>> ---
>>>>>    drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++++++++-
>>>>>    drivers/gpu/drm/ttm/ttm_bo_vm.c   |  6 ++++--
>>>>>    2 files changed, 15 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>>> index 046a6dda690a..b3f5d26f571e 100644
>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>>> @@ -29,6 +29,7 @@
>>>>>     * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
>>>>>     */
>>>>> +#include <asm/set_memory.h>
>>>>>    #include <drm/ttm/ttm_bo_driver.h>
>>>>>    #include <drm/ttm/ttm_placement.h>
>>>>>    #include <drm/drm_vma_manager.h>
>>>>> @@ -639,7 +640,11 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>>>>        if (ret)
>>>>>            return ret;
>>>>>        if (!bo->mem.bus.is_iomem) {
>>>>> -        return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>>>>> +        ret = ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
>>>>> +        if (!ret && sev_active())
>>>>> +            set_memory_decrypted((unsigned long) map->virtual,
>>>>> +                         num_pages);
>>>>> +        return ret;
>>>>>        } else {
>>>>>            offset = start_page << PAGE_SHIFT;
>>>>>            size = num_pages << PAGE_SHIFT;
>>>>> @@ -661,9 +666,14 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>>>>            iounmap(map->virtual);
>>>>>            break;
>>>>>        case ttm_bo_map_vmap:
>>>>> +        if (sev_active())
>>>>> +            set_memory_encrypted((unsigned long) map->virtual,
>>>>> +                         bo->num_pages);
>>>>>            vunmap(map->virtual);
>>>>>            break;
>>>>>        case ttm_bo_map_kmap:
>>>>> +        if (sev_active())
>>>>> +            set_memory_encrypted((unsigned long) map->virtual, 1);
>>>>>            kunmap(map->page);
>>>>>            break;
>>>>>        case ttm_bo_map_premapped:
>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>>> index 6fe91c1b692d..211d3549fd9f 100644
>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
>>>>> @@ -249,10 +249,12 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
>>>>>         * Speculatively prefault a number of pages. Only error on
>>>>>         * first page.
>>>>>         */
>>>>> +
>>>>> +    /* Mark framebuffer pages decrypted */
>>>>> +    cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>>>> +
>>>>>        for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
>>>>>            if (bo->mem.bus.is_iomem) {
>>>>> -            /* Iomem should not be marked encrypted */
>>>>> -            cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
>>>>>                pfn = ttm_bo_io_mem_pfn(bo, page_offset);
>>>>>            } else {
>>>>>                page = ttm->pages[page_offset];
>>
>
