 
Date: 2016-04-25
Subject: Re: [PATCHv7 00/29] THP-enabled tmpfs/shmem using compound pages
From: Andres Lagar-Cavilla <andreslc@google.com>
On Sat, Apr 23, 2016 at 10:46 PM, Wincy Van <fanwenyi0529@gmail.com> wrote:
> On Wed, Apr 20, 2016 at 1:07 AM, Andres Lagar-Cavilla
> <andreslc@google.com> wrote:
>> Andrea, we provide the, ahem, adjustments to
>> transparent_hugepage_adjust. Rest assured we aggressively use mmu
>> notifiers with no further changes required.
>>
>> As in: zero changes have been required in the lifetime (years) of
>> kvm+huge tmpfs at Google, other than mod'ing
>> transparent_hugepage_adjust.
>
> We are using kvm + tmpfs to do qemu live upgrading; how does Google
> use this memory model?
> I think our purpose in using tmpfs may be the same.

Nothing out of the ordinary. Guest memory is an mmap of a tmpfs fd.
Huge tmpfs naturally gives us a great guest performance boost.
MAP_SHARED, and having guest memory persist beyond any one given
process, are what drive us to use tmpfs.
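
For concreteness, a minimal sketch of the pattern (the mount point,
file name and size below are illustrative, not our actual setup):
guest RAM is a MAP_SHARED mapping of a file on a huge-page-enabled
tmpfs mount, so the pages live in the file rather than in any one
process, and a replacement qemu can open and map the same file.

/*
 * Sketch only (paths and sizes are made up): back guest RAM with a
 * file on a tmpfs mount that has huge pages enabled, mapped
 * MAP_SHARED so it survives the process that created it, e.g.
 * across a qemu live upgrade.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        const size_t guest_ram = 4UL << 30;     /* 4 GiB, for example */
        void *ram;
        int fd;

        fd = open("/mnt/hugetmpfs/guest-ram", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, guest_ram) < 0) {
                perror("guest-ram backing file");
                return 1;
        }

        /*
         * MAP_SHARED is the important part: the pages belong to the
         * tmpfs file, not to this process, so a new qemu can open and
         * mmap the same file and see the same guest memory.
         */
        ram = mmap(NULL, guest_ram, PROT_READ | PROT_WRITE, MAP_SHARED,
                   fd, 0);
        if (ram == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* ... register 'ram' with KVM via KVM_SET_USER_MEMORY_REGION ... */

        munmap(ram, guest_ram);
        close(fd);
        return 0;
}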

Andres
>
> And huge tmpfs is a really good improvement for that.
>
>>
>> As noted by Paolo, the additions to transparent_hugepage_adjust could
>> be lifted outside of kvm (into shmem.c? maybe) for any consumer of
>> huge tmpfs with mmu notifiers.
>>
>
> Thanks,
> Wincy
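
To make the transparent_hugepage_adjust() discussion above a bit more
concrete, here is a rough sketch of the kind of adjustment being
talked about. It is illustrative only -- not the in-tree function and
not our modification to it -- and it assumes the huge tmpfs pages show
up as ordinary compound pages: when the fault handler has resolved a
gfn to a pfn that sits inside a compound page, align down to the
huge-page boundary and raise the mapping level so the guest gets one
2MB mapping instead of 512 separate 4KB ones.

#include <linux/kvm_host.h>
#include <linux/mm.h>

/*
 * Sketch only; the real transparent_hugepage_adjust() in
 * arch/x86/kvm/mmu.c has more checks (memslot limits, mapcount,
 * assumptions about mmu_lock being held), which are omitted here.
 */
static void thp_adjust_sketch(gfn_t *gfnp, kvm_pfn_t *pfnp, int *levelp)
{
        kvm_pfn_t pfn = *pfnp;
        unsigned long mask;

        if (*levelp != PT_PAGE_TABLE_LEVEL ||           /* already huge */
            is_error_noslot_pfn(pfn) ||
            kvm_is_reserved_pfn(pfn) ||
            !PageTransCompound(pfn_to_page(pfn)))       /* not a huge page */
                return;

        *levelp = PT_DIRECTORY_LEVEL;                   /* 2MB level on x86 */
        mask = KVM_PAGES_PER_HPAGE(PT_DIRECTORY_LEVEL) - 1;

        /*
         * Re-point gfn and pfn at the head of the huge page, moving the
         * reference we hold from the tail page to the head page.
         */
        *gfnp &= ~(gfn_t)mask;
        if (pfn & mask) {
                kvm_release_pfn_clean(pfn);
                pfn &= ~mask;
                kvm_get_pfn(pfn);
                *pfnp = pfn;
        }
}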



--
Andres Lagar-Cavilla | Google Kernel Team | andreslc@google.com
