    Subject: Re: [RFC][Patch v9 2/6] KVM: Enables the kernel to isolate guest free pages
    On 13.03.19 12:54, Nitesh Narayan Lal wrote:
    >
    > On 3/12/19 5:13 PM, Alexander Duyck wrote:
    >> On Tue, Mar 12, 2019 at 12:46 PM Nitesh Narayan Lal <nitesh@redhat.com> wrote:
    >>> On 3/8/19 4:39 PM, Alexander Duyck wrote:
    >>>> On Fri, Mar 8, 2019 at 11:39 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote:
    >>>>> On 3/8/19 2:25 PM, Alexander Duyck wrote:
    >>>>>> On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote:
    >>>>>>> On 3/8/19 1:06 PM, Alexander Duyck wrote:
    >>>>>>>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote:
    >>>>>>>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote:
    >>>>>>>>>> The only other thing I still want to try and see if I can do is to add
    >>>>>>>>>> a jiffies value to the page private data in the case of the buddy
    >>>>>>>>>> pages.
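
    (For reference, a minimal sketch of the jiffies idea above: stamp a buddy
    page with the time it was freed and only hint pages that have aged long
    enough. The helper names and the HINTING_MIN_AGE threshold are made up for
    illustration and are not part of the series; in the real buddy allocator
    page_private() already carries the page order, so this is only a sketch of
    the concept:)

    #include <linux/mm.h>
    #include <linux/jiffies.h>

    /* Illustrative threshold: only hint pages that have sat free this long. */
    #define HINTING_MIN_AGE (5 * HZ)

    static void record_free_time(struct page *page)
    {
            /* Stamp the buddy page with the time it was freed. */
            set_page_private(page, jiffies);
    }

    static bool page_old_enough_to_hint(struct page *page)
    {
            return time_after(jiffies, page_private(page) + HINTING_MIN_AGE);
    }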
    >>>>>>>>> Actually there's one extra thing I think we should do, and that is make
    >>>>>>>>> sure we do not leave less than X% of the free memory at a time.
    >>>>>>>>> This way chances of triggering an OOM are lower.
    >>>>>>>> If nothing else we could probably look at doing a watermark of some
    >>>>>>>> sort so we have to have X amount of memory free but not hinted before
    >>>>>>>> we will start providing the hints. It would just be a matter of
    >>>>>>>> tracking how much memory we have hinted on versus the amount of memory
    >>>>>>>> that has been pulled from that pool.
    >>>>>>> This is to avoid false OOM in the guest?
    >>>>>> Partially, though it would still be possible. Basically it would just
    >>>>>> be a way of determining when we have hinted "enough". Basically it
    >>>>>> doesn't do us much good to be hinting on free memory if the guest is
    >>>>>> already constrained and just going to reallocate the memory shortly
    >>>>>> after we hinted on it. The idea is with a watermark we can avoid
    >>>>>> hinting until we start having pages that are actually going to stay
    >>>>>> free for a while.
    >>>>>>
    >>>>>>>> It is another reason why we
    >>>>>>>> probably want a bit in the buddy pages somewhere to indicate if a page
    >>>>>>>> has been hinted or not as we can then use that to determine if we have
    >>>>>>>> to account for it in the statistics.
    >>>>>>> The one benefit which I can see of having an explicit bit is that it
    >>>>>>> will help us to have a single hook away from the hot path within buddy
    >>>>>>> merging code (just like your arch_merge_page) and still avoid duplicate
    >>>>>>> hints while releasing pages.
    >>>>>>>
    >>>>>>> I still have to check PG_idle and PG_young which you mentioned but I
    >>>>>>> don't think we can reuse any existing bits.
    >>>>>> Those are bits that are already there for 64b. I think those exist in
    >>>>>> the page extension for 32b systems. If I am not mistaken they are only
    >>>>>> used in VMA mapped memory. What I was getting at is that those are the
    >>>>>> bits we could think about reusing.
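
    (A rough sketch of how reusing the idle bit as an "already hinted" marker
    could look, together with a merge-time hook in the spirit of the
    arch_merge_page idea mentioned above. page_is_idle()/set_page_idle()/
    clear_page_idle() do exist in linux/page_idle.h, but whether they can
    safely be repurposed for buddy pages is exactly the open question here;
    the wrapper names are illustrative only:)

    #include <linux/page_idle.h>

    static bool page_already_hinted(struct page *page)
    {
            return page_is_idle(page);
    }

    static void mark_page_hinted(struct page *page)
    {
            set_page_idle(page);
    }

    /* A single hook called from the buddy merge path could then drop the
     * stale mark so the merged, higher-order page gets hinted again. */
    static void hinting_clear_on_merge(struct page *buddy)
    {
            if (page_already_hinted(buddy))
                    clear_page_idle(buddy);
    }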
    >>>>>>
    >>>>>>> If we really want to have something like a watermark, then can't we use
    >>>>>>> zone->free_pages before isolating to see how many free pages are there
    >>>>>>> and put a threshold on it? (__isolate_free_page() does a similar thing
    >>>>>>> but it does that on per request basis).
    >>>>>> Right. That is only part of it though since that tells you how many
    >>>>>> free pages are there. But how many of those free pages are hinted?
    >>>>>> That is the part we would need to track separately and then
    >>>>>> compare to free_pages to determine if we need to start hinting on more
    >>>>>> memory or not.
    >>>>> Only pages which are isolated will be hinted, and once a page is
    >>>>> isolated it will not be counted in the zone free pages.
    >>>>> Feel free to correct me if I am wrong.
    >>>> You are correct up to here. When we isolate the page it isn't counted
    >>>> against the free pages. However after we complete the hint we end up
    >>>> taking it out of isolation and returning it to the "free" state, so it
    >>>> will be counted against the free pages.
    >>>>
    >>>>> If I am understanding it correctly you only want to hint the idle pages,
    >>>>> is that right?
    >>>> Getting back to the ideas from our earlier discussion, we had 3 stages
    >>>> for things. Free but not hinted, isolated due to hinting, and free and
    >>>> hinted. So what we would need to do is identify the size of the first
    >>>> pool that is free and not hinted by knowing the total number of free
    >>>> pages, and then subtract the size of the pages that are hinted and
    >>>> still free.
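
    (A sketch of the bookkeeping described above: only run a hinting pass once
    the "free but not hinted" pool is large enough. zone_page_state(zone,
    NR_FREE_PAGES) and high_wmark_pages() are existing helpers; the hinted
    counter, the function itself, and the choice of watermark are made up for
    this example:)

    #include <linux/mmzone.h>
    #include <linux/vmstat.h>

    static bool hinting_watermark_ok(struct zone *zone, unsigned long hinted)
    {
            unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
            unsigned long free_not_hinted = free > hinted ? free - hinted : 0;

            /* Start (or continue) hinting only while enough not-yet-hinted
             * memory has accumulated, e.g. beyond the zone's high watermark. */
            return free_not_hinted > high_wmark_pages(zone);
    }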
    >>> To summarize, for now, I think it makes sense to stick with the current
    >>> approach as this way we can avoid any locking in the allocation path and
    >>> reduce the number of hypercalls for a bunch of MAX_ORDER - 1 pages.
    >> I'm not sure what you are talking about by "avoid any locking in the
    >> allocation path". Are you talking about the spin on idle bit, if so
    >> then yes.
    > Yeap!
    >> However I have been testing your patches and I was correct
    >> in the assumption that you forgot to handle the zone lock when you
    >> were freeing pages via __free_one_page.
    > Yes, these are the steps other than the comments you provided in the
    > code. (One of them is to fix release_buddy_page())
    >> I just did a quick copy/paste from your
    >> zone lock handling from the guest_free_page_hinting function into the
    >> release_buddy_pages function and then I was able to enable multiple
    >> CPUs without any issues.
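
    (The shape of the fix described above, not the actual diff: take the zone
    lock around the return of each isolated page to the buddy list, mirroring
    what the isolation side in guest_free_page_hinting already does.
    __free_one_page() is the existing buddy-freeing primitive; the wrapper and
    the way order/migratetype reach it are simplified for illustration:)

    #include <linux/mm.h>
    #include <linux/spinlock.h>

    static void release_one_isolated_page(struct page *page, unsigned int order,
                                          int migratetype)
    {
            struct zone *zone = page_zone(page);
            unsigned long flags;

            /* Same zone->lock discipline as the isolation side. */
            spin_lock_irqsave(&zone->lock, flags);
            __free_one_page(page, page_to_pfn(page), zone, order, migratetype);
            spin_unlock_irqrestore(&zone->lock, flags);
    }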
    >>
    >>> For the next step, other than addressing the comments received on the
    >>> code and what I mentioned in the cover email, I would like to do the
    >>> following:
    >>> 1. Explore the watermark idea suggested by Alex and bring down memhog
    >>> execution time if possible.
    >> So there are a few things that are hurting us on the memhog test:
    >> 1. The current QEMU patch is only madvising 4K pages at a time, which
    >> disables THP and hurts the test.
    > Makes sense, thanks for pointing this out.
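
    (QEMU-side sketch of the first point: discard the whole hinted range with
    a single madvise() call instead of looping over 4 KiB pages, which avoids
    splitting the backing THP the way the per-4K calls do. The function name
    and lack of error handling are illustrative, not the actual QEMU patch:)

    #include <sys/mman.h>
    #include <stddef.h>

    static int discard_hinted_range(void *hva, size_t len)
    {
            /* One call covering the whole MAX_ORDER - 1 sized chunk. */
            return madvise(hva, len, MADV_DONTNEED);
    }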
    >>
    >> 2. The fact that we madvise the pages away makes it so that we have to
    >> fault the page back in in order to use it for the memhog test. In
    >> order to avoid that penalty we may want to see if we can introduce
    >> some sort of "timeout" on the pages so that we are only hinting away
    >> old pages that have not been used for some period of time.
    >
    > Possibly using MADV_FREE should also help with this; I will try it as
    > well.
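
    (A sketch of the MADV_FREE variant mentioned above: the host may reclaim
    the range lazily under memory pressure, and if the guest touches the pages
    again before that happens they do not have to be faulted back in, which is
    the penalty that hurts the memhog numbers with MADV_DONTNEED. The wrapper
    name is illustrative:)

    #include <sys/mman.h>
    #include <stddef.h>

    static int lazy_discard_hinted_range(void *hva, size_t len)
    {
            /* Pages stay mapped and are reclaimed lazily; touching them
             * before reclaim avoids the refault cost of MADV_DONTNEED. */
            return madvise(hva, len, MADV_FREE);
    }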

    I was asking myself some time ago how MADV_FREE will be handled in the
    case of THP. Please let me know your findings :)

    --

    Thanks,

    David / dhildenb
