    Subject: RE: Frontswap [PATCH 0/4] (was Transcendent Memory): overview
    > It's bad, but it's better than ooming.
    >
    > The same thing happens with vcpus: you run 10 guests on one core, if
    > they all wake up, your cpu is suddenly 10x slower and has 30000x
    > interrupt latency (30ms vs 1us, assuming 3ms timeslices). Your disks
    > become slower as well.
    >
    > It's worse with memory, so you try to swap as a last resort. However,
    > swap is still faster than a crashed guest.
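
    To put concrete numbers on the arithmetic you quoted (a rough sketch
    in Python; the 3ms timeslice and 1us bare-metal latency are the
    figures assumed above, not measurements):

        # Rough sketch of the worst-case latency arithmetic quoted above.
        # Assumes strict round-robin scheduling with fixed timeslices;
        # real schedulers are more complicated.
        guests = 10
        timeslice_ms = 3.0
        bare_metal_latency_us = 1.0

        # Worst case: an interrupt arrives just as its vcpu loses the cpu,
        # so it waits roughly one full rotation of timeslices.
        worst_case_ms = guests * timeslice_ms
        slowdown = worst_case_ms * 1000.0 / bare_metal_latency_us
        print("%.0fms worst case, ~%dx bare metal" % (worst_case_ms, slowdown))
        # -> 30ms worst case, ~30000x bare metal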

    Your analogy only holds when the host administrator is either
    extremely greedy or stupid. My analogy only requires some
    statistical bad luck: Multiple guests with peaks and valleys
    of memory requirements happen to have their peaks align.
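
    To illustrate the kind of bad luck I mean, here is a quick
    back-of-the-envelope simulation (Python, with entirely made-up
    numbers for the per-guest working sets and peak probability):

        # Back-of-the-envelope simulation (made-up numbers) of how often
        # independently-peaking guests overflow an overcommitted host.
        import random

        GUESTS = 10
        BASELINE_MB = 512       # typical per-guest working set (made up)
        PEAK_MB = 2048          # working set during a peak (made up)
        PEAK_PROB = 0.05        # chance a guest is peaking at any instant
        HOST_RAM_MB = 8192      # overcommitted: 10 peaks would need 20480

        trials, overflows = 100000, 0
        for _ in range(trials):
            demand = sum(PEAK_MB if random.random() < PEAK_PROB
                         else BASELINE_MB for _ in range(GUESTS))
            if demand > HOST_RAM_MB:
                overflows += 1
        print("peaks aligned badly in %.2f%% of samples"
              % (100.0 * overflows / trials))

    Even with each guest peaking only 5% of the time, roughly 1% of
    samples land three or more peaks together, which is enough to blow
    past the host's RAM, and that is exactly when host swapping kicks in.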

    > > Third, host swapping makes live migration much more difficult.
    > > Either the host swap disk must be accessible to all machines
    > > or data sitting on a local disk MUST be migrated along with
    > > RAM (which is not impossible but complicates live migration
    > > substantially).
    >
    > kvm does live migration with swapping, and has no special code to
    > integrate them.
    > :
    > Don't know about vmware, but kvm supports page sharing, swapping, and
    > live migration simultaneously.

    Hmmm... I'll bet I can break it pretty easily. I think the
    case you raised that you thought would cause host OOM'ing
    will cause kvm live migration to fail.

    Or maybe not... when a guest is in the middle of a live migration,
    I believe (in Xen) the entire guest memory allocation (possibly
    excluding ballooned-out pages) must briefly be in RAM simultaneously
    on BOTH the source and target machines. That is, live migration is
    not "pipelined". Is this also true of KVM? If so, your
    statement above is just waiting for a corner case to break it.
    And if not, I expect you've got fault tolerance issues.
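
    For reference, my mental model of pre-copy live migration is roughly
    the sketch below (a simplification in Python, not actual Xen or KVM
    code; the page count and dirty rate are made up):

        # Simplified sketch of pre-copy live migration (not Xen/KVM code).
        import random

        GUEST_PAGES = 1024       # made-up guest size, in pages
        DIRTY_THRESHOLD = 16     # stop iterating when this few remain

        source = {p: "data-%d" % p for p in range(GUEST_PAGES)}  # guest RAM
        target = dict.fromkeys(range(GUEST_PAGES))  # full alloc, up front

        dirty = set(source)                  # round 1: every page "dirty"
        while len(dirty) > DIRTY_THRESHOLD:  # iterative pre-copy rounds
            for page in dirty:
                target[page] = source[page]
            # the guest keeps running, so it re-dirties pages each round
            dirty = {p for p in source if random.random() < 0.01}
        # stop-and-copy: pause the guest briefly, send final dirty pages
        for page in dirty:
            target[page] = source[page]
        # only here may the source free its copy; until now BOTH machines
        # held (essentially) the entire guest allocation simultaneously
        source.clear()

    The window between the target's up-front allocation and the final
    stop-and-copy is where the "whole guest resident on both machines"
    requirement comes from.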

    > > If you talk to VMware customers (especially web-hosting services)
    > > that have attempted to use overcommit technologies that require
    > > host-swapping, you will find that they quickly become allergic
    > > to memory overcommit and turn it off. The end users (users of
    > > the VMs that inexplicably grind to a halt) complain loudly.
    > > As a result, RAM has become a bottleneck in many many systems,
    > > which ultimately reduces the utility of servers and the value
    > > of virtualization.
    >
    > Choosing the correct overcommit ratio is certainly not an easy task.
    > However, just hoping that memory will be available when you need it is
    > not a good solution.

    Choosing the _optimal_ overcommit ratio is impossible without
    prescient knowledge of the workload in each guest. Hoping memory
    will be available is certainly not a good solution, but if memory
    is not available, guest swapping is much better than host swapping.
    And making RAM usage as dynamic as possible and live migration
    as easy as possible are keys to maximizing the benefits (and
    limiting the problems) of virtualization.


