Date: Thu, 8 Mar 2012 10:35:45 +0900
From: Takuya Yoshikawa <>
Subject: Re: [PATCH 3/4 changelog-v2] KVM: Switch to srcu-less get_dirty_log()
Marcelo Tosatti <mtosatti@redhat.com> wrote:
> What is worrying are large memory cases: think of the 50GB slot case.
> 100ms hold time is pretty bad (and reacquiring the lock is relatively
> simple).
OK, I basically agree.
But let me explain one thing before deciding what I should do next.
With my method, even when we use a 50GB slot, the hold time will be under 10ms -- not 100ms -- if the amount of memory actually updated since the last call is 1GB (256K dirty pages).
> > 8747274.0 10973.3 33.3 -31% -3% 256K

Note that this unit test was done on my tiny Core i3 32-bit host. On servers that can hold more than 50GB of memory, this will be much faster: my live migration tests on a Xeon actually saw much better numbers. The unit test has been tuned for the worst case.
I admit that if the dirty memory size is more than 10GB, we may see the 100ms-plus hold time you are worrying about, since the hold time grows roughly in proportion to the number of dirty pages.
For that case I was proposing to introduce a new GET_DIRTY_LOG API that can limit the number of dirty pages for which we get the log -- but this is a long-term goal.
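Just to make the idea concrete, here is a rough sketch of what such a restricted interface might look like. All of the struct and field names below are hypothetical illustrations, not something that has actually been posted or agreed on:

/*
 * Hypothetical sketch only -- not an existing or proposed KVM ABI.
 * The idea: userspace caps how many dirty pages one call may report,
 * so the kernel can bound its mmu_lock hold time per invocation and
 * leave the remaining pages dirty for a later call.
 */
struct kvm_dirty_log_limited {
	__u32 slot;		/* in:  memory slot id */
	__u32 max_dirty_pages;	/* in:  cap on pages reported this call */
	__u64 dirty_bitmap;	/* in:  userspace bitmap to fill */
	__u32 num_dirty_pages;	/* out: pages actually reported */
	__u32 more;		/* out: non-zero if dirty pages remain */
};

Userspace would then loop over the ioctl until "more" is zero, so each call only pays for a bounded amount of write protection under mmu_lock.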
So, I am OK with trying to introduce cond_resched_lock_cb() as you suggested. But, as I explained above, my current implementation does not introduce any real regression with respect to mmu_lock hold time:
Before, we could see 10ms hold times in real workloads:

> funcgraph_entry: ! 9783.060 us | kvm_mmu_slot_remove_write_access();
I have never seen millisecond-order hold times with my method in the same workloads.
So, it would be helpful if you could apply the patch series so that I can work on top of it: although I cannot use servers with 100GB of memory right now, migrating a guest with 16GB of memory or so may be possible later; I need to reserve servers for that.
I also want to know about the plan for decoupling the TLB flush from mmu_lock. If that is possible, we will not need to introduce cond_resched_lock_cb() in sched.h at all.
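For reference, here is a minimal sketch of what I imagine cond_resched_lock_cb() could look like, assuming it mirrors the existing cond_resched_lock() but runs a callback (for example a remote TLB flush) just before dropping the lock. This is only an illustration, not the actual proposal:

/*
 * Rough sketch only: a variant of cond_resched_lock() that calls back
 * into the caller (e.g. to flush remote TLBs) just before the lock is
 * dropped, then reschedules and takes the lock again.  It would live
 * alongside cond_resched_lock() in include/linux/sched.h.
 */
static inline int cond_resched_lock_cb(spinlock_t *lock,
				       void (*cb)(void *data), void *data)
{
	if (spin_needbreak(lock) || need_resched()) {
		if (cb)
			cb(data);	/* make state consistent before dropping the lock */
		spin_unlock(lock);
		cond_resched();
		spin_lock(lock);
		return 1;
	}
	return 0;
}

With something like that, the write-protection loop could call it every N pages and keep each mmu_lock critical section short. But if the TLB flush can be decoupled from mmu_lock, plain cond_resched_lock() would already be enough.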
Thanks,
	Takuya