Subject: Re: [Xen-devel] Re: [PATCH 1/5] xen: events: use irq_alloc_desc(_at) instead of open-coding an IRQ allocator.
On Tue, 26 Oct 2010, Ian Campbell wrote:
> On Mon, 2010-10-25 at 19:02 +0100, Ian Campbell wrote:
> >
> >
> > > What do you see when you pass in a PCI device and say give the guest
> > > 32 CPUs??
> >
> > I can try tomorrow and see; based on what you say above, without
> > implementing what I described I suspect the answer will be "carnage".
>
> Actually, it looks like multi-vcpu is broken; I only see 1 regardless of
> how many I configured. It's not clear if this is breakage in Linus'
> tree, something I pulled in from one of Jeremy's, yours or Stefano's
> trees, or some local PEBCAK. I'll investigate...

I found the bug; it was introduced by:

"xen: use vcpu_ops to setup cpu masks"

I have added the fix at the end of my branch, and I am also appending it
here.

---


xen: initialize cpu masks for pv guests in xen_smp_init

PV guests don't have ACPI and need the cpu masks to be set correctly
as early as possible, so we call xen_fill_possible_map from
xen_smp_init.
The initial domain, on the other hand, does support ACPI, so in that
case we skip xen_fill_possible_map and rely on ACPI to enumerate the
cpus. However, Xen might limit the number of cpus usable by the
domain, so we filter those masks during smp initialization using the
VCPUOP_is_up hypercall.
It is important that this filtering is done before
xen_setup_vcpu_info_placement.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
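
For reference, a minimal sketch (not part of the patch below) of how the
VCPUOP_is_up result is interpreted when sizing the cpu masks; the helper
name xen_vcpu_available is hypothetical, the patch itself open-codes the
same check in xen_fill_possible_map and xen_filter_cpu_maps:

/*
 * Illustrative sketch only, not from the patch: probe whether a vcpu
 * exists for this domain. As the patch below relies on, VCPUOP_is_up
 * returns >= 0 when Xen knows about the vcpu (whether it is currently
 * up or down), and a negative errno when the vcpu is not available to
 * the domain, e.g. because Xen limited the number of vcpus.
 */
#include <linux/init.h>
#include <linux/types.h>

#include <asm/xen/hypercall.h>
#include <xen/interface/vcpu.h>

static bool __init xen_vcpu_available(unsigned int cpu)
{
	return HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL) >= 0;
}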

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 1386767..834dfeb 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -28,6 +28,7 @@
 #include <asm/xen/interface.h>
 #include <asm/xen/hypercall.h>
 
+#include <xen/xen.h>
 #include <xen/page.h>
 #include <xen/events.h>

@@ -156,6 +157,25 @@ static void __init xen_fill_possible_map(void)
 {
 	int i, rc;
 
+	if (xen_initial_domain())
+		return;
+
+	for (i = 0; i < nr_cpu_ids; i++) {
+		rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
+		if (rc >= 0) {
+			num_processors++;
+			set_cpu_possible(i, true);
+		}
+	}
+}
+
+static void __init xen_filter_cpu_maps(void)
+{
+	int i, rc;
+
+	if (!xen_initial_domain())
+		return;
+
 	num_processors = 0;
 	disabled_cpus = 0;
 	for (i = 0; i < nr_cpu_ids; i++) {
@@ -179,6 +199,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
 	   old memory can be recycled */
 	make_lowmem_page_readwrite(xen_initial_gdt);
 
+	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 }

@@ -195,8 +216,6 @@ static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
 	if (xen_smp_intr_init(0))
 		BUG();
 
-	xen_fill_possible_map();
-
 	if (!alloc_cpumask_var(&xen_cpu_initialized_map, GFP_KERNEL))
 		panic("could not allocate xen_cpu_initialized_map\n");

@@ -487,5 +506,6 @@ static const struct smp_ops xen_smp_ops __initdata = {
 void __init xen_smp_init(void)
 {
 	smp_ops = xen_smp_ops;
+	xen_fill_possible_map();
 	xen_init_spinlocks();
 }
