Mailing List Archive

Re: HYBRID: PV in HVM container
On Thu, 2011-07-28 at 12:34 +0100, Stefano Stabellini wrote:
> On Thu, 28 Jul 2011, Mukesh Rathor wrote:
> > Hi folks,
> >
> > Well, I did some benchmarking and found interesting results. Following
> > runs are on a westmere with 2 sockets and 10GB RAM. Xen was booted
> > with maxcpus=2 and entire RAM. All guests were started with 1vcpu and 2GB
> > RAM. dom0 started with 1 vcpu and 704MB. Baremetal was booted with 2GB
> > and 1 cpu. HVM guest has EPT enabled. HT is on.
> >
> > So, unless the NUMA'ness interfered with results (using some memory on
> > remote socket), it appears HVM does very well. To the point that it
> > seems a hybrid is not going to be worth it. I am currently running
> > tests on a single socket system just to be sure.
> >
>
> The high level benchmarks I run to compare PV and PV on HVM guests show
> a very similar scenario.
>
> It is still worth having HYBRID guests (running with EPT?) in order to
> support dom0 in an HVM container one day not too far from now.

I think it is also worth bearing in mind that once we have basic support
for HYBRID we can begin looking at and measuring which hardware features
offer advantages to PV guests, and enhancing the PV interfaces for use by
HYBRID guests, etc. (i.e. make things truly hybrid PV+Hardware and not
just contained PV).

Also there are arguments to be made for HYBRID over PVHVM in terms of
ease of manageability (i.e. a lot of folks like the dom0-supplied kernel
idiom which PV enables) and in terms of avoiding the need for a
virtualised BIOS and emulated boot paths. HYBRID can potentially give
the best of both worlds in the trade-off between standard PV and
HVM/PVHVM, while also not needing a QEMU process for each guest (which
helps scalability and so on). I think HYBRID is worthwhile even if it
is basically on par with PVHVM for some workloads.

Ian.


Re: HYBRID: PV in HVM container
On 08/09/2011 01:54 AM, Ian Campbell wrote:
> Also there are arguments to be made for HYBRID over PVHVM in terms of
> ease of manageability (i.e. a lot of folks like the dom0-supplied kernel
> idiom which PV enables) and in terms of avoiding the need for a
> virtualised BIOS and emulated boot paths. HYBRID can potentially give
> the best of both worlds in the trade-off between standard PV and
> HVM/PVHVM, while also not needing a QEMU process for each guest (which
> helps scalability and so on). I think HYBRID is worthwhile even if it
> is basically on par with PVHVM for some workloads.

And it's amazing how much stuff goes away when you can set CONFIG_PCI=n...

J

Re: HYBRID: PV in HVM container
Alright, got hybrid with EPT numbers in now from my prototype; it needs
some perf work.

Attaching the diffs from my prototype. Linux: 2.6.39. Xen 4.0.2.


Processor, Processes - times in microseconds - smaller is better
------------------------------------------------------------------------------
Host OS Mhz null null open slct sig sig fork exec sh
call I/O stat clos TCP inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
PV Linux 2.6.39f 2639 0.65 0.88 2.14 4.59 3.77 0.79 3.62 535. 1294 3308
Hybrid Linux 2.6.39f 2639 0.13 0.21 0.89 1.96 3.08 0.24 1.10 529. 1294 3246
HVM Linux 2.6.39f 2639 0.12 0.21 0.64 1.76 3.04 0.24 3.37 113. 354. 1324
Baremetal Linux 2.6.39+ 2649 0.13 0.23 0.74 1.93 3.46 0.28 1.58 127. 386. 1434
HYB-EPT Linux 2.6.39f 2639 0.13 0.21 0.68 1.95 3.04 0.25 3.09 145. 452. 1542


Basic integer operations - times in nanoseconds - smaller is better
-------------------------------------------------------------------
Host OS intgr intgr intgr intgr intgr
bit add mul div mod
--------- ------------- ------ ------ ------ ------ ------
PV Linux 2.6.39f 0.3800 0.0100 0.1700 9.1000 9.0400
Hybrid Linux 2.6.39f 0.3800 0.0100 0.1700 9.1100 9.0300
HVM Linux 2.6.39f 0.3800 0.0100 0.1700 9.1100 9.0600
Baremetal Linux 2.6.39+ 0.3800 0.0100 0.1700 9.0600 8.9800
HYB-EPT Linux 2.6.39f 0.3800 0.0100 0.1700 9.1200 9.0500


Basic float operations - times in nanoseconds - smaller is better
-----------------------------------------------------------------
Host OS float float float float
add mul div bogo
--------- ------------- ------ ------ ------ ------
PV Linux 2.6.39f 1.1300 1.5200 5.6200 5.2900
Hybrid Linux 2.6.39f 1.1300 1.5200 5.6300 5.2900
HVM Linux 2.6.39f 1.1400 1.5200 5.6300 5.3000
Baremetal Linux 2.6.39+ 1.1300 1.5100 5.6000 5.2700
HYB-EPT Linux 2.6.39f 1.1400 1.5200 5.6300 5.3000


Basic double operations - times in nanoseconds - smaller is better
------------------------------------------------------------------
Host OS double double double double
add mul div bogo
--------- ------------- ------ ------ ------ ------
PV Linux 2.6.39f 1.1300 1.9000 8.6400 8.3200
Hybrid Linux 2.6.39f 1.1400 1.9000 8.6600 8.3200
HVM Linux 2.6.39f 1.1400 1.9000 8.6600 8.3300
Baremetal Linux 2.6.39+ 1.1300 1.8900 8.6100 8.2800
HYB-EPT Linux 2.6.39f 1.1400 1.9000 8.6600 8.3300


Context switching - times in microseconds - smaller is better
-------------------------------------------------------------------------
Host OS 2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw
--------- ------------- ------ ------ ------ ------ ------ ------- -------
PV Linux 2.6.39f 5.2800 5.7600 6.3600 6.3200 7.3600 6.69000 7.46000
Hybrid Linux 2.6.39f 4.9200 4.9300 5.2200 5.7600 6.9600 6.12000 7.31000
HVM Linux 2.6.39f 1.3100 1.2200 1.6200 1.9200 3.2600 2.23000 3.48000
Baremetal Linux 2.6.39+ 1.5500 1.4100 2.0600 2.2500 3.3900 2.44000 3.38000
HYB-EPT Linux 2.6.39f 3.2000 3.6100 4.1700 4.3600 6.1200 4.81000 6.20000


*Local* Communication latencies in microseconds - smaller is better
---------------------------------------------------------------------
Host OS 2p/0K Pipe AF UDP RPC/ TCP RPC/ TCP
ctxsw UNIX UDP TCP conn
--------- ------------- ----- ----- ---- ----- ----- ----- ----- ----
PV Linux 2.6.39f 5.280 16.6 21.3 25.9 33.7 34.7 41.8 87.
Hybrid Linux 2.6.39f 4.920 11.2 14.4 19.6 26.1 27.5 32.9 71.
HVM Linux 2.6.39f 1.310 4.416 6.15 9.386 14.8 15.8 20.1 45.
Baremetal Linux 2.6.39+ 1.550 4.625 7.34 14.3 19.8 21.4 26.4 66.
HYB-EPT Linux 2.6.39f 3.200 8.669 15.3 17.5 23.5 25.1 30.4 66.


File & VM system latencies in microseconds - smaller is better
-------------------------------------------------------------------------------
Host OS 0K File 10K File Mmap Prot Page 100fd
Create Delete Create Delete Latency Fault Fault selct
--------- ------------- ------ ------ ------ ------ ------- ----- ------- -----
PV Linux 2.6.39f 24.0K 0.746 3.55870 2.184
Hybrid Linux 2.6.39f 24.6K 0.238 4.00100 1.480
HVM Linux 2.6.39f 4716.0 0.202 0.96600 1.468
Baremetal Linux 2.6.39+ 6898.0 0.325 0.93610 1.620
HYB-EPT Linux 2.6.39f 5321.0 0.347 1.19510 1.480


*Local* Communication bandwidths in MB/s - bigger is better
-----------------------------------------------------------------------------
Host OS Pipe AF TCP File Mmap Bcopy Bcopy Mem Mem
UNIX reread reread (libc) (hand) read write
--------- ------------- ---- ---- ---- ------ ------ ------ ------ ---- -----
PV Linux 2.6.39f 1661 2081 1041 3293.3 5528.3 3106.6 2800.0 4472 5633.
Hybrid Linux 2.6.39f 1974 2450 1183 3481.5 5529.6 3114.9 2786.6 4470 5672.
HVM Linux 2.6.39f 3232 2929 1622 3541.3 5527.5 3077.1 2765.6 4453 5634.
Baremetal Linux 2.6.39+ 3320 2800 1666 3523.6 5578.9 3147.0 2841.6 4541 5752.
HYB-EPT Linux 2.6.39f 2104 2480 1231 3451.5 5503.4 3067.7 2751.0 4438 5636.


Memory latencies in nanoseconds - smaller is better
(WARNING - may not be correct, check graphs)
------------------------------------------------------------------------------
Host OS Mhz L1 $ L2 $ Main mem Rand mem Guesses
--------- ------------- --- ---- ---- -------- -------- -------
PV Linux 2.6.39f 2639 1.5160 5.9170 29.7 97.5
Hybrid Linux 2.6.39f 2639 1.5170 7.5000 29.7 97.4
HVM Linux 2.6.39f 2639 1.5190 4.0210 29.8 105.4
Baremetal Linux 2.6.39+ 2649 1.5090 3.8370 29.2 78.0
HYB-EPT Linux 2.6.39f 2639 1.5180 4.0060 29.9 109.9


thanks,
Mukesh


On Wed, 27 Jul 2011 18:58:28 -0700
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> Hi folks,
>
> Well, I did some benchmarking and found interesting results. Following
> runs are on a westmere with 2 sockets and 10GB RAM. Xen was booted
> with maxcpus=2 and entire RAM. All guests were started with 1vcpu and
> 2GB RAM. dom0 started with 1 vcpu and 704MB. Baremetal was booted
> with 2GB and 1 cpu. HVM guest has EPT enabled. HT is on.
>
> So, unless the NUMA'ness interfered with results (using some memory
> on remote socket), it appears HVM does very well. To the point that it
> seems a hybrid is not going to be worth it. I am currently running
> tests on a single socket system just to be sure.
>
> I am attaching my diff's in case any one wants to see what I did. I
> used xen 4.0.2 and linux 2.6.39.
>
> thanks,
> Mukesh
>
> L M B E N C H 3 . 0 S U M M A R Y
>
> Processor, Processes - times in microseconds - smaller is better
> ------------------------------------------------------------------------------
Re: HYBRID: PV in HVM container
On Thu, 17 Nov 2011, Mukesh Rathor wrote:
> Alright, got hybrid with EPT numbers in now from my prototype; it needs
> some perf work.

Is HVM a PV on HVM guest or a pure HVM guest (no CONFIG_XEN)?


> Processor, Processes - times in microseconds - smaller is better
> ------------------------------------------------------------------------------
> Host OS Mhz null null open slct sig sig fork exec sh
> call I/O stat clos TCP inst hndl proc proc proc
> --------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
> PV Linux 2.6.39f 2639 0.65 0.88 2.14 4.59 3.77 0.79 3.62 535. 1294 3308
> Hybrid Linux 2.6.39f 2639 0.13 0.21 0.89 1.96 3.08 0.24 1.10 529. 1294 3246
> HVM Linux 2.6.39f 2639 0.12 0.21 0.64 1.76 3.04 0.24 3.37 113. 354. 1324
> Baremetal Linux 2.6.39+ 2649 0.13 0.23 0.74 1.93 3.46 0.28 1.58 127. 386. 1434
> HYB-EPT Linux 2.6.39f 2639 0.13 0.21 0.68 1.95 3.04 0.25 3.09 145. 452. 1542

good, hybrid == HVM in this test
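
The null call column is essentially raw kernel entry/exit cost, which is
where classic 64-bit PV pays for routing system calls through the
hypervisor and where hybrid gets back to HVM/baremetal territory. For a
quick stand-alone sanity check of just that number, here is a rough sketch
in the spirit of lmbench's null-call test; it is illustrative throwaway
code, not part of lmbench or of the posted patches:

/* nullcall.c: ballpark the "null call" column with a bare syscall loop.
 * Build: gcc -O2 -o nullcall nullcall.c
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/time.h>
#include <unistd.h>

#define ITERS 1000000

static double now_us(void)
{
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
        long i;
        double start = now_us();

        /* getppid is not cached by glibc, so every iteration really
         * enters the kernel; call it via syscall() to be explicit. */
        for (i = 0; i < ITERS; i++)
                syscall(SYS_getppid);

        printf("~%.3f us per null syscall\n",
               (now_us() - start) / ITERS);
        return 0;
}

The loop and timing overhead are included, so treat the result as a
ballpark figure rather than a direct comparison with the table.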

[...]


> Context switching - times in microseconds - smaller is better
> -------------------------------------------------------------------------
> Host OS 2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
> ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw
> --------- ------------- ------ ------ ------ ------ ------ ------- -------
> PV Linux 2.6.39f 5.2800 5.7600 6.3600 6.3200 7.3600 6.69000 7.46000
> Hybrid Linux 2.6.39f 4.9200 4.9300 5.2200 5.7600 6.9600 6.12000 7.31000
> HVM Linux 2.6.39f 1.3100 1.2200 1.6200 1.9200 3.2600 2.23000 3.48000
> Baremetal Linux 2.6.39+ 1.5500 1.4100 2.0600 2.2500 3.3900 2.44000 3.38000
> HYB-EPT Linux 2.6.39f 3.2000 3.6100 4.1700 4.3600 6.1200 4.81000 6.20000

How is it possible that the HYB-EPT numbers here are so much worse than
HVM? Shouldn't they be the same as in the other tests?
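
The context-switch numbers are where the variants diverge most, so this is
probably the most useful microbenchmark to re-run in isolation while
chasing the HYB-EPT gap. A minimal pipe ping-pong in the spirit of
lmbench's lat_ctx (2p/0K case only) is sketched below; it is a simplified
illustration rather than the real benchmark, which also subtracts the pipe
overhead and drags a working set of the requested size through the cache
between switches:

/* ctxsw.c: rough stand-in for lmbench's lat_ctx, 2p/0K case only.
 * Two processes bounce a one-byte token over a pair of pipes; each
 * round trip costs two context switches plus the pipe overhead.
 * On a multi-CPU host run it under "taskset -c 0" so the two
 * processes actually have to switch instead of running in parallel.
 * Build: gcc -O2 -o ctxsw ctxsw.c
 */
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define ROUNDS 100000

static double now_us(void)
{
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
        int p2c[2], c2p[2];     /* parent->child and child->parent pipes */
        char tok = 'x';
        double start, elapsed;
        int i;

        if (pipe(p2c) < 0 || pipe(c2p) < 0) {
                perror("pipe");
                return 1;
        }

        if (fork() == 0) {              /* child: echo the token back */
                for (i = 0; i < ROUNDS; i++) {
                        if (read(p2c[0], &tok, 1) != 1 ||
                            write(c2p[1], &tok, 1) != 1)
                                _exit(1);
                }
                _exit(0);
        }

        start = now_us();
        for (i = 0; i < ROUNDS; i++) {
                if (write(p2c[1], &tok, 1) != 1 ||
                    read(c2p[0], &tok, 1) != 1) {
                        perror("ping-pong");
                        return 1;
                }
        }
        elapsed = now_us() - start;
        wait(NULL);

        printf("%d round trips, ~%.2f us per switch (incl. pipe cost)\n",
               ROUNDS, elapsed / ROUNDS / 2);
        return 0;
}

For classic PV the extra cost per switch is usually explained by the
hypercalls involved (CR3 switch and friends); why HYB-EPT recovers only
part of the way back to HVM is exactly the open question above.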


> *Local* Communication latencies in microseconds - smaller is better
> ---------------------------------------------------------------------
> Host OS 2p/0K Pipe AF UDP RPC/ TCP RPC/ TCP
> ctxsw UNIX UDP TCP conn
> --------- ------------- ----- ----- ---- ----- ----- ----- ----- ----
> PV Linux 2.6.39f 5.280 16.6 21.3 25.9 33.7 34.7 41.8 87.
> Hybrid Linux 2.6.39f 4.920 11.2 14.4 19.6 26.1 27.5 32.9 71.
> HVM Linux 2.6.39f 1.310 4.416 6.15 9.386 14.8 15.8 20.1 45.
> Baremetal Linux 2.6.39+ 1.550 4.625 7.34 14.3 19.8 21.4 26.4 66.
> HYB-EPT Linux 2.6.39f 3.200 8.669 15.3 17.5 23.5 25.1 30.4 66.
>
> *Local* Communication bandwidths in MB/s - bigger is better
> -----------------------------------------------------------------------------
> Host OS Pipe AF TCP File Mmap Bcopy Bcopy Mem Mem
> UNIX reread reread (libc) (hand) read write
> --------- ------------- ---- ---- ---- ------ ------ ------ ------ ---- -----
> PV Linux 2.6.39f 1661 2081 1041 3293.3 5528.3 3106.6 2800.0 4472 5633.
> Hybrid Linux 2.6.39f 1974 2450 1183 3481.5 5529.6 3114.9 2786.6 4470 5672.
> HVM Linux 2.6.39f 3232 2929 1622 3541.3 5527.5 3077.1 2765.6 4453 5634.
> Baremetal Linux 2.6.39+ 3320 2800 1666 3523.6 5578.9 3147.0 2841.6 4541 5752.
> HYB-EPT Linux 2.6.39f 2104 2480 1231 3451.5 5503.4 3067.7 2751.0 4438 5636.

Same question for these two tests.




> Attaching the diffs from my prototype. Linux: 2.6.39. Xen 4.0.2.

lin.diff:


> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index e3c6a06..53ceae0 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -110,7 +110,7 @@ struct shared_info *HYPERVISOR_shared_info = (void *)&xen_dummy_shared_info;
> *
> * 0: not available, 1: available
> */
> -static int have_vcpu_info_placement = 1;
> +static int have_vcpu_info_placement = 0;
>
> static void clamp_max_cpus(void)
> {
> @@ -195,6 +195,13 @@ static void __init xen_banner(void)
> printk(KERN_INFO "Xen version: %d.%d%s%s\n",
> version >> 16, version & 0xffff, extra.extraversion,
> xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
> +
> + if (xen_hybrid_domain()) {
> + printk(KERN_INFO "MUK: is MUK HYBRID domain....");
> + if (xen_feature(XENFEAT_auto_translated_physmap))
> + printk(KERN_INFO "with EPT...");
> + printk(KERN_INFO "\n");
> + }
> }
>
> static __read_mostly unsigned int cpuid_leaf1_edx_mask = ~0;
> @@ -222,8 +229,10 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
> maskebx = 0;
> break;
> }
> -
> - asm(XEN_EMULATE_PREFIX "cpuid"
> + if (xen_hybrid_domain()) {
> + native_cpuid(ax, bx, cx, dx);
> + } else
> + asm(XEN_EMULATE_PREFIX "cpuid"
> : "=a" (*ax),
> "=b" (*bx),
> "=c" (*cx),
> @@ -244,6 +253,7 @@ static __init void xen_init_cpuid_mask(void)
> ~((1 << X86_FEATURE_MCE) | /* disable MCE */
> (1 << X86_FEATURE_MCA) | /* disable MCA */
> (1 << X86_FEATURE_MTRR) | /* disable MTRR */
> + (1 << X86_FEATURE_PSE) | /* disable 2M pages */
> (1 << X86_FEATURE_ACC)); /* thermal monitoring */
>
> if (!xen_initial_domain())
> @@ -393,6 +403,10 @@ static void xen_load_gdt(const struct desc_ptr *dtr)
> make_lowmem_page_readonly(virt);
> }
>
> + if (xen_hybrid_domain()) {
> + native_load_gdt(dtr);
> + return;
> + }
> if (HYPERVISOR_set_gdt(frames, size / sizeof(struct desc_struct)))
> BUG();
> }
> @@ -431,6 +445,10 @@ static __init void xen_load_gdt_boot(const struct desc_ptr *dtr)
> frames[f] = mfn;
> }
>
> + if (xen_hybrid_domain()) {
> + native_load_gdt(dtr);
> + return;
> + }
> if (HYPERVISOR_set_gdt(frames, size / sizeof(struct desc_struct)))
> BUG();
> }
> @@ -849,9 +867,11 @@ void xen_setup_shared_info(void)
>
> HYPERVISOR_shared_info =
> (struct shared_info *)fix_to_virt(FIX_PARAVIRT_BOOTMAP);
> - } else
> + } else {
> HYPERVISOR_shared_info =
> (struct shared_info *)__va(xen_start_info->shared_info);
> + return;
> + }
>
> #ifndef CONFIG_SMP
> /* In UP this is as good a place as any to set up shared info */
> @@ -944,6 +964,71 @@ static const struct pv_init_ops xen_init_ops __initdata = {
> .patch = xen_patch,
> };
>
> +extern void native_iret(void);
> +extern void native_irq_enable_sysexit(void);
> +extern void native_usergs_sysret32(void);
> +extern void native_usergs_sysret64(void);
> +
> +static const struct pv_cpu_ops xen_hybrid_cpu_ops __initdata = {
> + .cpuid = xen_cpuid,
> + .set_debugreg = xen_set_debugreg,
> + .get_debugreg = xen_get_debugreg,
> +
> + .clts = xen_clts,
> +
> + .read_cr0 = xen_read_cr0,
> + .write_cr0 = xen_write_cr0,
> +
> + .read_cr4 = native_read_cr4,
> + .read_cr4_safe = native_read_cr4_safe,
> + .write_cr4 = native_write_cr4,
> +
> + .wbinvd = native_wbinvd,
> +
> + .read_msr = native_read_msr_safe,
> + .write_msr = native_write_msr_safe,
> + .read_tsc = native_read_tsc,
> + .read_pmc = native_read_pmc,
> +
> + .iret = native_iret,
> + .irq_enable_sysexit = native_irq_enable_sysexit,
> +#ifdef CONFIG_X86_64
> + .usergs_sysret32 = native_usergs_sysret32,
> + .usergs_sysret64 = native_usergs_sysret64,
> +#endif
> +
> + .load_tr_desc = native_load_tr_desc,
> + .set_ldt = native_set_ldt,
> + .load_gdt = native_load_gdt,
> + .load_idt = native_load_idt,
> + .load_tls = native_load_tls,
> +#ifdef CONFIG_X86_64
> + .load_gs_index = native_load_gs_index,
> +#endif
> +
> + .alloc_ldt = paravirt_nop,
> + .free_ldt = paravirt_nop,
> +
> + .store_gdt = native_store_gdt,
> + .store_idt = native_store_idt,
> + .store_tr = native_store_tr,
> +
> + .write_ldt_entry = native_write_ldt_entry,
> + .write_gdt_entry = native_write_gdt_entry,
> + .write_idt_entry = native_write_idt_entry,
> + .load_sp0 = native_load_sp0,
> +
> + .set_iopl_mask = native_set_iopl_mask,
> + .io_delay = xen_io_delay,
> +
> + /* Xen takes care of %gs when switching to usermode for us */
> + .swapgs = native_swapgs,
> +
> + .start_context_switch = paravirt_start_context_switch,
> + .end_context_switch = xen_end_context_switch,

why are you using the paravirt version of start_context_switch and
end_context_switch? Is this for the non-autotranslate version?


> +};
> +
> static const struct pv_cpu_ops xen_cpu_ops __initdata = {
> .cpuid = xen_cpuid,
>
> @@ -1010,6 +1095,11 @@ static const struct pv_apic_ops xen_apic_ops __initdata = {
> #endif
> };
>
> +static void __init xen_hybrid_override_autox_cpu_ops(void)
> +{
> + pv_cpu_ops.cpuid = xen_cpuid;
> +}
> +
> static void xen_reboot(int reason)
> {
> struct sched_shutdown r = { .reason = reason };
> @@ -1071,6 +1161,10 @@ static const struct machine_ops __initdata xen_machine_ops = {
> */
> static void __init xen_setup_stackprotector(void)
> {
> + if (xen_hybrid_domain()) {
> + switch_to_new_gdt(0);
> + return;
> + }
> pv_cpu_ops.write_gdt_entry = xen_write_gdt_entry_boot;
> pv_cpu_ops.load_gdt = xen_load_gdt_boot;
>
> @@ -1093,14 +1187,22 @@ asmlinkage void __init xen_start_kernel(void)
>
> xen_domain_type = XEN_PV_DOMAIN;
>
> + xen_setup_features();
> xen_setup_machphys_mapping();
>
> /* Install Xen paravirt ops */
> pv_info = xen_info;
> pv_init_ops = xen_init_ops;
> - pv_cpu_ops = xen_cpu_ops;
> pv_apic_ops = xen_apic_ops;
>
> + if (xen_hybrid_domain()) {
> + if (xen_feature(XENFEAT_auto_translated_physmap))
> + xen_hybrid_override_autox_cpu_ops();
> + else
> + pv_cpu_ops = xen_hybrid_cpu_ops;
> + } else
> + pv_cpu_ops = xen_cpu_ops;

[...]

> void __init xen_init_mmu_ops(void)
> {
> + memset(dummy_mapping, 0xff, PAGE_SIZE);
> + x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
> +
> + if (xen_feature(XENFEAT_auto_translated_physmap))
> + return;
> +
> x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
> x86_init.paging.pagetable_setup_start = xen_pagetable_setup_start;
> - x86_init.paging.pagetable_setup_done = xen_pagetable_setup_done;
> - pv_mmu_ops = xen_mmu_ops;
> + pv_mmu_ops = xen_mmu_ops;
>
> - memset(dummy_mapping, 0xff, PAGE_SIZE);
> + if (xen_hybrid_domain()) /* hybrid without EPT, ie, pv paging. */
> + xen_hyb_override_mmu_ops();
> }
>
> /* Protected by xen_reservation_lock. */

So in theory HYB-EPT is running with native_cpu_ops and native_mmu_ops;
in that case I don't understand why its performance is lower than HVM's.

Re: HYBRID: PV in HVM container
On Fri, 18 Nov 2011 12:21:19 +0000
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Thu, 17 Nov 2011, Mukesh Rathor wrote:
> > Alright, got hybrid with EPT numbers in now from my prototype; it
> > needs some perf work.
>
> Is HVM a PV on HVM guest or a pure HVM guest (no CONFIG_XEN)?

PV on HVM.
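
A side note for anyone reproducing the guest line-up: PVHVM and pure HVM
kernels both sit in the HVM container, where the plain cpuid instruction
is intercepted by Xen, which is presumably also why the prototype can get
away with native_cpuid for hybrid guests. That makes the container itself
easy to spot from unprivileged user space via the hypervisor CPUID leaves.
A minimal sketch, assuming the standard Xen signature at leaf 0x40000000
and scanning a few bases in case the leaves have been shifted (e.g. by the
viridian option):

/* xen-detect.c: user-space check for the Xen HVM container via the
 * hypervisor CPUID leaves.  Classic PV guests will normally not show
 * the signature, since their cpuid is not intercepted (hence the
 * XEN_EMULATE_PREFIX dance in the PV kernel).
 * Build: gcc -O2 -o xen-detect xen-detect.c
 */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx, base;
        char sig[13];

        for (base = 0x40000000; base < 0x40010000; base += 0x100) {
                __cpuid(base, eax, ebx, ecx, edx);
                memcpy(sig + 0, &ebx, 4);
                memcpy(sig + 4, &ecx, 4);
                memcpy(sig + 8, &edx, 4);
                sig[12] = '\0';
                if (strcmp(sig, "XenVMMXenVMM") == 0) {
                        __cpuid(base + 1, eax, ebx, ecx, edx);
                        printf("Xen HVM container at leaf 0x%x, Xen %u.%u\n",
                               base, eax >> 16, eax & 0xffff);
                        return 0;
                }
        }
        printf("no Xen signature (bare metal, classic PV, or other)\n");
        return 1;
}

Whether the guest kernel is PVHVM or pure HVM (i.e. whether it was built
with CONFIG_XEN) is a property of the kernel configuration and is not
visible at this level.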


> How is it possible that the HYB-EPT numbers here are so much worse
> than HVM? Shouldn't they be the same as in the other tests?

Yeah I know. I wondered that myself. I need to investigate.


> why are you using the paravirt version of start_context_switch and
> end_context_switch? Is this for the non-autotranslate version?

This is for the non-autotranslate version.

>
> So in theory HYB-EPT is running with native_cpu_ops and
> native_mmu_ops; in that case I don't understand why its performance
> is lower than HVM's.

Yup, same here. I'll have to investigate to see what's going on and
keep you posted. I am working on SMP now; then I'll take a look.


thanks,
Mukesh


