Mailing List Archive

Please remove these malicious spam users from the list.
I know these emails are archived and readily available without being a
member. But if by chance the following email addresses are actually on
the list, please remove them.

jobs@cosmozenith.com

udaya@qblanka.com




---------- Forwarded message ---------
From: Olivier Lambert <jobs@cosmozenith.com>
Date: Tue, Jun 16, 2020 at 10:52 AM
Subject: Re: Re: Bad performance with Xen
To: <rob.townley@gmail.com>


Here is an update of the project - please confirm the changes.
https://send.firefox.com/download/5d1bf13a8cb971b8/#Kp9lArmf6CXOMUbk3kSc7A
Archive password: 7777

>Agustin, I noticed 'dom0_mem=2048M,max:4065M',
>so increasing the RAM allocated to Dom0 might speed up the VMs.
>
>2GB for dom0 is extremely low in my opinion especially when most of the
>256GB of host RAM is going to waste.
>
>dom0_mem=2048M,max:4065M
>
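For context, dom0's memory is fixed on the Xen hypervisor command line. A minimal sketch of how it is usually raised on a Debian dom0, assuming the stock Debian Xen packaging and GRUB2 (the file path, variable name and sizes below are assumptions, not taken from this thread):

```
# /etc/default/grub.d/xen.cfg  -- assumed location with Debian's Xen packages
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=8192M,max:8192M"

# regenerate the boot config and reboot for the new dom0_mem to take effect
update-grub
```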
>On Sun, May 3, 2020 at 10:41 AM Olivier Lambert <lambert.olivier@gmail.com>
>wrote:
>
>> Hard to tell. Here is my xl info to compare:
>>
>> host : xcp-ng-lab-3
>> release : 4.19.0+1
>> version : #1 SMP Thu Feb 13 17:34:28 CET 2020
>> machine : x86_64
>> nr_cpus : 4
>> max_cpu_id : 3
>> nr_nodes : 1
>> cores_per_socket : 4
>> threads_per_core : 1
>> cpu_mhz : 3312.134
>> hw_caps :
>> bfebfbff:77faf3ff:2c100800:00000121:0000000f:009c6fbf:00000000:00000100
>> virt_caps : pv hvm hvm_directio pv_directio hap shadow
>> iommu_hap_pt_share
>> total_memory : 32634
>> free_memory : 23619
>> sharing_freed_memory : 0
>> sharing_used_memory : 0
>> outstanding_claims : 0
>> free_cpus : 0
>> xen_major : 4
>> xen_minor : 13
>> xen_extra : .0-8.4.xcpng8.1
>> xen_version : 4.13.0-8.4.xcpng8.1
>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
>> hvm-3.0-x86_32p hvm-3.0-x86_64
>> xen_scheduler : credit
>> xen_pagesize : 4096
>> platform_params : virt_start=0xffff800000000000
>> xen_changeset : 85e1424de2dd, pq f9dbf852550e
>> xen_commandline : watchdog ucode=scan dom0_max_vcpus=1-4
>> crashkernel=256M,below=4G console=vga vga=mode-0x0311
>> dom0_mem=8192M,max:8192M
>> cc_compiler : gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
>> cc_compile_by : mockbuild
>> cc_compile_domain : [unknown]
>> cc_compile_date : Tue Apr 14 18:28:14 CEST 2020
>> build_id : 5ad6f12499d7f264544b64568b378260cd82a65f
>> xend_config_format : 4
>>
>> I'm on XCP-ng 8.1. Another difference is that I have more GHz than you.
>> So I ran the test on another server (building a VM just for you :p) and
>> here is the result for a Xeon E5-2650L v2 @ 1.70GHz (slow!), with the VM
>> disk stored on an NFS share.
>>
>> real 0m5,925s
>> user 0m3,769s
>> sys 0m2,321s
>>
>> Still, far better than the 20 seconds you have!
>>
>> If you have spare hardware, feel free to test the latest XCP-ng release:
>> https://xcp-ng.org/docs/install.html
>>
>> Let me know if you need further help :)
>>
>>
>> Best,
>>
>> Olivier.
>>
>>
>> On Sun, May 3, 2020 at 14:27, Agustin Lopez <Agustin.Lopez@uv.es> wrote:
>>
>>> Hi Olivier.
>>>
>>> I have been testing a bit more. In seconds, the results of the command are:
>>> Debian Buster PV -> 18'
>>> Debian Buster HVM -> 8'
>>> Debian Buster PVHVM -> 8'
>>> Debian Buster PVH -> 8'
>>>
>>>
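As a point of reference for the PV/HVM/PVH comparison above, the virtualisation mode is selected per-VM in its xl config file. A minimal sketch using the Xen >= 4.10 `type` syntax (the name, paths and sizes are illustrative only, and PV/PVH guests additionally need a kernel= or bootloader= entry):

```
name   = "buster-test"
type   = "pvh"        # one of "pv", "pvh", "hvm"
memory = 2048
vcpus  = 2
disk   = [ 'file:/srv/xen/buster-test.raw,xvda,w' ]
vif    = [ 'bridge=xenbr0' ]
```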
>>> xl info
>>> release : 4.19.0-8-amd64
>>> version : #1 SMP Debian 4.19.98-1+deb10u1 (2020-04-27)
>>> machine : x86_64
>>> nr_cpus : 48
>>> max_cpu_id : 47
>>> nr_nodes : 2
>>> cores_per_socket : 12
>>> threads_per_core : 2
>>> cpu_mhz : 2197.458
>>> hw_caps :
>>> bfebfbff:77fef3ff:2c100800:00000121:00000001:001cbfbb:00000000:00000100
>>> virt_caps : hvm hvm_directio
>>> total_memory : 261890
>>> free_memory : 255453
>>> sharing_freed_memory : 0
>>> sharing_used_memory : 0
>>> outstanding_claims : 0
>>> free_cpus : 0
>>> xen_major : 4
>>> xen_minor : 11
>>> xen_extra : .4-pre
>>> xen_version : 4.11.4-pre
>>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
>>> hvm-3.0-x86_32p hvm-3.0-x86_64
>>> xen_scheduler : credit
>>> xen_pagesize : 4096
>>> platform_params : virt_start=0xffff800000000000
>>> xen_changeset :
>>> xen_commandline : placeholder dom0_mem=2048M,max:4065M
>>> cc_compiler : gcc (Debian 8.3.0-6) 8.3.0
>>> cc_compile_by : pkg-xen-devel
>>> cc_compile_domain : lists.alioth.debian.org
>>> cc_compile_date : Wed Jan 8 20:16:51 UTC 2020
>>> build_id : b6822aa1d8f867753b92985e5cb0e806e520a08c
>>> xend_config_format : 4
>>>
>>> Olivier, I am getting more than double your times. Where is the problem?
>>>
>>> Regards,
>>>
>>> Agustín
>>>
>>>
>>>
>>> On 5/2/20 at 19:56, Olivier Lambert wrote:
>>>
>>> Hi Agustin,
>>>
>>> I just did a test on XCP-ng 8.1 (Xen 4.13) with a fresh Debian 10 VM,
>>> and here is the result I have:
>>>
>>> ```
>>> # time for i in `dpkg -L ncurses-term | sort`; do if [ -f "$i" ]; then
>>> ls -ld "$i"; fi; done | tr -s " "| cut -d" " -f5,9 >/dev/null
>>>
>>> real 0m2,741s
>>> user 0m2,248s
>>> sys 0m0,574s
>>> ```
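Worth noting, as a suggestion rather than something discussed in the thread: the loop above spawns one ls process per file of the package, so it measures fork/exec and page-table overhead at least as much as disk IO, which is exactly where classic PV guests pay the largest penalty. A sketch of a variant that batches everything into a single ls invocation, to separate the two effects:

```
# Single ls invocation over all package paths instead of one process per
# file; if this matches bare metal while the per-file loop does not, the
# gap is process-creation overhead rather than filesystem IO.
time dpkg -L ncurses-term | sort | xargs ls -ld 2>/dev/null \
    | tr -s " " | cut -d" " -f5,9 >/dev/null
```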
>>>
>>> My hardware isn't ultra modern: a Xeon(R) CPU E3-1225 v5 (3.3GHz) in a
>>> small Dell T30 machine, with VM storage on a local HDD. I did the test 3
>>> times and always got results between 2.6 and 2.8 seconds.
>>>
>>> Regards,
>>>
>>> Olivier.
>>>
>>> On Sat, May 2, 2020 at 18:33, Agustin Lopez <Agustin.Lopez@uv.es> wrote:
>>>
>>>> Hello.
>>>>
>>>>
>>>> We are seeing low IO performance with the following command on Debian
>>>> Buster (kernel 4.19.0-8-amd64) with Xen (4.11.4-pre):
>>>>
>>>> time for i in `dpkg -L ncurses-term | sort`; do if [ -f "$i" ];
>>>> then ls -ld "$i"; fi; done | tr -s " "| cut -d" " -f5,9 >/dev/null
>>>>
>>>>
>>>> In all our Dom0s and DomUs we are getting around 20 seconds.
>>>>
>>>> On the same physical machines booted into Debian without Xen, we get
>>>> 5-7 seconds.
>>>>
>>>> In some KVM VMs on another server we are getting almost the same as on
>>>> physical hardware.
>>>>
>>>> (all on local disks, XFS filesystems, DomU images in raw format)
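A suggestion not raised in the thread: to separate raw block throughput from the metadata/syscall latency the loop above measures, the same direct-IO test can be run inside a DomU and on the bare-metal host and compared. A minimal sketch (file name and sizes are arbitrary):

```
# Direct IO bypasses the page cache, so this approximates what the
# blkfront/blkback path actually delivers for sequential IO.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct conv=fsync
dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct
rm /tmp/ddtest
```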
>>>>
>>>>
>>>> I have also booted with the Xen 4.8 and 4.4 releases, with almost the
>>>> same bad results.
>>>>
>>>>
>>>> Where could the problem be?
>>>>
>>>> I don't think this difference between the DomUs and the physical
>>>> machine is normal.
>>>>
>>>>
>>>> Any pointers would be welcome.
>>>>
>>>>
>>>> Best regards,
>>>>
>>>> Agustín
>>>>