
Domain-0 CPU pinning - is it still a best practice?
Hello,

A lot of the information on
https://wiki.xenproject.org/wiki/Xen_Project_Best_Practices is very
old now (e.g. it still shows the xm toolstack, or commands that are
no longer valid for xl). Are people still dedicating CPU cores to
dom0 in modern versions of Xen with the credit2 scheduler?

If I put this on the hypervisor command line:

dom0_max_vcpus=2 dom0_vcpus_pin

how do I know which PCPUs dom0 will be pinned to?
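
(For what it's worth, I know I can see where the vCPUs actually
landed after boot with:

# xl vcpu-list Domain-0

which prints each dom0 vCPU, the PCPU it is currently on, and its
affinity - but I'd like to know in advance.)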

I am aware that after boot I can:

# xl vcpu-pin Domain-0 0 0 -
# xl vcpu-pin Domain-0 1 1 -

to pin Domain-0 to PCPUs 0 and 1.

And then potentially set:

cpus = "all,^0,^1"

in every guest config to tell them to use "all PCPUs except 0 and 1",
thus leaving PCPUs 0 and 1 exclusively to Domain-0.
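
For example, an illustrative config fragment (name and sizes made up):

# guest config, e.g. /etc/xen/guest1.cfg
name   = "guest1"
memory = 2048
vcpus  = 2
# any PCPU except 0 and 1, leaving those to Domain-0
cpus   = "all,^0,^1"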

Cheers,
Andy
Re: Domain-0 CPU pinning - is it still a best practice?
> for xl). Are people still dedicating CPU cores to dom0 in modern
> versions of Xen with credit2 scheduler?
>
> If I put this on the hypervisor command line:
>
> dom0_max_vcpus=2 dom0_vcpus_pin
>
> how do I know which PCPUs dom0 will be pinned to?


I do this, but then I take it a little bit further, especially with NUMA hosts.


Let’s suppose I have a dual-socket machine with 8-core CPUs and HT enabled. 2 sockets x 8 cores x 2 threads = 32 logical CPUs, right?

I use those hypervisor command line options and some commands in rc.local to end up with the following:


# xl cpupool-list
Name               CPUs   Sched    Active   Domain count
Pool-dom0             4   credit        y              1
Pool-CPU1            12   credit        y              5
Pool-CPU2            16   credit        y              3


dom0 ends up being assigned the first 2 physical cores on the first
CPU (4 logical CPUs).

I then have CPU pools for the remainder of the first CPU, and another
for the entire second CPU.

This gives me plenty of dedicated horsepower for handling IO and s/w
RAID ops.
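
The boot side is just the usual Xen command line options; in my case
something along these lines (GRUB2 on a Debian-ish box - the file and
variable name differ by distro - with dom0 getting 4 vCPUs to match
the pool above):

GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=4 dom0_vcpus_pin"

followed by update-grub and a reboot.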

The rc.local part, roughly (the pool names created by the numa split
vary a little between Xen versions):

xl cpupool-numa-split
xl cpupool-rename Pool-0 Pool-dom0

# shrink Pool-dom0 to the first two physical cores (logical CPUs 0-3)
xl cpupool-cpu-remove Pool-dom0 4
xl cpupool-cpu-remove Pool-dom0 5
# ...repeat for CPUs 6 through 15...

# the split left node 1's CPUs in their own pool; free them up so
# they can go into Pool-CPU2 below
xl cpupool-destroy Pool-1

# quotes are escaped so the shell passes them through to xl's parser
xl cpupool-create name=\"Pool-CPU1\" cpus=[\"4\",\"5\",\"6\",\"7\",\"8\",\"9\",\"10\",\"11\",\"12\",\"13\",\"14\",\"15\"]
xl cpupool-create name=\"Pool-CPU2\" cpus=[\"16\",\"17\",\"18\",\"19\",\"20\",\"21\",\"22\",\"23\",\"24\",\"25\",\"26\",\"27\",\"28\",\"29\",\"30\",\"31\"]
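
Afterwards, you can sanity-check which logical CPUs landed in each
pool (rather than just the counts) with:

# xl cpupool-list -c

and each guest is steered into its pool with a line like

pool = "Pool-CPU1"

in its config, which is where the domain counts above come from.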



Brian
Re: Domain-0 CPU pinning - is it still a best practice?
> cpus = "all,^0,^1"

I'd rather share all CPUs and let the scheduler prioritize dom0 as such
(the default weight is 256):

xl sched-credit2 --domain=0 --weight=512

The only drawback is that I need to be careful about what I do on the
host, to leave some juice available for the guests. The good thing
about it is that dom0 also shares all the cores, I guess.
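
Note the weight is a runtime-only setting, so it has to be re-applied
at each boot (rc.local or similar). To check it took effect, run it
without arguments to list the per-domain weights:

xl sched-credit2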

Sorry it's not a definitive answer, as I did not compare and
stress-bench this against an old-school pinning setup.

--
Pierre-Philipp Braun
SMTP Health Campaign: enforce STARTTLS and verify MX certificates
<https://nethence.com/smtp/>
Re: Domain-0 CPU pinning - is it still a best practice?
Some things I feel comfortable entrusting to the hypervisor scheduler; some things I don’t.

For the latter, it gives me more peace of mind to dedicate full blocks of resources.

IO and RAID ops (being core to almost any other operation performed on the machine) fall into that second category.


My $0.02.



> On Jan 31, 2021, at 3:43 PM, Pierre-Philipp Braun <pbraun@nethence.com> wrote:
>
>> cpus = "all,^0,^1"
>
> I'd rather share all CPUs and let the scheduler prioritize dom0 as such (the default weight is 256):
>
> xl sched-credit2 --domain=0 --weight=512
>
> The only drawback is that I need to be careful about what I do on the host, to leave some juice available for the guests. The good thing about it is that dom0 also shares all the cores, I guess.
>
> Sorry it's not a definitive answer, as I did not compare and stress-bench this against an old-school pinning setup.