
Aligning Xen to physical memory maps on embedded systems
Hi,

I am booting True Dom0-less on Xilinx ZynqMP UltraScale+ using Xen 4.11,
taken from https://github.com/Xilinx/xen.

The system has 2GB of RAM (0x00000000 - 0x80000000), of which Xen and the DomUs
have an allocation of 1.25GB, per this memory map:
1. DomU1: 0x60000000 - 0x80000000
2. DomU2: 0x40000000 - 0x60000000
3. Xen: 0x30000000 - 0x40000000

I am able to support True Dom0-less by means of the patch/hack demonstrated
by Stefano Stabellini at https://youtu.be/UfiP9eAV0WA?t=1746.

I was able to force the Xen binary into the address range immediately
below 0x40000000 by modifying get_xen_paddr() - in itself an ugly hack.
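
Roughly, the constraint amounts to something like the following standalone
sketch (this is not the actual get_xen_paddr() code; the bank structure,
helper name and image size are made up for illustration):

/* Standalone sketch, not the real get_xen_paddr(): pick the highest
 * 2MB-aligned load address for the Xen image, but cap the search at
 * 0x40000000 instead of the top of RAM. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define XEN_PADDR_CAP 0x40000000ULL   /* keep Xen below DomU2 */
#define ALIGN_2MB     0x200000ULL

struct bank { uint64_t start, size; };

static uint64_t pick_xen_paddr(const struct bank *b, int nr, uint64_t xen_size)
{
    uint64_t best = 0;

    for (int i = 0; i < nr; i++) {
        uint64_t end = b[i].start + b[i].size;

        if (end > XEN_PADDR_CAP)                      /* cap the search window */
            end = XEN_PADDR_CAP;
        if (end <= b[i].start || end - b[i].start < xen_size)
            continue;

        uint64_t candidate = (end - xen_size) & ~(ALIGN_2MB - 1);
        if (candidate >= b[i].start && candidate > best)
            best = candidate;
    }
    return best;
}

int main(void)
{
    struct bank ram[] = { { 0x00000000, 0x80000000 } };   /* the 2GB bank */

    /* With a 4MB image this prints an address just below 0x40000000. */
    printf("Xen paddr: 0x%" PRIx64 "\n", pick_xen_paddr(ram, 1, 4 * 1024 * 1024));
    return 0;
}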

My questions are:
1. Since Xen performs runtime allocations from its heap, it is allocating
downwards from 0x80000000 - thereby "stealing" memory from DomU1.
Can I force the runtime allocations to be from a specific address range?
2. Has the issue of aligning to physical memory maps been addressed by Xen for embedded systems?

Thank you.
Dov
The information in this e-mail transmission contains proprietary and business
sensitive information. Unauthorized interception of this e-mail may constitute
a violation of law. If you are not the intended recipient, you are hereby
notified that any review, dissemination, distribution or duplication of this
communication is strictly prohibited. You are also asked to contact the sender
by reply email and immediately destroy all copies of the original message.
Re: Aligning Xen to physical memory maps on embedded systems
(+ Stefano)

On 21/02/2021 16:30, Levenglick Dov wrote:
> Hi,

Hi,

> I am booting True Dom0-less on Xilinx ZynqMP UltraScale+ using Xen 4.11,
> taken from https://github.com/Xilinx/xen.

This tree is not an official Xen Project tree. I can provide feedback
based on how Xen upstream works, but I don't know for sure if this will
apply to the Xilinx tree.

For any support, I would recommend contacting Xilinx directly.

> The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen and the DomU
> have an allocation of 1.25GB, per this memory map:
> 1. DomU1: 0x60000000 - 0x80000000
> 2. DomU2: 0x40000000 - 0x60000000
> 3. Xen: 0x30000000 - 0x40000000

How did you tell Xen which regions are assigned to which guests? Are your
domains mapped 1:1 (i.e. guest physical address == host physical address)?

>
> I am able to support True Dom0-less by means of the patch/hack demonstrated
> By Stefano Stabellini at https://youtu.be/UfiP9eAV0WA?t=1746.
>
> I was able to forcefully put the Xen binary at the address range immediately
> below 0x40000000 by means of modifying get_xen_paddr() - in itself an ugly hack.
>
> My questions are:
> 1. Since Xen performs runtime allocations from its heap, it is allocating
> downwards from 0x80000000 - thereby "stealing" memory from DomU1.

In theory, any memory reserved for domains should have been carved out
from the heap allocator. This would be sufficient to prevent Xen
allocating memory from the ranges you described above.

Therefore, to me this looks like a bug in the tree you are using.
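
To illustrate what the carve-out amounts to, here is a standalone sketch
(not actual Xen code; the structures and the reserved list are assumptions
based on the memory map you described). With the carve-out in place, only
the remaining range ever reaches the heap allocator:

/* Standalone sketch, not Xen code: subtract the Xen and per-domain
 * reserved ranges from a RAM bank before handing memory to the heap
 * allocator, so Xen can never allocate from the DomU regions. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <inttypes.h>

struct range { uint64_t start, end; };   /* half-open: [start, end) */

/* Stand-in for feeding one free range to the heap allocator. */
static void give_to_heap(uint64_t start, uint64_t end)
{
    if (start < end)
        printf("heap gets: 0x%08" PRIx64 " - 0x%08" PRIx64 "\n", start, end);
}

int main(void)
{
    struct range ram = { 0x00000000, 0x80000000 };       /* 2GB bank */
    struct range reserved[] = {                          /* sorted, in-bank */
        { 0x30000000, 0x40000000 },                      /* Xen   */
        { 0x40000000, 0x60000000 },                      /* DomU2 */
        { 0x60000000, 0x80000000 },                      /* DomU1 */
    };
    uint64_t cur = ram.start;

    for (size_t i = 0; i < sizeof(reserved) / sizeof(reserved[0]); i++) {
        give_to_heap(cur, reserved[i].start);            /* gap before range */
        cur = reserved[i].end;
    }
    give_to_heap(cur, ram.end);                          /* tail of the bank */

    /* Prints only "heap gets: 0x00000000 - 0x30000000". */
    return 0;
}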

> Can I force the runtime allocations to be from a specific address range?
> 2. Has the issue of physical memory map address maps been addressed by Xen for embedded?

Xen 4.12+ will not relocate itself to the top of memory anymore.
Instead, it will stay where it was first loaded in memory.

I would recommend asking Xilinx if they can provide you with a more
recent tree.

>
> Thank you.
> Dov
> The information in this e-mail transmission contains proprietary and business
> sensitive information. Unauthorized interception of this e-mail may constitute
> a violation of law. If you are not the intended recipient, you are hereby
> notified that any review, dissemination, distribution or duplication of this
> communication is strictly prohibited. You are also asked to contact the sender
> by reply email and immediately destroy all copies of the original message.
In general, disclaimers should be dropped from e-mails sent to public
mailing lists (by definition your e-mail is widely distributed).

Cheers,

--
Julien Grall
RE: Aligning Xen to physical memory maps on embedded systems
Hi Julien et al.,

>
> (+ Stefano)
>
> On 21/02/2021 16:30, Levenglick Dov wrote:
> > Hi,
>
> Hi,
>
> > I am booting True Dom0-less on Xilinx ZynqMP UltraScale+ using Xen
> > 4.11, taken from https://github.com/Xilinx/xen.
>
> This tree is not an official Xen Project tree. I can provide feedback based on
> how Xen upstream works, but I don't know for sure if this will apply to the
> Xilinx tree.
>
> For any support, I would recommend to contect Xilinx directly.

I will approach their representatives. Can you comment on the approach that I
outline in the rest of the mail as though it were referring to upstream Xen?

> > The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen and
> > the DomU have an allocation of 1.25GB, per this memory map:
> > 1. DomU1: 0x60000000 - 0x80000000
> > 2. DomU2: 0x40000000 - 0x60000000
> > 3. Xen: 0x30000000 - 0x40000000
>
> How did you tell Xen which regions is assigned to which guests? Are your
> domain mapped 1:1 (i.e guest physical address == host physical address)?

I am working on a solution where, if the "xen,domain" memory property has #size-cells
cells, the content is backward compatible, but if it contains (#address-cells +
#size-cells) cells, the address cells are taken as the physical start address.
During the mapping of the entire address space in setup_mm(), the carved out addresses
would be added to the reserved memory address space. When the DomU is to be created,
this physical space would be mapped to it. The virtual addresses are less of an issue and
needn't be mapped 1:1 (although they could be).
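
A rough sketch of the parsing I have in mind (standalone, not an existing
Xen binding; the cell widths and helper names are assumptions for
illustration):

/* Standalone sketch of the proposed backward-compatible parsing of the
 * "xen,domain" memory property (not an existing binding):
 *   - #size-cells cells only             -> legacy: size, Xen picks the address
 *   - #address-cells + #size-cells cells -> new: explicit start address + size
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <inttypes.h>
#include <arpa/inet.h>   /* ntohl()/htonl(): device tree cells are big-endian */

#define ADDRESS_CELLS 2
#define SIZE_CELLS    2

static uint64_t read_cells(const uint32_t *cells, int n)
{
    uint64_t v = 0;
    for (int i = 0; i < n; i++)
        v = (v << 32) | ntohl(cells[i]);
    return v;
}

static int parse_domu_memory(const uint32_t *prop, size_t len_cells,
                             uint64_t *start, uint64_t *size, int *has_start)
{
    if (len_cells == SIZE_CELLS) {                    /* legacy layout */
        *has_start = 0;
        *size = read_cells(prop, SIZE_CELLS);
        return 0;
    }
    if (len_cells == ADDRESS_CELLS + SIZE_CELLS) {    /* proposed layout */
        *has_start = 1;
        *start = read_cells(prop, ADDRESS_CELLS);
        *size  = read_cells(prop + ADDRESS_CELLS, SIZE_CELLS);
        return 0;
    }
    return -1;                                        /* malformed property */
}

int main(void)
{
    /* DomU1 from the map above: start 0x60000000, size 0x20000000. */
    const uint32_t prop[] = { htonl(0x0), htonl(0x60000000),
                              htonl(0x0), htonl(0x20000000) };
    uint64_t start = 0, size = 0;
    int has_start = 0;

    if (!parse_domu_memory(prop, 4, &start, &size, &has_start) && has_start)
        printf("DomU1: 0x%" PRIx64 " - 0x%" PRIx64 "\n", start, start + size);
    return 0;
}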

>
> >
> > I am able to support True Dom0-less by means of the patch/hack
> > demonstrated By Stefano Stabellini at
> https://youtu.be/UfiP9eAV0WA?t=1746.
> >
> > I was able to forcefully put the Xen binary at the address range
> > immediately below 0x40000000 by means of modifying get_xen_paddr() -
> in itself an ugly hack.
> >
> > My questions are:
> > 1. Since Xen performs runtime allocations from its heap, it is allocating
> > downwards from 0x80000000 - thereby "stealing" memory from DomU1.
>
> In theory, any memory reserved for domains should have been carved out
> from the heap allocator. This would be sufficient to prevent Xen allocating
> memory from the ranges you described above.
>
> Therefore, to me this looks like a bug in the tree you are using.

This would be a better approach, but because Xen performs allocations from its
heap prior to allocating memory to the DomUs - and since it allocates from the top of
the heap - it is basically taking memory that I wanted to set aside for the DomUs.
This is why I am thinking of reserving the memory.

>
> > Can I force the runtime allocations to be from a specific address range?
> > 2. Has the issue of physical memory map address maps been addressed by
> Xen for embedded?
>
> Xen 4.12+ will not relocate itself to the top of the memory anymore.
> Instead, it will stay where it was first loaded in memory.
>
> I would recommend to ask Xilinx if they can provide you with a more recent
> tree.

Will do.

>
> >
> > Thank you.
> > Dov
>
> Cheers,
>
> --
> Julien Grall

Thank you,
Dov


Re: Aligning Xen to physical memory maps on embedded systems
Hi,

On 22/02/2021 13:37, Levenglick Dov wrote:
>> (+ Stefano)
>>
>> On 21/02/2021 16:30, Levenglick Dov wrote:
>>> Hi,
>>
>> Hi,
>>
>>> I am booting True Dom0-less on Xilinx ZynqMP UltraScale+ using Xen
>>> 4.11, taken from https://github.com/Xilinx/xen.
>>
>> This tree is not an official Xen Project tree. I can provide feedback based on
>> how Xen upstream works, but I don't know for sure if this will apply to the
>> Xilinx tree.
>>
>> For any support, I would recommend to contect Xilinx directly.
>
> I will approach their representatives. Can you comment regarding the approach that I
> outline in the rest of the mail as though it were referring to the Xen upstream?
>
>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen and
>>> the DomU have an allocation of 1.25GB, per this memory map:
>>> 1. DomU1: 0x60000000 - 0x80000000
>>> 2. DomU2: 0x40000000 - 0x60000000
>>> 3. Xen: 0x30000000 - 0x40000000
>>
>> How did you tell Xen which regions is assigned to which guests? Are your
>> domain mapped 1:1 (i.e guest physical address == host physical address)?
>
> I am working on a solution where if the "xen,domain" memory has #size-cell cells the
> content is backward compatible. But if it contains (#address-cells + #size-cells), the
> address cells should be considered the physical start address.
> During the mapping of the entire address space insetup_mm(), the carved out addresses
> would be added to the reserved memory address space. When the DomU is to be created,
> this physical space would be mapped to it. The virtual addresses are less of an issue and
> needn't be mapped 1x1 (although they could be).
>
>>
>>>
>>> I am able to support True Dom0-less by means of the patch/hack
>>> demonstrated By Stefano Stabellini at
>> https://youtu.be/UfiP9eAV0WA?t=1746.
>>>
>>> I was able to forcefully put the Xen binary at the address range
>>> immediately below 0x40000000 by means of modifying get_xen_paddr() -
>> in itself an ugly hack.
>>>
>>> My questions are:
>>> 1. Since Xen performs runtime allocations from its heap, it is allocating
>>> downwards from 0x80000000 - thereby "stealing" memory from DomU1.
>>
>> In theory, any memory reserved for domains should have been carved out
>> from the heap allocator. This would be sufficient to prevent Xen allocating
>> memory from the ranges you described above.
>>
>> Therefore, to me this looks like a bug in the tree you are using.
>
> This would be a better approach, but because Xen perform allocations from its
> heap prior to allocating memory to DomU - and since it allocates from the top of
> the heap - it is basically taking memory that I wanted to set aside for the DomU.
> This is why I am thinking of reserving the memory.

That's correct. We want to carve out memory from the heap allocator so
it can't be used by Xen. I would recommend reading [1], where we discussed
the issue of reserving memory at greater length.

Cheers,

[1]
https://lore.kernel.org/xen-devel/a316ed70-da35-8be0-a092-d992e56563d2@xen.org/

--
Julien Grall
Re: Aligning Xen to physical memory maps on embedded systems
On Mon, 22 Feb 2021, Julien Grall wrote:
> (+ Stefano)

Hi Julien, thanks for CCing me.


> On 21/02/2021 16:30, Levenglick Dov wrote:
> > Hi,
>
> Hi,
>
> > I am booting True Dom0-less on Xilinx ZynqMP UltraScale+ using Xen 4.11,
> > taken from https://github.com/Xilinx/xen.
>
> This tree is not an official Xen Project tree. I can provide feedback based on
> how Xen upstream works, but I don't know for sure if this will apply to the
> Xilinx tree.
>
> For any support, I would recommend to contect Xilinx directly.
>
> > The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen and the
> > DomU
> > have an allocation of 1.25GB, per this memory map:
> > 1. DomU1: 0x60000000 - 0x80000000
> > 2. DomU2: 0x40000000 - 0x60000000
> > 3. Xen: 0x30000000 - 0x40000000
>
> How did you tell Xen which regions is assigned to which guests? Are your
> domain mapped 1:1 (i.e guest physical address == host physical address)?
>
> > I am able to support True Dom0-less by means of the patch/hack demonstrated
> > By Stefano Stabellini at https://youtu.be/UfiP9eAV0WA?t=1746.
> >
> > I was able to forcefully put the Xen binary at the address range immediately
> > below 0x40000000 by means of modifying get_xen_paddr() - in itself an ugly
> > hack.
> >
> > My questions are:
> > 1. Since Xen performs runtime allocations from its heap, it is allocating
> > downwards from 0x80000000 - thereby "stealing" memory from DomU1.
>
> In theory, any memory reserved for domains should have been carved out from
> the heap allocator. This would be sufficient to prevent Xen allocating memory
> from the ranges you described above.
>
> Therefore, to me this looks like a bug in the tree you are using.
>
> > Can I force the runtime allocations to be from a specific address range?
> > 2. Has the issue of physical memory map address maps been addressed by Xen
> > for embedded?
>
> Xen 4.12+ will not relocate itself to the top of the memory anymore. Instead,
> it will stay where it was first loaded in memory.
>
> I would recommend to ask Xilinx if they can provide you with a more recent
> tree.

The following is based on 4.13:

https://github.com/xilinx/xen/tree/xilinx/release-2020.2
RE: Aligning Xen to physical memory maps on embedded systems
On Mon, 22 Feb 2021, Levenglick Dov wrote:
> > > The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen and
> > > the DomU have an allocation of 1.25GB, per this memory map:
> > > 1. DomU1: 0x60000000 - 0x80000000
> > > 2. DomU2: 0x40000000 - 0x60000000
> > > 3. Xen: 0x30000000 - 0x40000000
> >
> > How did you tell Xen which regions is assigned to which guests? Are your
> > domain mapped 1:1 (i.e guest physical address == host physical address)?
>
> I am working on a solution where if the "xen,domain" memory has #size-cell cells the
> content is backward compatible. But if it contains (#address-cells + #size-cells), the
> address cells should be considered the physical start address.
> During the mapping of the entire address space insetup_mm(), the carved out addresses
> would be added to the reserved memory address space. When the DomU is to be created,
> this physical space would be mapped to it. The virtual addresses are less of an issue and
> needn't be mapped 1x1 (although they could be).

As of today, neither upstream Xen nor the Xilinx Xen tree comes with the
feature of allowing the specification of an address range for dom0less
guests.

The only thing that Xilinx Xen allows, which is not upstream yet, is the
ability to create dom0less guests 1:1 mapped using the "direct-map"
property. But the memory allocation is still done by Xen (you can't
select the addresses).

Some time ago I worked on a hacky prototype to allow the specification
of address ranges, see:

http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git direct-map-2
from 7372466b21c3b6c96bb7a52754e432bac883a1e3 onward.

In particular, have a look at "xen/arm: introduce 1:1 mapping for
domUs". The work is not complete: it might not work depending on the
memory ranges you select for your domUs. In particular, you can't select
top-of-RAM addresses for your domUs. However, it might help you get
started.


> > > I am able to support True Dom0-less by means of the patch/hack
> > > demonstrated By Stefano Stabellini at
> > https://youtu.be/UfiP9eAV0WA?t=1746.
> > >
> > > I was able to forcefully put the Xen binary at the address range
> > > immediately below 0x40000000 by means of modifying get_xen_paddr() -
> > in itself an ugly hack.
> > >
> > > My questions are:
> > > 1. Since Xen performs runtime allocations from its heap, it is allocating
> > > downwards from 0x80000000 - thereby "stealing" memory from DomU1.
> >
> > In theory, any memory reserved for domains should have been carved out
> > from the heap allocator. This would be sufficient to prevent Xen allocating
> > memory from the ranges you described above.
> >
> > Therefore, to me this looks like a bug in the tree you are using.
>
> This would be a better approach, but because Xen perform allocations from its
> heap prior to allocating memory to DomU - and since it allocates from the top of
> the heap - it is basically taking memory that I wanted to set aside for the DomU.

Yeah, this is the main problem that my prototype above couldn't solve.
Re: Aligning Xen to physical memory maps on embedded systems
(+ Penny, Wei and Luca)

> On 23 Feb 2021, at 01:52, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Mon, 22 Feb 2021, Levenglick Dov wrote:
>>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen and
>>>> the DomU have an allocation of 1.25GB, per this memory map:
>>>> 1. DomU1: 0x60000000 - 0x80000000
>>>> 2. DomU2: 0x40000000 - 0x60000000
>>>> 3. Xen: 0x30000000 - 0x40000000
>>>
>>> How did you tell Xen which regions is assigned to which guests? Are your
>>> domain mapped 1:1 (i.e guest physical address == host physical address)?
>>
>> I am working on a solution where if the "xen,domain" memory has #size-cell cells the
>> content is backward compatible. But if it contains (#address-cells + #size-cells), the
>> address cells should be considered the physical start address.
>> During the mapping of the entire address space insetup_mm(), the carved out addresses
>> would be added to the reserved memory address space. When the DomU is to be created,
>> this physical space would be mapped to it. The virtual addresses are less of an issue and
>> needn't be mapped 1x1 (although they could be).
>
> As of today neither upstream Xen nor the Xilinx Xen tree come with the
> feature of allowing the specification of an address range for dom0less
> guests.
>
> The only thing that Xilinx Xen allows, which is not upstream yet, is the
> ability of creating dom0less guests 1:1 mapped using the "direct-map"
> property. But the memory allocation is still done by Xen (you can't
> select the addresses).
>
> Some time ago I worked on a hacky prototype to allow the specification
> of address ranges, see:
>
> http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git direct-map-2
> from 7372466b21c3b6c96bb7a52754e432bac883a1e3 onward.
>
> In particular, have a look at "xen/arm: introduce 1:1 mapping for
> domUs". The work is not complete: it might not work depending on the
> memory ranges you select for your domUs. In particular, you can't select
> top-of-RAM addresses for your domUs. However, it might help you getting
> started.
>
>
>>>> I am able to support True Dom0-less by means of the patch/hack
>>>> demonstrated By Stefano Stabellini at
>>> https://youtu.be/UfiP9eAV0WA?t=1746.
>>>>
>>>> I was able to forcefully put the Xen binary at the address range
>>>> immediately below 0x40000000 by means of modifying get_xen_paddr() -
>>> in itself an ugly hack.
>>>>
>>>> My questions are:
>>>> 1. Since Xen performs runtime allocations from its heap, it is allocating
>>>> downwards from 0x80000000 - thereby "stealing" memory from DomU1.
>>>
>>> In theory, any memory reserved for domains should have been carved out
>>> from the heap allocator. This would be sufficient to prevent Xen allocating
>>> memory from the ranges you described above.
>>>
>>> Therefore, to me this looks like a bug in the tree you are using.
>>
>> This would be a better approach, but because Xen perform allocations from its
>> heap prior to allocating memory to DomU - and since it allocates from the top of
>> the heap - it is basically taking memory that I wanted to set aside for the DomU.
>
> Yeah, this is the main problem that my prototype above couldn't solve.
>

Wei and Penny are working on direct map and static allocation to fit embedded use
cases and might have more answers there.

Regarding the fix from Stefano explained in the video, Luca Fancellu has made a patch
proposing a long-term solution and will push it upstream next week.

Cheers
Bertrand
RE: Aligning Xen to physical memory maps on embedded systems
>
> (+ Penny, Wei and Luca)
>
> > On 23 Feb 2021, at 01:52, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >
> > On Mon, 22 Feb 2021, Levenglick Dov wrote:
> >>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen
> >>>> and the DomU have an allocation of 1.25GB, per this memory map:
> >>>> 1. DomU1: 0x60000000 - 0x80000000
> >>>> 2. DomU2: 0x40000000 - 0x60000000
> >>>> 3. Xen: 0x30000000 - 0x40000000
> >>>
> >>> How did you tell Xen which regions is assigned to which guests? Are
> >>> your domain mapped 1:1 (i.e guest physical address == host physical
> address)?
> >>
> >> I am working on a solution where if the "xen,domain" memory has
> >> #size-cell cells the content is backward compatible. But if it
> >> contains (#address-cells + #size-cells), the address cells should be
> considered the physical start address.
> >> During the mapping of the entire address space insetup_mm(), the
> >> carved out addresses would be added to the reserved memory address
> >> space. When the DomU is to be created, this physical space would be
> >> mapped to it. The virtual addresses are less of an issue and needn't be
> mapped 1x1 (although they could be).
> >
> > As of today neither upstream Xen nor the Xilinx Xen tree come with the
> > feature of allowing the specification of an address range for dom0less
> > guests.
> >
> > The only thing that Xilinx Xen allows, which is not upstream yet, is
> > the ability of creating dom0less guests 1:1 mapped using the "direct-map"
> > property. But the memory allocation is still done by Xen (you can't
> > select the addresses).
> >
> > Some time ago I worked on a hacky prototype to allow the specification
> > of address ranges, see:
> >
> > http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable
> > .git direct-map-2 from 7372466b21c3b6c96bb7a52754e432bac883a1e3
> onward.
> >
> > In particular, have a look at "xen/arm: introduce 1:1 mapping for
> > domUs". The work is not complete: it might not work depending on the
> > memory ranges you select for your domUs. In particular, you can't
> > select top-of-RAM addresses for your domUs. However, it might help you
> > getting started.
> >
> >
> >>>> I am able to support True Dom0-less by means of the patch/hack
> >>>> demonstrated By Stefano Stabellini at
> >>> https://youtu.be/UfiP9eAV0WA?t=1746.
> >>>>
> >>>> I was able to forcefully put the Xen binary at the address range
> >>>> immediately below 0x40000000 by means of modifying
> get_xen_paddr()
> >>>> -
> >>> in itself an ugly hack.
> >>>>
> >>>> My questions are:
> >>>> 1. Since Xen performs runtime allocations from its heap, it is allocating
> >>>> downwards from 0x80000000 - thereby "stealing" memory from
> DomU1.
> >>>
> >>> In theory, any memory reserved for domains should have been carved
> >>> out from the heap allocator. This would be sufficient to prevent Xen
> >>> allocating memory from the ranges you described above.
> >>>
> >>> Therefore, to me this looks like a bug in the tree you are using.
> >>
> >> This would be a better approach, but because Xen perform allocations
> >> from its heap prior to allocating memory to DomU - and since it
> >> allocates from the top of the heap - it is basically taking memory that I
> wanted to set aside for the DomU.
> >
> > Yeah, this is the main problem that my prototype above couldn't solve.

Stefano: Is the approach that I previously described a feasible one?
1. Mark the addresses that I want to set aside as reserved.
2. When reaching the proper DomU, map them and then use the mapping.
This approach would solve the heap issue.
> >
>
> Wei and Penny are working on direct map and static allocation to fit
> embedded use cases an might have more answer there.

Bertrand, Wei and Penny: Is there a "sneak preview"? I'd be happy to start backporting to Xen 4.11.

>
> On the fix from Stefano explained in the video, Luca Fancellu made a patch to
> propose a long term solution and will push it upstream next week.

Bertrand: Do you know which commit ID this is? Since I'm working on a Xilinx fork, I am out of touch with the goings-on of the main tree.


Thanks,
Dov
RE: Aligning Xen to physical memory maps on embedded systems
On Mon, 1 Mar 2021, Levenglick Dov wrote:
> > (+ Penny, Wei and Luca)
> >
> > > On 23 Feb 2021, at 01:52, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > >
> > > On Mon, 22 Feb 2021, Levenglick Dov wrote:
> > >>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen
> > >>>> and the DomU have an allocation of 1.25GB, per this memory map:
> > >>>> 1. DomU1: 0x60000000 - 0x80000000
> > >>>> 2. DomU2: 0x40000000 - 0x60000000
> > >>>> 3. Xen: 0x30000000 - 0x40000000
> > >>>
> > >>> How did you tell Xen which regions is assigned to which guests? Are
> > >>> your domain mapped 1:1 (i.e guest physical address == host physical
> > address)?
> > >>
> > >> I am working on a solution where if the "xen,domain" memory has
> > >> #size-cell cells the content is backward compatible. But if it
> > >> contains (#address-cells + #size-cells), the address cells should be
> > considered the physical start address.
> > >> During the mapping of the entire address space insetup_mm(), the
> > >> carved out addresses would be added to the reserved memory address
> > >> space. When the DomU is to be created, this physical space would be
> > >> mapped to it. The virtual addresses are less of an issue and needn't be
> > mapped 1x1 (although they could be).
> > >
> > > As of today neither upstream Xen nor the Xilinx Xen tree come with the
> > > feature of allowing the specification of an address range for dom0less
> > > guests.
> > >
> > > The only thing that Xilinx Xen allows, which is not upstream yet, is
> > > the ability of creating dom0less guests 1:1 mapped using the "direct-map"
> > > property. But the memory allocation is still done by Xen (you can't
> > > select the addresses).
> > >
> > > Some time ago I worked on a hacky prototype to allow the specification
> > > of address ranges, see:
> > >
> > > http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable
> > > .git direct-map-2 from 7372466b21c3b6c96bb7a52754e432bac883a1e3
> > onward.
> > >
> > > In particular, have a look at "xen/arm: introduce 1:1 mapping for
> > > domUs". The work is not complete: it might not work depending on the
> > > memory ranges you select for your domUs. In particular, you can't
> > > select top-of-RAM addresses for your domUs. However, it might help you
> > > getting started.
> > >
> > >
> > >>>> I am able to support True Dom0-less by means of the patch/hack
> > >>>> demonstrated By Stefano Stabellini at
> > >>> https://youtu.be/UfiP9eAV0WA?t=1746.
> > >>>>
> > >>>> I was able to forcefully put the Xen binary at the address range
> > >>>> immediately below 0x40000000 by means of modifying
> > get_xen_paddr()
> > >>>> -
> > >>> in itself an ugly hack.
> > >>>>
> > >>>> My questions are:
> > >>>> 1. Since Xen performs runtime allocations from its heap, it is allocating
> > >>>> downwards from 0x80000000 - thereby "stealing" memory from
> > DomU1.
> > >>>
> > >>> In theory, any memory reserved for domains should have been carved
> > >>> out from the heap allocator. This would be sufficient to prevent Xen
> > >>> allocating memory from the ranges you described above.
> > >>>
> > >>> Therefore, to me this looks like a bug in the tree you are using.
> > >>
> > >> This would be a better approach, but because Xen perform allocations
> > >> from its heap prior to allocating memory to DomU - and since it
> > >> allocates from the top of the heap - it is basically taking memory that I
> > wanted to set aside for the DomU.
> > >
> > > Yeah, this is the main problem that my prototype above couldn't solve.
>
> Stephano: Is the approach that I previously described a feasible one?
> 1. Mark the addresses that I want to set aside as reserved
> 2. When reaching the proper DomU, map them and then use the mapping
> This approach would solve the heap issue

My first suggestion would actually be to let the hypervisor pick the
address ranges. If you don't change the setup, you'll see that they are
stable across reboots. WARNING: Xen doesn't promise that they are stable;
however, in practice, they are stable unless you change the device tree,
the configuration, or the software versions.

That said, yes, I think your approach might work with some limitations
(e.g. Xen reclaiming memory on domU destruction but you probably don't
care about that). It could be a decent stopgap until we get a better
solution.

From a Xen upstream point of view, it makes sense to follow the approach
used by Penny, Wei, and Bertrand, which seems to be the more flexible one
and to integrate better with the existing codebase.



> > Wei and Penny are working on direct map and static allocation to fit
> > embedded use cases an might have more answer there.
>
> Bertrand, Wei and Penny: Is there a "sneak preview"? I'd be happy to start backporting to Xen 4.11

As mentioned, there is a 4.13-based Xilinx Xen tree available too.
RE: Aligning Xen to physical memory maps on embedded systems
Thank you.
A few final comments below + one last question regarding the Xilinx forks:
Xen 4.13 is first available on the 2020.1 branch. Is it required that the 2020.1 branch of linux-xlnx be used as well, or can I keep the 2019.1 branch that I am currently using?


> -----Original Message-----
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Sent: Tuesday, March 2, 2021 3:42 AM
> To: Levenglick Dov <Dov.Levenglick@elbitsystems.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Xen-
> users@lists.xenproject.org; Wei Chen <Wei.Chen@arm.com>; Penny Zheng
> <Penny.Zheng@arm.com>; Luca Fancellu <Luca.Fancellu@arm.com>
> Subject: RE: Aligning Xen to physical memory maps on embedded systems
>
> On Mon, 1 Mar 2021, Levenglick Dov wrote:
> > > (+ Penny, Wei and Luca)
> > >
> > > > On 23 Feb 2021, at 01:52, Stefano Stabellini <sstabellini@kernel.org>
> wrote:
> > > >
> > > > On Mon, 22 Feb 2021, Levenglick Dov wrote:
> > > >>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which
> > > >>>> Xen and the DomU have an allocation of 1.25GB, per this memory
> map:
> > > >>>> 1. DomU1: 0x60000000 - 0x80000000 2. DomU2: 0x40000000 -
> > > >>>> 0x60000000 3. Xen: 0x30000000 - 0x40000000
> > > >>>
> > > >>> How did you tell Xen which regions is assigned to which guests?
> > > >>> Are your domain mapped 1:1 (i.e guest physical address == host
> > > >>> physical
> > > address)?
> > > >>
> > > >> I am working on a solution where if the "xen,domain" memory has
> > > >> #size-cell cells the content is backward compatible. But if it
> > > >> contains (#address-cells + #size-cells), the address cells should
> > > >> be
> > > considered the physical start address.
> > > >> During the mapping of the entire address space insetup_mm(), the
> > > >> carved out addresses would be added to the reserved memory
> > > >> address space. When the DomU is to be created, this physical
> > > >> space would be mapped to it. The virtual addresses are less of an
> > > >> issue and needn't be
> > > mapped 1x1 (although they could be).
> > > >
> > > > As of today neither upstream Xen nor the Xilinx Xen tree come with
> > > > the feature of allowing the specification of an address range for
> > > > dom0less guests.
> > > >
> > > > The only thing that Xilinx Xen allows, which is not upstream yet,
> > > > is the ability of creating dom0less guests 1:1 mapped using the "direct-
> map"
> > > > property. But the memory allocation is still done by Xen (you
> > > > can't select the addresses).
> > > >
> > > > Some time ago I worked on a hacky prototype to allow the
> > > > specification of address ranges, see:
> > > >
> > > > http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unst
> > > > able .git direct-map-2 from
> > > > 7372466b21c3b6c96bb7a52754e432bac883a1e3
> > > onward.
> > > >
> > > > In particular, have a look at "xen/arm: introduce 1:1 mapping for
> > > > domUs". The work is not complete: it might not work depending on
> > > > the memory ranges you select for your domUs. In particular, you
> > > > can't select top-of-RAM addresses for your domUs. However, it
> > > > might help you getting started.
> > > >
> > > >
> > > >>>> I am able to support True Dom0-less by means of the patch/hack
> > > >>>> demonstrated By Stefano Stabellini at
> > > >>> https://youtu.be/UfiP9eAV0WA?t=1746.
> > > >>>>
> > > >>>> I was able to forcefully put the Xen binary at the address
> > > >>>> range immediately below 0x40000000 by means of modifying
> > > get_xen_paddr()
> > > >>>> -
> > > >>> in itself an ugly hack.
> > > >>>>
> > > >>>> My questions are:
> > > >>>> 1. Since Xen performs runtime allocations from its heap, it is
> allocating
> > > >>>> downwards from 0x80000000 - thereby "stealing" memory from
> > > DomU1.
> > > >>>
> > > >>> In theory, any memory reserved for domains should have been
> > > >>> carved out from the heap allocator. This would be sufficient to
> > > >>> prevent Xen allocating memory from the ranges you described
> above.
> > > >>>
> > > >>> Therefore, to me this looks like a bug in the tree you are using.
> > > >>
> > > >> This would be a better approach, but because Xen perform
> > > >> allocations from its heap prior to allocating memory to DomU -
> > > >> and since it allocates from the top of the heap - it is basically
> > > >> taking memory that I
> > > wanted to set aside for the DomU.
> > > >
> > > > Yeah, this is the main problem that my prototype above couldn't solve.
> >
> > Stephano: Is the approach that I previously described a feasible one?
> > 1. Mark the addresses that I want to set aside as reserved
> > 2. When reaching the proper DomU, map them and then use the mapping
> > This approach would solve the heap issue
>
> My first suggestion would be actually to let the hypervisor pick the address
> ranges. If you don't change setup, you'll see that they are actually stable
> across reboot. WARNING: Xen doesn't promise that they are stable;
> however, in practice, they are stable unless you change device tree or
> configuration or software versions.
>
> That said, yes, I think your approach might work with some limitations (e.g.
> Xen reclaiming memory on domU destruction but you probably don't care
> about that). It could be a decent stopgap until we get a better solution.

Is DomU destruction an option on true Dom0-less? Who would be doing the destruction?

> From a Xen upstream point of view, it makes sense to follow the approach
> used by Penny, Wei, and Betrand that seems to be the one that is more
> flexible and integrate better with the existing codebase.

I will wait for their response regarding commits and backporting.

> > > Wei and Penny are working on direct map and static allocation to fit
> > > embedded use cases an might have more answer there.
> >
> > Bertrand, Wei and Penny: Is there a "sneak preview"? I'd be happy to
> > start backporting to Xen 4.11
>
> As mentioned, there is a 4.13-based Xilinx Xen tree available too.
Re: Aligning Xen to physical memory maps on embedded systems
Hi,

> On 1 Mar 2021, at 13:46, Levenglick Dov <Dov.Levenglick@elbitsystems.com> wrote:
>
>>
>> (+ Penny, Wei and Luca)
>>
>>> On 23 Feb 2021, at 01:52, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> On Mon, 22 Feb 2021, Levenglick Dov wrote:
>>>>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which Xen
>>>>>> and the DomU have an allocation of 1.25GB, per this memory map:
>>>>>> 1. DomU1: 0x60000000 - 0x80000000
>>>>>> 2. DomU2: 0x40000000 - 0x60000000
>>>>>> 3. Xen: 0x30000000 - 0x40000000
>>>>>
>>>>> How did you tell Xen which regions is assigned to which guests? Are
>>>>> your domain mapped 1:1 (i.e guest physical address == host physical
>> address)?
>>>>
>>>> I am working on a solution where if the "xen,domain" memory has
>>>> #size-cell cells the content is backward compatible. But if it
>>>> contains (#address-cells + #size-cells), the address cells should be
>> considered the physical start address.
>>>> During the mapping of the entire address space insetup_mm(), the
>>>> carved out addresses would be added to the reserved memory address
>>>> space. When the DomU is to be created, this physical space would be
>>>> mapped to it. The virtual addresses are less of an issue and needn't be
>> mapped 1x1 (although they could be).
>>>
>>> As of today neither upstream Xen nor the Xilinx Xen tree come with the
>>> feature of allowing the specification of an address range for dom0less
>>> guests.
>>>
>>> The only thing that Xilinx Xen allows, which is not upstream yet, is
>>> the ability of creating dom0less guests 1:1 mapped using the "direct-map"
>>> property. But the memory allocation is still done by Xen (you can't
>>> select the addresses).
>>>
>>> Some time ago I worked on a hacky prototype to allow the specification
>>> of address ranges, see:
>>>
>>> http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable
>>> .git direct-map-2 from 7372466b21c3b6c96bb7a52754e432bac883a1e3
>> onward.
>>>
>>> In particular, have a look at "xen/arm: introduce 1:1 mapping for
>>> domUs". The work is not complete: it might not work depending on the
>>> memory ranges you select for your domUs. In particular, you can't
>>> select top-of-RAM addresses for your domUs. However, it might help you
>>> getting started.
>>>
>>>
>>>>>> I am able to support True Dom0-less by means of the patch/hack
>>>>>> demonstrated By Stefano Stabellini at
>>>>> https://youtu.be/UfiP9eAV0WA?t=1746.
>>>>>>
>>>>>> I was able to forcefully put the Xen binary at the address range
>>>>>> immediately below 0x40000000 by means of modifying
>> get_xen_paddr()
>>>>>> -
>>>>> in itself an ugly hack.
>>>>>>
>>>>>> My questions are:
>>>>>> 1. Since Xen performs runtime allocations from its heap, it is allocating
>>>>>> downwards from 0x80000000 - thereby "stealing" memory from
>> DomU1.
>>>>>
>>>>> In theory, any memory reserved for domains should have been carved
>>>>> out from the heap allocator. This would be sufficient to prevent Xen
>>>>> allocating memory from the ranges you described above.
>>>>>
>>>>> Therefore, to me this looks like a bug in the tree you are using.
>>>>
>>>> This would be a better approach, but because Xen perform allocations
>>>> from its heap prior to allocating memory to DomU - and since it
>>>> allocates from the top of the heap - it is basically taking memory that I
>> wanted to set aside for the DomU.
>>>
>>> Yeah, this is the main problem that my prototype above couldn't solve.
>
> Stephano: Is the approach that I previously described a feasible one?
> 1. Mark the addresses that I want to set aside as reserved
> 2. When reaching the proper DomU, map them and then use the mapping
> This approach would solve the heap issue
>>>
>>
>> Wei and Penny are working on direct map and static allocation to fit
>> embedded use cases an might have more answer there.
>
> Bertrand, Wei and Penny: Is there a "sneak preview"? I'd be happy to start backporting to Xen 4.11

I am afraid we are not at that stage yet; we are in early development on this.

>
>>
>> On the fix from Stefano explained in the video, Luca Fancellu made a patch to
>> propose a long term solution and will push it upstream next week.
>
> Bertrand: Do You know which commit ID this is? Since I'm working on a Xilinx fork, I am out of touch with the goings of the main tree.

This will be pushed to the xen-devel mailing list next week.

Cheers
Bertrand

>
>
> Thanks,
> Dov
RE: Aligning Xen to physical memory maps on embedded systems
On Tue, 2 Mar 2021, Levenglick Dov wrote:
> Thank you.
> A few final comments below + one last question regarding the Xilinx forks:
> Xen 4.13 is first available on the 2020.1 branch. Is it required that the 2020.1 branch of linux-xlnx be used as well, or can I keep the 2019.1 branch that I am currently using?

Xilinx recommends always using the same version everywhere, so 2020.1
for Xen, Linux, firmware, etc.

That said, it should be no problem to use Xen 2020.1 with everything
else from 2019.1. Given that you are using dom0less, you just need to
rebuild the Xen hypervisor alone; you don't even need to update the dom0
rootfs.


> > -----Original Message-----
> > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Sent: Tuesday, March 2, 2021 3:42 AM
> > To: Levenglick Dov <Dov.Levenglick@elbitsystems.com>
> > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Stefano Stabellini
> > <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Xen-
> > users@lists.xenproject.org; Wei Chen <Wei.Chen@arm.com>; Penny Zheng
> > <Penny.Zheng@arm.com>; Luca Fancellu <Luca.Fancellu@arm.com>
> > Subject: RE: Aligning Xen to physical memory maps on embedded systems
> >
> > On Mon, 1 Mar 2021, Levenglick Dov wrote:
> > > > (+ Penny, Wei and Luca)
> > > >
> > > > > On 23 Feb 2021, at 01:52, Stefano Stabellini <sstabellini@kernel.org>
> > wrote:
> > > > >
> > > > > On Mon, 22 Feb 2021, Levenglick Dov wrote:
> > > > >>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which
> > > > >>>> Xen and the DomU have an allocation of 1.25GB, per this memory
> > map:
> > > > >>>> 1. DomU1: 0x60000000 - 0x80000000 2. DomU2: 0x40000000 -
> > > > >>>> 0x60000000 3. Xen: 0x30000000 - 0x40000000
> > > > >>>
> > > > >>> How did you tell Xen which regions is assigned to which guests?
> > > > >>> Are your domain mapped 1:1 (i.e guest physical address == host
> > > > >>> physical
> > > > address)?
> > > > >>
> > > > >> I am working on a solution where if the "xen,domain" memory has
> > > > >> #size-cell cells the content is backward compatible. But if it
> > > > >> contains (#address-cells + #size-cells), the address cells should
> > > > >> be
> > > > considered the physical start address.
> > > > >> During the mapping of the entire address space insetup_mm(), the
> > > > >> carved out addresses would be added to the reserved memory
> > > > >> address space. When the DomU is to be created, this physical
> > > > >> space would be mapped to it. The virtual addresses are less of an
> > > > >> issue and needn't be
> > > > mapped 1x1 (although they could be).
> > > > >
> > > > > As of today neither upstream Xen nor the Xilinx Xen tree come with
> > > > > the feature of allowing the specification of an address range for
> > > > > dom0less guests.
> > > > >
> > > > > The only thing that Xilinx Xen allows, which is not upstream yet,
> > > > > is the ability of creating dom0less guests 1:1 mapped using the "direct-
> > map"
> > > > > property. But the memory allocation is still done by Xen (you
> > > > > can't select the addresses).
> > > > >
> > > > > Some time ago I worked on a hacky prototype to allow the
> > > > > specification of address ranges, see:
> > > > >
> > > > > http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unst
> > > > > able .git direct-map-2 from
> > > > > 7372466b21c3b6c96bb7a52754e432bac883a1e3
> > > > onward.
> > > > >
> > > > > In particular, have a look at "xen/arm: introduce 1:1 mapping for
> > > > > domUs". The work is not complete: it might not work depending on
> > > > > the memory ranges you select for your domUs. In particular, you
> > > > > can't select top-of-RAM addresses for your domUs. However, it
> > > > > might help you getting started.
> > > > >
> > > > >
> > > > >>>> I am able to support True Dom0-less by means of the patch/hack
> > > > >>>> demonstrated By Stefano Stabellini at
> > > > >>> https://youtu.be/UfiP9eAV0WA?t=1746.
> > > > >>>>
> > > > >>>> I was able to forcefully put the Xen binary at the address
> > > > >>>> range immediately below 0x40000000 by means of modifying
> > > > get_xen_paddr()
> > > > >>>> -
> > > > >>> in itself an ugly hack.
> > > > >>>>
> > > > >>>> My questions are:
> > > > >>>> 1. Since Xen performs runtime allocations from its heap, it is
> > allocating
> > > > >>>> downwards from 0x80000000 - thereby "stealing" memory from
> > > > DomU1.
> > > > >>>
> > > > >>> In theory, any memory reserved for domains should have been
> > > > >>> carved out from the heap allocator. This would be sufficient to
> > > > >>> prevent Xen allocating memory from the ranges you described
> > above.
> > > > >>>
> > > > >>> Therefore, to me this looks like a bug in the tree you are using.
> > > > >>
> > > > >> This would be a better approach, but because Xen perform
> > > > >> allocations from its heap prior to allocating memory to DomU -
> > > > >> and since it allocates from the top of the heap - it is basically
> > > > >> taking memory that I
> > > > wanted to set aside for the DomU.
> > > > >
> > > > > Yeah, this is the main problem that my prototype above couldn't solve.
> > >
> > > Stephano: Is the approach that I previously described a feasible one?
> > > 1. Mark the addresses that I want to set aside as reserved
> > > 2. When reaching the proper DomU, map them and then use the mapping
> > > This approach would solve the heap issue
> >
> > My first suggestion would be actually to let the hypervisor pick the address
> > ranges. If you don't change setup, you'll see that they are actually stable
> > across reboot. WARNING: Xen doesn't promise that they are stable;
> > however, in practice, they are stable unless you change device tree or
> > configuration or software versions.
> >
> > That said, yes, I think your approach might work with some limitations (e.g.
> > Xen reclaiming memory on domU destruction but you probably don't care
> > about that). It could be a decent stopgap until we get a better solution.
>
> Is DomU destruction an option on true Dom0-less? Who would be doing the destruction?

Destruction, yes. You should already be able to use "xl destroy" in Dom0
today to destroy a dom0less domU. Pass a domid instead of a domain name
(dom0less domUs don't have a domain name). Of course you need the xl tools
in the Dom0 rootfs for that, so if you are going to update Xen, then you
also need to update the Xen tools, and hence the Dom0 rootfs. The Xen tools
and Xen actually need to be of the same version.

If you intend to create a dom0less domain again after destroying it
(i.e. reboot it), then you need a config file in dom0 with the same
configuration so that you can call xl create.
RE: Aligning Xen to physical memory maps on embedded systems
Hi,

Sorry for the late reply; this e-mail had been filtered by my e-mail client.
We have been working on direct mapping and static allocation for a while,
and Penny sent an initial version of the direct mapping design document
to the mailing list last December.

We are now working on a new version of the design document. The new version
will address the feedback we received on the initial version and will also
include the design of static allocation. The document is nearing completion
and we will submit it to the community for discussion in the next week or
two. Once we have some conclusions, we will submit our code to the community
as an RFC.

Besides, I have some comments below:

> -----Original Message-----
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Sent: 2021年3月3日 3:36
> To: Levenglick Dov <Dov.Levenglick@elbitsystems.com>
> Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>; Bertrand Marquis
> <Bertrand.Marquis@arm.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Julien Grall <julien@xen.org>; Xen-users@lists.xenproject.org; Wei Chen
> <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>; Luca Fancellu
> <Luca.Fancellu@arm.com>
> Subject: RE: Aligning Xen to physical memory maps on embedded systems
>
> On Tue, 2 Mar 2021, Levenglick Dov wrote:
> > Thank you.
> > A few final comments below + one last question regarding the Xilinx forks:
> > Xen 4.13 is first available on the 2020.1 branch. Is it required that the 2020.1
> branch of linux-xlnx be used as well, or can I keep the 2019.1 branch that I am
> currently using?
>
> Xilinx recommends to always use the same version everywhere, so 2020.1
> for Xen, Linux, firwmare, etc.
>
> That said, it should be no problem to use Xen 2020.1 with everything
> else from 2019.1. Given that you are using dom0less, you just need to
> rebuild the Xen hypervisor alone, you don't even need to update the dom0
> rootfs.
>
>
> > > -----Original Message-----
> > > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > Sent: Tuesday, March 2, 2021 3:42 AM
> > > To: Levenglick Dov <Dov.Levenglick@elbitsystems.com>
> > > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Stefano Stabellini
> > > <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Xen-
> > > users@lists.xenproject.org; Wei Chen <Wei.Chen@arm.com>; Penny Zheng
> > > <Penny.Zheng@arm.com>; Luca Fancellu <Luca.Fancellu@arm.com>
> > > Subject: RE: Aligning Xen to physical memory maps on embedded systems
> > >
> > > On Mon, 1 Mar 2021, Levenglick Dov wrote:
> > > > > (+ Penny, Wei and Luca)
> > > > >
> > > > > > On 23 Feb 2021, at 01:52, Stefano Stabellini <sstabellini@kernel.org>
> > > wrote:
> > > > > >
> > > > > > On Mon, 22 Feb 2021, Levenglick Dov wrote:
> > > > > >>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of which
> > > > > >>>> Xen and the DomU have an allocation of 1.25GB, per this memory
> > > map:
> > > > > >>>> 1. DomU1: 0x60000000 - 0x80000000 2. DomU2: 0x40000000 -
> > > > > >>>> 0x60000000 3. Xen: 0x30000000 - 0x40000000
> > > > > >>>
> > > > > >>> How did you tell Xen which regions is assigned to which guests?
> > > > > >>> Are your domain mapped 1:1 (i.e guest physical address == host
> > > > > >>> physical
> > > > > address)?
> > > > > >>
> > > > > >> I am working on a solution where if the "xen,domain" memory has
> > > > > >> #size-cell cells the content is backward compatible. But if it
> > > > > >> contains (#address-cells + #size-cells), the address cells should
> > > > > >> be
> > > > > considered the physical start address.
> > > > > >> During the mapping of the entire address space insetup_mm(), the
> > > > > >> carved out addresses would be added to the reserved memory
> > > > > >> address space. When the DomU is to be created, this physical
> > > > > >> space would be mapped to it. The virtual addresses are less of an
> > > > > >> issue and needn't be
> > > > > mapped 1x1 (although they could be).
> > > > > >
> > > > > > As of today neither upstream Xen nor the Xilinx Xen tree come with
> > > > > > the feature of allowing the specification of an address range for
> > > > > > dom0less guests.
> > > > > >
> > > > > > The only thing that Xilinx Xen allows, which is not upstream yet,
> > > > > > is the ability of creating dom0less guests 1:1 mapped using the "direct-
> > > map"
> > > > > > property. But the memory allocation is still done by Xen (you
> > > > > > can't select the addresses).
> > > > > >
> > > > > > Some time ago I worked on a hacky prototype to allow the
> > > > > > specification of address ranges, see:
> > > > > >
> > > > > > http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unst
> > > > > > able .git direct-map-2 from
> > > > > > 7372466b21c3b6c96bb7a52754e432bac883a1e3
> > > > > onward.
> > > > > >
> > > > > > In particular, have a look at "xen/arm: introduce 1:1 mapping for
> > > > > > domUs". The work is not complete: it might not work depending on
> > > > > > the memory ranges you select for your domUs. In particular, you
> > > > > > can't select top-of-RAM addresses for your domUs. However, it
> > > > > > might help you getting started.
> > > > > >
> > > > > >
> > > > > >>>> I am able to support True Dom0-less by means of the patch/hack
> > > > > >>>> demonstrated By Stefano Stabellini at
> > > > > >>> https://youtu.be/UfiP9eAV0WA?t=1746.
> > > > > >>>>
> > > > > >>>> I was able to forcefully put the Xen binary at the address
> > > > > >>>> range immediately below 0x40000000 by means of modifying
> > > > > get_xen_paddr()
> > > > > >>>> -
> > > > > >>> in itself an ugly hack.
> > > > > >>>>
> > > > > >>>> My questions are:
> > > > > >>>> 1. Since Xen performs runtime allocations from its heap, it is
> > > allocating
> > > > > >>>> downwards from 0x80000000 - thereby "stealing" memory from
> > > > > DomU1.
> > > > > >>>
> > > > > >>> In theory, any memory reserved for domains should have been
> > > > > >>> carved out from the heap allocator. This would be sufficient to
> > > > > >>> prevent Xen allocating memory from the ranges you described
> > > above.
> > > > > >>>
> > > > > >>> Therefore, to me this looks like a bug in the tree you are using.
> > > > > >>
> > > > > >> This would be a better approach, but because Xen perform
> > > > > >> allocations from its heap prior to allocating memory to DomU -
> > > > > >> and since it allocates from the top of the heap - it is basically
> > > > > >> taking memory that I
> > > > > wanted to set aside for the DomU.
> > > > > >
> > > > > > Yeah, this is the main problem that my prototype above couldn't solve.
> > > >
> > > > Stephano: Is the approach that I previously described a feasible one?
> > > > 1. Mark the addresses that I want to set aside as reserved
> > > > 2. When reaching the proper DomU, map them and then use the mapping
> > > > This approach would solve the heap issue
> > >
> > > My first suggestion would be actually to let the hypervisor pick the address
> > > ranges. If you don't change setup, you'll see that they are actually stable
> > > across reboot. WARNING: Xen doesn't promise that they are stable;
> > > however, in practice, they are stable unless you change device tree or
> > > configuration or software versions.
> > >
> > > That said, yes, I think your approach might work with some limitations (e.g.
> > > Xen reclaiming memory on domU destruction but you probably don't care
> > > about that). It could be a decent stopgap until we get a better solution.
> >

In our new design, user-defined memory ranges for DomUs and memory reclaiming
on DomU destruction have already been taken into account. These are two features
that we really want the community to discuss and give feedback on.

> > Is DomU destruction an option on true Dom0-less? Who would be doing the
> destruction?
>
> Destruction, yes. You should be able to use "xl destroy" in Dom0 already
> today to destroy a dom0less domU. Pass a domid instead of domain name
> (they don't have a domain name). Of course you need the xl tools in the
> Xen rootfs for that, so if you are going to update Xen, then you also
> need to update the Xen tools, hence the Dom0 rootfs. The Xen tools and
> Xen actually need to be of the same version.
>
> If you intend to create again a dom0less domain after destroying it
> (reboot), then you need to have a config file in dom0 with the same
> configuration so that you can call xl create.

Cheers,
Wei Chen
RE: Aligning Xen to physical memory maps on embedded systems
Thank you all

Dov


> -----Original Message-----
> From: Wei Chen <Wei.Chen@arm.com>
> Sent: Wednesday, March 3, 2021 8:43 AM
> To: Stefano Stabellini <stefano.stabellini@xilinx.com>; Levenglick Dov
> <Dov.Levenglick@elbitsystems.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Xen-
> users@lists.xenproject.org; Penny Zheng <Penny.Zheng@arm.com>; Luca
> Fancellu <Luca.Fancellu@arm.com>
> Subject: RE: Aligning Xen to physical memory maps on embedded systems
>
> Hi,
>
> Sorry for the late reply; this e-mail had been filtered by my e-mail client.
> We have been working on direct mapping and static allocation for a while,
> and Penny sent an initial version of the direct mapping design document to
> the mailing list last December.
>
> We are now working on a new version of the design document. It will address
> the feedback we received on the initial version and will also include the
> design of static allocation. The document is nearing completion and we will
> be submitting it to the community for discussion in the next week or two.
> Once we have reached some conclusions, we will submit our code to the
> community as an RFC to collect further feedback.
>
> Besides, I have some comments below:
>
> > -----Original Message-----
> > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Sent: March 3, 2021 3:36
> > To: Levenglick Dov <Dov.Levenglick@elbitsystems.com>
> > Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>; Bertrand
> > Marquis <Bertrand.Marquis@arm.com>; Stefano Stabellini
> > <sstabellini@kernel.org>; Julien Grall <julien@xen.org>;
> > Xen-users@lists.xenproject.org; Wei Chen <Wei.Chen@arm.com>; Penny
> > Zheng <Penny.Zheng@arm.com>; Luca Fancellu <Luca.Fancellu@arm.com>
> > Subject: RE: Aligning Xen to physical memory maps on embedded systems
> >
> > On Tue, 2 Mar 2021, Levenglick Dov wrote:
> > > Thank you.
> > > A few final comments below + one last question regarding the Xilinx
> forks:
> > > Xen 4.13 is first available on the 2020.1 branch. Is it required
> > > that the 2020.1
> > branch of linux-xlnx be used as well, or can I keep the 2019.1 branch
> > that I am currently using?
> >
> > Xilinx recommends always using the same version everywhere, so 2020.1
> > for Xen, Linux, firmware, etc.
> >
> > That said, it should be no problem to use Xen 2020.1 with everything
> > else from 2019.1. Given that you are using dom0less, you just need to
> > rebuild the Xen hypervisor alone; you don't even need to update the
> > dom0 rootfs.
> >
> >
> > > > -----Original Message-----
> > > > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > > Sent: Tuesday, March 2, 2021 3:42 AM
> > > > To: Levenglick Dov <Dov.Levenglick@elbitsystems.com>
> > > > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Stefano
> > > > Stabellini <sstabellini@kernel.org>; Julien Grall
> > > > <julien@xen.org>; Xen- users@lists.xenproject.org; Wei Chen
> > > > <Wei.Chen@arm.com>; Penny Zheng <Penny.Zheng@arm.com>; Luca
> > > > Fancellu <Luca.Fancellu@arm.com>
> > > > Subject: RE: Aligning Xen to physical memory maps on embedded
> > > > systems
> > > >
> > > > On Mon, 1 Mar 2021, Levenglick Dov wrote:
> > > > > > (+ Penny, Wei and Luca)
> > > > > >
> > > > > > > On 23 Feb 2021, at 01:52, Stefano Stabellini
> > > > > > > <sstabellini@kernel.org>
> > > > wrote:
> > > > > > >
> > > > > > > On Mon, 22 Feb 2021, Levenglick Dov wrote:
> > > > > > >>>> The system has 2GB of RAM (0x00000000 - 0x80000000) of
> > > > > > >>>> which Xen and the DomU have an allocation of 1.25GB, per
> > > > > > >>>> this memory
> > > > map:
> > > > > > >>>> 1. DomU1: 0x60000000 - 0x80000000 2. DomU2: 0x40000000 -
> > > > > > >>>> 0x60000000 3. Xen: 0x30000000 - 0x40000000
> > > > > > >>>
> > > > > > >>> How did you tell Xen which regions is assigned to which guests?
> > > > > > >>> Are your domain mapped 1:1 (i.e guest physical address ==
> > > > > > >>> host physical
> > > > > > address)?
> > > > > > >>
> > > > > > >> I am working on a solution where, if the "xen,domain" memory
> > > > > > >> property has only #size-cells cells, the content stays backward
> > > > > > >> compatible. But if it contains (#address-cells + #size-cells)
> > > > > > >> cells, the address cells should be considered the physical start
> > > > > > >> address. During the mapping of the entire address space in
> > > > > > >> setup_mm(), the carved-out addresses would be added to the
> > > > > > >> reserved memory address space. When the DomU is to be created,
> > > > > > >> this physical space would be mapped to it. The virtual addresses
> > > > > > >> are less of an issue and needn't be mapped 1:1 (although they
> > > > > > >> could be).
> > > > > > >
> > > > > > > As of today neither upstream Xen nor the Xilinx Xen tree
> > > > > > > come with the feature of allowing the specification of an
> > > > > > > address range for dom0less guests.
> > > > > > >
> > > > > > > The only thing that Xilinx Xen allows, which is not upstream
> > > > > > > yet, is the ability of creating dom0less guests 1:1 mapped
> > > > > > > using the "direct-
> > > > map"
> > > > > > > property. But the memory allocation is still done by Xen
> > > > > > > (you can't select the addresses).
> > > > > > >
> > > > > > > Some time ago I worked on a hacky prototype to allow the
> > > > > > > specification of address ranges, see:
> > > > > > >
> > > > > > > http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git
> > > > > > > direct-map-2 from 7372466b21c3b6c96bb7a52754e432bac883a1e3 onward.
> > > > > > >
> > > > > > > In particular, have a look at "xen/arm: introduce 1:1
> > > > > > > mapping for domUs". The work is not complete: it might not
> > > > > > > work depending on the memory ranges you select for your
> > > > > > > domUs. In particular, you can't select top-of-RAM addresses
> > > > > > > for your domUs. However, it might help you getting started.
> > > > > > >
> > > > > > >
> > > > > > >>>> I am able to support True Dom0-less by means of the
> > > > > > >>>> patch/hack demonstrated By Stefano Stabellini at
> > > > > > >>> https://youtu.be/UfiP9eAV0WA?t=1746.
> > > > > > >>>>
> > > > > > >>>> I was able to forcefully put the Xen binary at the
> > > > > > >>>> address range immediately below 0x40000000 by means of
> > > > > > >>>> modifying
> > > > > > get_xen_paddr()
> > > > > > >>>> -
> > > > > > >>> in itself an ugly hack.
> > > > > > >>>>
> > > > > > >>>> My questions are:
> > > > > > >>>> 1. Since Xen performs runtime allocations from its heap,
> > > > > > >>>> it is
> > > > allocating
> > > > > > >>>> downwards from 0x80000000 - thereby "stealing" memory
> > > > > > >>>> from
> > > > > > DomU1.
> > > > > > >>>
> > > > > > >>> In theory, any memory reserved for domains should have
> > > > > > >>> been carved out from the heap allocator. This would be
> > > > > > >>> sufficient to prevent Xen allocating memory from the
> > > > > > >>> ranges you described
> > > > above.
> > > > > > >>>
> > > > > > >>> Therefore, to me this looks like a bug in the tree you are using.
> > > > > > >>
> > > > > > >> This would be a better approach, but because Xen perform
> > > > > > >> allocations from its heap prior to allocating memory to
> > > > > > >> DomU - and since it allocates from the top of the heap - it
> > > > > > >> is basically taking memory that I
> > > > > > wanted to set aside for the DomU.
> > > > > > >
> > > > > > > Yeah, this is the main problem that my prototype above couldn't
> solve.
> > > > >
> > > > > Stefano: Is the approach that I previously described a feasible one?
> > > > > 1. Mark the addresses that I want to set aside as reserved
> > > > > 2. When reaching the proper DomU, map them and then use the
> > > > > mapping This approach would solve the heap issue
> > > >
> > > > My first suggestion would be actually to let the hypervisor pick
> > > > the address ranges. If you don't change setup, you'll see that
> > > > they are actually stable across reboot. WARNING: Xen doesn't
> > > > promise that they are stable; however, in practice, they are
> > > > stable unless you change device tree or configuration or software
> versions.
> > > >
> > > > That said, yes, I think your approach might work with some limitations
> (e.g.
> > > > Xen reclaiming memory on domU destruction but you probably don't
> > > > care about that). It could be a decent stopgap until we get a better
> solution.
> > >
>
> In our new design, the user defined memory ranges for DomU and memory
> reclaiming on DomU destruction have been considered already. These are
> two features that we really want the community to discuss and get feedback
> on.
>
> > > Is DomU destruction an option on true Dom0-less? Who would be doing
> > > the
> > destruction?
> >
> > Destruction, yes. You should be able to use "xl destroy" in Dom0
> > already today to destroy a dom0less domU. Pass a domid instead of
> > domain name (they don't have a domain name). Of course you need the xl
> > tools in the Xen rootfs for that, so if you are going to update Xen,
> > then you also need to update the Xen tools, hence the Dom0 rootfs. The
> > Xen tools and Xen actually need to be of the same version.
> >
> > If you intend to create again a dom0less domain after destroying it
> > (reboot), then you need to have a config file in dom0 with the same
> > configuration so that you can call xl create.
>
> Cheers,
> Wei Chen
>
