
[PATCH] fix zone-over-node preference when allocating memory
When Xen allocates a guest's memory, it tries to use non-DMA-able
zones first (probably because they are less precious). If no such
pages are available on a given node, Xen falls back to allocating
low pages from another node, thereby ignoring the node preference. This
patch fixes that by first checking whether non-DMA pages are available on a
node and falling back to DMA-able pages on the same node if not. This fixes
incorrect NUMA memory allocation on nodes whose memory lies below the DMA
boundary (4GB on x86-64; this affects, for instance, dual-node machines
with 4GB on each node).
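To illustrate the intended allocation order, here is a minimal sketch of the fallback logic. All names (`free_pages`, `alloc_page_prefer_node`, the zone constants) are hypothetical stand-ins, not Xen's actual heap structures; the point is only that both zones of the preferred node are tried before any other node is touched.

```c
#include <assert.h>

/* Hypothetical model of per-node, per-zone free page counts.
 * Zone 0 is the DMA-able zone (below the DMA boundary),
 * zone 1 is the normal (non-DMA) zone. */
#define NR_NODES 2
#define NR_ZONES 2
#define ZONE_DMA    0
#define ZONE_NORMAL 1

static unsigned long free_pages[NR_NODES][NR_ZONES];

/* Return the node a page was taken from, or -1 on failure.
 * The fix: exhaust *both* zones on the preferred node before
 * spilling over to other nodes. */
static int alloc_page_prefer_node(int node)
{
    for (int i = 0; i < NR_NODES; i++) {
        int n = (node + i) % NR_NODES;
        /* Try the less precious non-DMA zone first ... */
        for (int z = NR_ZONES - 1; z >= 0; z--) {
            /* ... then the DMA zone on the SAME node, before
             * moving on to the next node. */
            if (free_pages[n][z] > 0) {
                free_pages[n][z]--;
                return n;
            }
        }
    }
    return -1;
}
```

With the buggy ordering (all nodes' normal zones before any DMA zone), a node whose memory is entirely below 4GB would never satisfy its own allocations; with the order above it does.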

Andre.

P.S. This fix was already part of my NUMA guest patches back in August,
this is just an extract of these.

Signed-off-by: Andre Przywara <andre.przywara@amd.com>

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 277-84917
----to satisfy European Law for business letters:
AMD Saxony Limited Liability Company & Co. KG,
Wilschdorfer Landstr. 101, 01109 Dresden, Germany
Register Court Dresden: HRA 4896, General Partner authorized
to represent: AMD Saxony LLC (Wilmington, Delaware, US)
General Manager of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy
Re: [PATCH] fix zone-over-node preference when allocating memory
You have no guarantee that the DMA pool memory belongs to the allocating
node either (although it happens to be the case in the scenario you are
trying to fix). Instead I suggest that the default dma_bitsize should depend
on the NUMA characteristics of the system. For example, we could specify
that dma_bitsize should not cover more than 25% of the memory of any one
NUMA node. In your example this would give you dma_bitsize=30.

I was going to suggest we get rid of dma_bitsize now that we have the
per-bitwidth zones, but it probably is still needed, specifically for NUMA
systems. If we have one NUMA node with all of its memory below 4GB, we'd
probably like allocations to fall back to other nodes before that node
exhausts all the available below-4GB memory.

-- Keir

On 21/12/07 22:27, "Andre Przywara" <andre.przywara@amd.com> wrote:

> When Xen allocates the guest's memory, it will try to use non-DMA-able
> zones first (probably because they are less precious). If there are no
> such pages available on a certain node, Xen will revert to allocating
> low pages from another node and thus ignoring the node-preference. This
> patch fixes this by first checking if non-DMA pages are available on a
> node and reverting to DMA-able pages if not. This fixes incorrect NUMA
> memory allocation on nodes with memory below the DMA border (4GB on
> x86-64, affects for instance dual-node machines with 4gig on each node).
>
> Andre.
>
> P.S. This fix was already part of my NUMA guest patches back in August,
> this is just an extract of these.
>
> Signed-off-by: Andre Przywara <andre.przywara@amd.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel