[PATCH] x86/mm: Update comments now that Xen is always 64-bit
# HG changeset patch
# User Tim Deegan <tim@xen.org>
# Date 1347544826 -3600
# Node ID 9e46d90d0182979a7220314ca19d2525e338aa5d
# Parent a770d1c8448d73ccf2ec36a5322532c2e3c14641
x86/mm: Update comments now that Xen is always 64-bit.

Signed-off-by: Tim Deegan <tim@xen.org>

diff -r a770d1c8448d -r 9e46d90d0182 xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c Thu Sep 13 15:00:24 2012 +0100
+++ b/xen/arch/x86/mm/shadow/common.c Thu Sep 13 15:00:26 2012 +0100
@@ -1178,19 +1178,19 @@ int shadow_cmpxchg_guest_entry(struct vc
*
* This table shows the allocation behaviour of the different modes:
*
- * Xen paging       pae   pae   64b   64b   64b
- * Guest paging     32b   pae   32b   pae   64b
- * PV or HVM        HVM    *    HVM   HVM    *
- * Shadow paging    pae   pae   pae   pae   64b
+ * Xen paging       64b   64b   64b
+ * Guest paging     32b   pae   64b
+ * PV or HVM        HVM   HVM    *
+ * Shadow paging    pae   pae   64b
 *
- * sl1 size         8k    4k    8k    4k    4k
- * sl2 size         16k   4k    16k   4k    4k
- * sl3 size         -     -     -     -     4k
- * sl4 size         -     -     -     -     4k
+ * sl1 size         8k    4k    4k
+ * sl2 size         16k   4k    4k
+ * sl3 size         -     -     4k
+ * sl4 size         -     -     4k
*
* In HVM guests, the p2m table is built out of shadow pages, and we provide
* a function for the p2m management to steal pages, in max-order chunks, from
- * the free pool. We don't provide for giving them back, yet.
+ * the free pool.
*/

/* Figure out the least acceptable quantity of shadow memory.
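As a quick aside for readers skimming the archive: the post-patch table can be read as a simple lookup from guest paging mode to shadow allocation sizes. The sketch below only restates that table in C for illustration; it is not Xen code, and the enum, struct and table names are hypothetical.

#include <stdio.h>

/* Hypothetical encoding of the allocation table above (not Xen's types).
 * Xen is always 64b; 32b and pae guests get pae shadows (HVM only),
 * 64b guests get 64b shadows (PV or HVM). */
enum guest_paging { GUEST_32B, GUEST_PAE, GUEST_64B };

struct shadow_alloc {
    unsigned int sl1_kb, sl2_kb, sl3_kb, sl4_kb;   /* 0 means "not used" */
};

static const struct shadow_alloc shadow_alloc_table[] = {
    [GUEST_32B] = {  8, 16, 0, 0 },
    [GUEST_PAE] = {  4,  4, 0, 0 },
    [GUEST_64B] = {  4,  4, 4, 4 },
};

int main(void)
{
    const struct shadow_alloc *a = &shadow_alloc_table[GUEST_32B];
    printf("32b guest: sl1=%uk sl2=%uk\n", a->sl1_kb, a->sl2_kb);
    return 0;
}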

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel