
[xen master] x86: XENMAPSPACE_gmfn{,_batch,_range} want to special case idx == gpfn
commit ba45ae4d7f3a93ee7297d24bb03a564146d321c6
Author: Jan Beulich <jbeulich@suse.com>
AuthorDate: Fri Oct 23 10:05:29 2020 +0200
Commit: Jan Beulich <jbeulich@suse.com>
CommitDate: Fri Oct 23 10:05:29 2020 +0200

x86: XENMAPSPACE_gmfn{,_batch,_range} want to special case idx == gpfn

In this case, up to now, we've been freeing the page (through
guest_remove_page(), with the actual free typically happening at the
put_page() later in the function), but then failing the call on the
subsequent GFN consistency check. However, in my opinion such a request
should complete as an "expensive" no-op (leaving aside the potential
unsharing of the page).
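
    For illustration only (not part of the patch): a minimal sketch of the
    guest-visible shape of such a request, assuming a Linux-style guest
    kernel environment; the header paths follow Linux's layout and the
    helper name map_gfn_onto_itself() is made up.

    #include <xen/interface/xen.h>      /* DOMID_SELF */
    #include <xen/interface/memory.h>   /* struct xen_add_to_physmap, XENMAPSPACE_gmfn */
    #include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op() */

    /* Hypothetical helper: ask Xen to map a GFN onto itself. */
    static int map_gfn_onto_itself(unsigned long gfn)
    {
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_gmfn,
            .idx   = gfn,            /* source GFN ...              */
            .gpfn  = gfn,            /* ... same as the destination */
        };

        /*
         * Previously: the page backing gfn was freed and the call then
         * failed the GFN consistency check.  With this change the call
         * completes as an "expensive" no-op.
         */
        return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
    }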

This points out that f33d653f46f5 ("x86: replace bad ASSERT() in
xenmem_add_to_physmap_one()") would really have needed an XSA, despite
its description claiming otherwise, as in release builds we then put in
place a P2M entry referencing the about-to-be-freed page.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
xen/arch/x86/mm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 918ee2bbe3..6d2262a3f0 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4599,7 +4599,7 @@ int xenmem_add_to_physmap_one(
         if ( is_special_page(mfn_to_page(prev_mfn)) )
             /* Special pages are simply unhooked from this phys slot. */
             rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
-        else
+        else if ( !mfn_eq(mfn, prev_mfn) )
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(d, gfn_x(gpfn));
     }
--
generated by git-patchbot for /home/xen/git/xen.git#master