
[PATCH 1/5] xen/netback: Use xenballooned pages for comms
For proper grant mappings, HVM guests require pages allocated using
alloc_xenballooned_pages instead of alloc_vm_area.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
drivers/net/xen-netback/common.h | 4 ++--
drivers/net/xen-netback/netback.c | 34 ++++++++++++++++++++--------------
2 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 161f207..d5ee9d1 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -70,8 +70,8 @@ struct xenvif {
/* The shared rings and indexes. */
struct xen_netif_tx_back_ring tx;
struct xen_netif_rx_back_ring rx;
- struct vm_struct *tx_comms_area;
- struct vm_struct *rx_comms_area;
+ struct page *tx_comms_page;
+ struct page *rx_comms_page;

/* Frontend feature information. */
u8 can_sg:1;
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index fd00f25..f35e07c 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -42,6 +42,7 @@

#include <xen/events.h>
#include <xen/interface/memory.h>
+#include <xen/balloon.h>

#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>
@@ -1578,9 +1579,11 @@ static int xen_netbk_kthread(void *data)
void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
{
struct gnttab_unmap_grant_ref op;
+ void *addr;

if (vif->tx.sring) {
- gnttab_set_unmap_op(&op, (unsigned long)vif->tx_comms_area->addr,
+ addr = pfn_to_kaddr(page_to_pfn(vif->tx_comms_page));
+ gnttab_set_unmap_op(&op, (unsigned long)addr,
GNTMAP_host_map, vif->tx_shmem_handle);

if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
@@ -1588,16 +1591,17 @@ void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
}

if (vif->rx.sring) {
- gnttab_set_unmap_op(&op, (unsigned long)vif->rx_comms_area->addr,
+ addr = pfn_to_kaddr(page_to_pfn(vif->rx_comms_page));
+ gnttab_set_unmap_op(&op, (unsigned long)addr,
GNTMAP_host_map, vif->rx_shmem_handle);

if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
BUG();
}
- if (vif->rx_comms_area)
- free_vm_area(vif->rx_comms_area);
- if (vif->tx_comms_area)
- free_vm_area(vif->tx_comms_area);
+ if (vif->rx_comms_page)
+ free_xenballooned_pages(1, &vif->rx_comms_page);
+ if (vif->tx_comms_page)
+ free_xenballooned_pages(1, &vif->tx_comms_page);
}

int xen_netbk_map_frontend_rings(struct xenvif *vif,
@@ -1610,15 +1614,19 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,

int err = -ENOMEM;

- vif->tx_comms_area = alloc_vm_area(PAGE_SIZE);
- if (vif->tx_comms_area == NULL)
+ if (alloc_xenballooned_pages(1, &vif->tx_comms_page))
goto err;

- vif->rx_comms_area = alloc_vm_area(PAGE_SIZE);
- if (vif->rx_comms_area == NULL)
+ txs = (struct xen_netif_tx_sring *)pfn_to_kaddr(page_to_pfn(
+ vif->tx_comms_page));
+
+ if (alloc_xenballooned_pages(1, &vif->rx_comms_page))
goto err;

- gnttab_set_map_op(&op, (unsigned long)vif->tx_comms_area->addr,
+ rxs = (struct xen_netif_rx_sring *)pfn_to_kaddr(page_to_pfn(
+ vif->rx_comms_page));
+
+ gnttab_set_map_op(&op, (unsigned long)txs,
GNTMAP_host_map, tx_ring_ref, vif->domid);

if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
@@ -1635,10 +1643,9 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
vif->tx_shmem_ref = tx_ring_ref;
vif->tx_shmem_handle = op.handle;

- txs = (struct xen_netif_tx_sring *)vif->tx_comms_area->addr;
BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);

- gnttab_set_map_op(&op, (unsigned long)vif->rx_comms_area->addr,
+ gnttab_set_map_op(&op, (unsigned long)rxs,
GNTMAP_host_map, rx_ring_ref, vif->domid);

if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
@@ -1656,7 +1663,6 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
vif->rx_shmem_handle = op.handle;
vif->rx_req_cons_peek = 0;

- rxs = (struct xen_netif_rx_sring *)vif->rx_comms_area->addr;
BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);

return 0;
--
1.7.6.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
> For proper grant mappings, HVM guests require pages allocated using
> alloc_xenballooned_pages instead of alloc_vm_area.
>
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
> drivers/net/xen-netback/common.h | 4 ++--
> drivers/net/xen-netback/netback.c | 34 ++++++++++++++++++++--------------
> 2 files changed, 22 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 161f207..d5ee9d1 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -70,8 +70,8 @@ struct xenvif {
> /* The shared rings and indexes. */
> struct xen_netif_tx_back_ring tx;
> struct xen_netif_rx_back_ring rx;
> - struct vm_struct *tx_comms_area;
> - struct vm_struct *rx_comms_area;
> + struct page *tx_comms_page;
> + struct page *rx_comms_page;

This will conflict with David Vrabel's patch "net: xen-netback: use API
provided by xenbus module to map rings", which I've just noticed hasn't
been committed anywhere.

I suspect that building on David's patches (that series does something
similar to blkback too) will greatly simplify this one since you can
just patch xenbus_map_ring_valloc and friends.

Could you also explain where the requirement to use xenballooned pages
and not alloc_vm_area comes from in your commit message.

David, I guess you should resend your series now that everyone is happy
with it. If you cc the netback one to netdev@ with my Ack then Dave
Miller will pick it up into his tree (it stands alone, right?). The
blkback and grant-table ones go via Konrad I think. I suspect the last
one needs to go via akpm, or at least with his Ack.

>
> /* Frontend feature information. */
> u8 can_sg:1;
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index fd00f25..f35e07c 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -42,6 +42,7 @@
>
> #include <xen/events.h>
> #include <xen/interface/memory.h>
> +#include <xen/balloon.h>
>
> #include <asm/xen/hypercall.h>
> #include <asm/xen/page.h>
> @@ -1578,9 +1579,11 @@ static int xen_netbk_kthread(void *data)
> void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
> {
> struct gnttab_unmap_grant_ref op;
> + void *addr;
>
> if (vif->tx.sring) {
> - gnttab_set_unmap_op(&op, (unsigned long)vif->tx_comms_area->addr,
> + addr = pfn_to_kaddr(page_to_pfn(vif->tx_comms_page));
> + gnttab_set_unmap_op(&op, (unsigned long)addr,
> GNTMAP_host_map, vif->tx_shmem_handle);
>
> if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
> @@ -1588,16 +1591,17 @@ void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
> }
>
> if (vif->rx.sring) {
> - gnttab_set_unmap_op(&op, (unsigned long)vif->rx_comms_area->addr,
> + addr = pfn_to_kaddr(page_to_pfn(vif->rx_comms_page));
> + gnttab_set_unmap_op(&op, (unsigned long)addr,
> GNTMAP_host_map, vif->rx_shmem_handle);
>
> if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
> BUG();
> }
> - if (vif->rx_comms_area)
> - free_vm_area(vif->rx_comms_area);
> - if (vif->tx_comms_area)
> - free_vm_area(vif->tx_comms_area);
> + if (vif->rx_comms_page)
> + free_xenballooned_pages(1, &vif->rx_comms_page);
> + if (vif->tx_comms_page)
> + free_xenballooned_pages(1, &vif->tx_comms_page);
> }
>
> int xen_netbk_map_frontend_rings(struct xenvif *vif,
> @@ -1610,15 +1614,19 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
>
> int err = -ENOMEM;
>
> - vif->tx_comms_area = alloc_vm_area(PAGE_SIZE);
> - if (vif->tx_comms_area == NULL)
> + if (alloc_xenballooned_pages(1, &vif->tx_comms_page))
> goto err;
>
> - vif->rx_comms_area = alloc_vm_area(PAGE_SIZE);
> - if (vif->rx_comms_area == NULL)
> + txs = (struct xen_netif_tx_sring *)pfn_to_kaddr(page_to_pfn(
> + vif->tx_comms_page));
> +
> + if (alloc_xenballooned_pages(1, &vif->rx_comms_page))
> goto err;
>
> - gnttab_set_map_op(&op, (unsigned long)vif->tx_comms_area->addr,
> + rxs = (struct xen_netif_rx_sring *)pfn_to_kaddr(page_to_pfn(
> + vif->rx_comms_page));
> +
> + gnttab_set_map_op(&op, (unsigned long)txs,
> GNTMAP_host_map, tx_ring_ref, vif->domid);
>
> if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> @@ -1635,10 +1643,9 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
> vif->tx_shmem_ref = tx_ring_ref;
> vif->tx_shmem_handle = op.handle;
>
> - txs = (struct xen_netif_tx_sring *)vif->tx_comms_area->addr;
> BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
>
> - gnttab_set_map_op(&op, (unsigned long)vif->rx_comms_area->addr,
> + gnttab_set_map_op(&op, (unsigned long)rxs,
> GNTMAP_host_map, rx_ring_ref, vif->domid);
>
> if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> @@ -1656,7 +1663,6 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
> vif->rx_shmem_handle = op.handle;
> vif->rx_req_cons_peek = 0;
>
> - rxs = (struct xen_netif_rx_sring *)vif->rx_comms_area->addr;
> BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
>
> return 0;



Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 19/10/11 10:04, Ian Campbell wrote:
> On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
>> For proper grant mappings, HVM guests require pages allocated using
>> alloc_xenballooned_pages instead of alloc_vm_area.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>> ---
>> drivers/net/xen-netback/common.h | 4 ++--
>> drivers/net/xen-netback/netback.c | 34 ++++++++++++++++++++--------------
>> 2 files changed, 22 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
>> index 161f207..d5ee9d1 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -70,8 +70,8 @@ struct xenvif {
>> /* The shared rings and indexes. */
>> struct xen_netif_tx_back_ring tx;
>> struct xen_netif_rx_back_ring rx;
>> - struct vm_struct *tx_comms_area;
>> - struct vm_struct *rx_comms_area;
>> + struct page *tx_comms_page;
>> + struct page *rx_comms_page;
>
> This will conflict with David Vrabel's patch "net: xen-netback: use API
> provided by xenbus module to map rings", which I've just noticed hasn't
> been committed anywhere.
>
> I suspect that building on David's patches (that series does something
> similar to blkback too) will greatly simplify this one since you can
> just patch xenbus_map_ring_valloc and friends.
>
> Could you also explain where the requirement to use xenballooned pages
> and not alloc_vm_area comes from in your commit message.
>
> David, I guess you should resend your series now that everyone is happy
> with it. If you cc the netback one to netdev@ with my Ack then Dave
> Miller will pick it up into his tree (it stands alone, right?). The
> blkback and grant-table ones go via Konrad I think. I suspect the last
> one needs to go via akpm, or at least with his Ack.

I thought Konrad had picked them all up -- they were on his stuff queued
for 3.2 list.

>
>>
>> /* Frontend feature information. */
>> u8 can_sg:1;
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index fd00f25..f35e07c 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -42,6 +42,7 @@
>>
>> #include <xen/events.h>
>> #include <xen/interface/memory.h>
>> +#include <xen/balloon.h>
>>
>> #include <asm/xen/hypercall.h>
>> #include <asm/xen/page.h>
>> @@ -1578,9 +1579,11 @@ static int xen_netbk_kthread(void *data)
>> void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
>> {
>> struct gnttab_unmap_grant_ref op;
>> + void *addr;
>>
>> if (vif->tx.sring) {
>> - gnttab_set_unmap_op(&op, (unsigned long)vif->tx_comms_area->addr,
>> + addr = pfn_to_kaddr(page_to_pfn(vif->tx_comms_page));
>> + gnttab_set_unmap_op(&op, (unsigned long)addr,
>> GNTMAP_host_map, vif->tx_shmem_handle);
>>
>> if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
>> @@ -1588,16 +1591,17 @@ void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
>> }
>>
>> if (vif->rx.sring) {
>> - gnttab_set_unmap_op(&op, (unsigned long)vif->rx_comms_area->addr,
>> + addr = pfn_to_kaddr(page_to_pfn(vif->rx_comms_page));
>> + gnttab_set_unmap_op(&op, (unsigned long)addr,
>> GNTMAP_host_map, vif->rx_shmem_handle);
>>
>> if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
>> BUG();
>> }
>> - if (vif->rx_comms_area)
>> - free_vm_area(vif->rx_comms_area);
>> - if (vif->tx_comms_area)
>> - free_vm_area(vif->tx_comms_area);
>> + if (vif->rx_comms_page)
>> + free_xenballooned_pages(1, &vif->rx_comms_page);
>> + if (vif->tx_comms_page)
>> + free_xenballooned_pages(1, &vif->tx_comms_page);
>> }
>>
>> int xen_netbk_map_frontend_rings(struct xenvif *vif,
>> @@ -1610,15 +1614,19 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
>>
>> int err = -ENOMEM;
>>
>> - vif->tx_comms_area = alloc_vm_area(PAGE_SIZE);
>> - if (vif->tx_comms_area == NULL)
>> + if (alloc_xenballooned_pages(1, &vif->tx_comms_page))
>> goto err;
>>
>> - vif->rx_comms_area = alloc_vm_area(PAGE_SIZE);
>> - if (vif->rx_comms_area == NULL)
>> + txs = (struct xen_netif_tx_sring *)pfn_to_kaddr(page_to_pfn(
>> + vif->tx_comms_page));
>> +
>> + if (alloc_xenballooned_pages(1, &vif->rx_comms_page))
>> goto err;
>>
>> - gnttab_set_map_op(&op, (unsigned long)vif->tx_comms_area->addr,
>> + rxs = (struct xen_netif_rx_sring *)pfn_to_kaddr(page_to_pfn(
>> + vif->rx_comms_page));
>> +
>> + gnttab_set_map_op(&op, (unsigned long)txs,
>> GNTMAP_host_map, tx_ring_ref, vif->domid);
>>
>> if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
>> @@ -1635,10 +1643,9 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
>> vif->tx_shmem_ref = tx_ring_ref;
>> vif->tx_shmem_handle = op.handle;
>>
>> - txs = (struct xen_netif_tx_sring *)vif->tx_comms_area->addr;
>> BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
>>
>> - gnttab_set_map_op(&op, (unsigned long)vif->rx_comms_area->addr,
>> + gnttab_set_map_op(&op, (unsigned long)rxs,
>> GNTMAP_host_map, rx_ring_ref, vif->domid);
>>
>> if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
>> @@ -1656,7 +1663,6 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
>> vif->rx_shmem_handle = op.handle;
>> vif->rx_req_cons_peek = 0;
>>
>> - rxs = (struct xen_netif_rx_sring *)vif->rx_comms_area->addr;
>> BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
>>
>> return 0;
>
>


Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 10/19/2011 05:04 AM, Ian Campbell wrote:
> On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
>> For proper grant mappings, HVM guests require pages allocated using
>> alloc_xenballooned_pages instead of alloc_vm_area.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>> ---
>> drivers/net/xen-netback/common.h | 4 ++--
>> drivers/net/xen-netback/netback.c | 34 ++++++++++++++++++++--------------
>> 2 files changed, 22 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
>> index 161f207..d5ee9d1 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -70,8 +70,8 @@ struct xenvif {
>> /* The shared rings and indexes. */
>> struct xen_netif_tx_back_ring tx;
>> struct xen_netif_rx_back_ring rx;
>> - struct vm_struct *tx_comms_area;
>> - struct vm_struct *rx_comms_area;
>> + struct page *tx_comms_page;
>> + struct page *rx_comms_page;
>
> This will conflict with David Vrabel's patch "net: xen-netback: use API
> provided by xenbus module to map rings", which I've just noticed hasn't
> been committed anywhere.
>
> I suspect that building on David's patches (that series does something
> similar to blkback too) will greatly simplify this one since you can
> just patch xenbus_map_ring_valloc and friends.

Looks like that should be possible; I didn't see that there was already
an attempt to centralize the mappings.

It seems like the best place to modify is xen_alloc_vm_area, which should
be used in place of alloc_vm_area for grant mappings. On HVM, this area
needs valid PFNs in the guest, which are obtained from the balloon
driver.

> Could you also explain where the requirement to use xenballooned pages
> and not alloc_vm_area comes from in your commit message.

(Will move to commit message). In PV guests, it is sufficient to only
reserve kernel address space for grant mappings because Xen modifies the
mappings directly. HVM guests require that Xen modify the GFN-to-MFN
mapping, so the pages being remapped must already be allocated. Pages
obtained from alloc_xenballooned_pages have valid GFNs not currently
mapped to an MFN, so are available to be used in grant mappings.

> David, I guess you should resend your series now that everyone is happy
> with it. If you cc the netback one to netdev@ with my Ack then Dave
> Miller will pick it up into his tree (it stands alone, right?). The
> blkback and grant-table ones go via Konrad I think. I suspect the last
> one needs to go via akpm, or at least with his Ack.
>
>>
>> /* Frontend feature information. */
>> u8 can_sg:1;
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index fd00f25..f35e07c 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -42,6 +42,7 @@
>>
>> #include <xen/events.h>
>> #include <xen/interface/memory.h>
>> +#include <xen/balloon.h>
>>
>> #include <asm/xen/hypercall.h>
>> #include <asm/xen/page.h>
>> @@ -1578,9 +1579,11 @@ static int xen_netbk_kthread(void *data)
>> void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
>> {
>> struct gnttab_unmap_grant_ref op;
>> + void *addr;
>>
>> if (vif->tx.sring) {
>> - gnttab_set_unmap_op(&op, (unsigned long)vif->tx_comms_area->addr,
>> + addr = pfn_to_kaddr(page_to_pfn(vif->tx_comms_page));
>> + gnttab_set_unmap_op(&op, (unsigned long)addr,
>> GNTMAP_host_map, vif->tx_shmem_handle);
>>
>> if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
>> @@ -1588,16 +1591,17 @@ void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
>> }
>>
>> if (vif->rx.sring) {
>> - gnttab_set_unmap_op(&op, (unsigned long)vif->rx_comms_area->addr,
>> + addr = pfn_to_kaddr(page_to_pfn(vif->rx_comms_page));
>> + gnttab_set_unmap_op(&op, (unsigned long)addr,
>> GNTMAP_host_map, vif->rx_shmem_handle);
>>
>> if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
>> BUG();
>> }
>> - if (vif->rx_comms_area)
>> - free_vm_area(vif->rx_comms_area);
>> - if (vif->tx_comms_area)
>> - free_vm_area(vif->tx_comms_area);
>> + if (vif->rx_comms_page)
>> + free_xenballooned_pages(1, &vif->rx_comms_page);
>> + if (vif->tx_comms_page)
>> + free_xenballooned_pages(1, &vif->tx_comms_page);
>> }
>>
>> int xen_netbk_map_frontend_rings(struct xenvif *vif,
>> @@ -1610,15 +1614,19 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
>>
>> int err = -ENOMEM;
>>
>> - vif->tx_comms_area = alloc_vm_area(PAGE_SIZE);
>> - if (vif->tx_comms_area == NULL)
>> + if (alloc_xenballooned_pages(1, &vif->tx_comms_page))
>> goto err;
>>
>> - vif->rx_comms_area = alloc_vm_area(PAGE_SIZE);
>> - if (vif->rx_comms_area == NULL)
>> + txs = (struct xen_netif_tx_sring *)pfn_to_kaddr(page_to_pfn(
>> + vif->tx_comms_page));
>> +
>> + if (alloc_xenballooned_pages(1, &vif->rx_comms_page))
>> goto err;
>>
>> - gnttab_set_map_op(&op, (unsigned long)vif->tx_comms_area->addr,
>> + rxs = (struct xen_netif_rx_sring *)pfn_to_kaddr(page_to_pfn(
>> + vif->rx_comms_page));
>> +
>> + gnttab_set_map_op(&op, (unsigned long)txs,
>> GNTMAP_host_map, tx_ring_ref, vif->domid);
>>
>> if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
>> @@ -1635,10 +1643,9 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
>> vif->tx_shmem_ref = tx_ring_ref;
>> vif->tx_shmem_handle = op.handle;
>>
>> - txs = (struct xen_netif_tx_sring *)vif->tx_comms_area->addr;
>> BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
>>
>> - gnttab_set_map_op(&op, (unsigned long)vif->rx_comms_area->addr,
>> + gnttab_set_map_op(&op, (unsigned long)rxs,
>> GNTMAP_host_map, rx_ring_ref, vif->domid);
>>
>> if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
>> @@ -1656,7 +1663,6 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
>> vif->rx_shmem_handle = op.handle;
>> vif->rx_req_cons_peek = 0;
>>
>> - rxs = (struct xen_netif_rx_sring *)vif->rx_comms_area->addr;
>> BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
>>
>> return 0;
>
>

Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On Wed, 2011-10-19 at 11:39 +0100, David Vrabel wrote:
> On 19/10/11 10:04, Ian Campbell wrote:
> > On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
> >> For proper grant mappings, HVM guests require pages allocated using
> >> alloc_xenballooned_pages instead of alloc_vm_area.
> >>
> >> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> >> ---
> >> drivers/net/xen-netback/common.h | 4 ++--
> >> drivers/net/xen-netback/netback.c | 34 ++++++++++++++++++++--------------
> >> 2 files changed, 22 insertions(+), 16 deletions(-)
> >>
> >> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> >> index 161f207..d5ee9d1 100644
> >> --- a/drivers/net/xen-netback/common.h
> >> +++ b/drivers/net/xen-netback/common.h
> >> @@ -70,8 +70,8 @@ struct xenvif {
> >> /* The shared rings and indexes. */
> >> struct xen_netif_tx_back_ring tx;
> >> struct xen_netif_rx_back_ring rx;
> >> - struct vm_struct *tx_comms_area;
> >> - struct vm_struct *rx_comms_area;
> >> + struct page *tx_comms_page;
> >> + struct page *rx_comms_page;
> >
> > This will conflict with David Vrabel's patch "net: xen-netback: use API
> > provided by xenbus module to map rings", which I've just noticed hasn't
> > been committed anywhere.
> >
> > I suspect that building on David's patches (that series does something
> > similar to blkback too) will greatly simplify this one since you can
> > just patch xenbus_map_ring_valloc and friends.
> >
> > Could you also explain where the requirement to use xenballooned pages
> > and not alloc_vm_area comes from in your commit message.
> >
> > David, I guess you should resend your series now that everyone is happy
> > with it. If you cc the netback one to netdev@ with my Ack then Dave
> > Miller will pick it up into his tree (it stands alone, right?). The
> > blkback and grant-table ones go via Konrad I think. I suspect the last
> > one needs to go via akpm, or at least with his Ack.
>
> I thought Konrad had picked them all up -- they were on his stuff queued
> for 3.2 list.

Perhaps, git://oss.oracle.com doesn't seem to be responding so I can't
tell. In any case the netback stuff ought to go via David Miller.

Ian.

>
> >
> >>
> >> /* Frontend feature information. */
> >> u8 can_sg:1;
> >> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> >> index fd00f25..f35e07c 100644
> >> --- a/drivers/net/xen-netback/netback.c
> >> +++ b/drivers/net/xen-netback/netback.c
> >> @@ -42,6 +42,7 @@
> >>
> >> #include <xen/events.h>
> >> #include <xen/interface/memory.h>
> >> +#include <xen/balloon.h>
> >>
> >> #include <asm/xen/hypercall.h>
> >> #include <asm/xen/page.h>
> >> @@ -1578,9 +1579,11 @@ static int xen_netbk_kthread(void *data)
> >> void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
> >> {
> >> struct gnttab_unmap_grant_ref op;
> >> + void *addr;
> >>
> >> if (vif->tx.sring) {
> >> - gnttab_set_unmap_op(&op, (unsigned long)vif->tx_comms_area->addr,
> >> + addr = pfn_to_kaddr(page_to_pfn(vif->tx_comms_page));
> >> + gnttab_set_unmap_op(&op, (unsigned long)addr,
> >> GNTMAP_host_map, vif->tx_shmem_handle);
> >>
> >> if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
> >> @@ -1588,16 +1591,17 @@ void xen_netbk_unmap_frontend_rings(struct xenvif *vif)
> >> }
> >>
> >> if (vif->rx.sring) {
> >> - gnttab_set_unmap_op(&op, (unsigned long)vif->rx_comms_area->addr,
> >> + addr = pfn_to_kaddr(page_to_pfn(vif->rx_comms_page));
> >> + gnttab_set_unmap_op(&op, (unsigned long)addr,
> >> GNTMAP_host_map, vif->rx_shmem_handle);
> >>
> >> if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
> >> BUG();
> >> }
> >> - if (vif->rx_comms_area)
> >> - free_vm_area(vif->rx_comms_area);
> >> - if (vif->tx_comms_area)
> >> - free_vm_area(vif->tx_comms_area);
> >> + if (vif->rx_comms_page)
> >> + free_xenballooned_pages(1, &vif->rx_comms_page);
> >> + if (vif->tx_comms_page)
> >> + free_xenballooned_pages(1, &vif->tx_comms_page);
> >> }
> >>
> >> int xen_netbk_map_frontend_rings(struct xenvif *vif,
> >> @@ -1610,15 +1614,19 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
> >>
> >> int err = -ENOMEM;
> >>
> >> - vif->tx_comms_area = alloc_vm_area(PAGE_SIZE);
> >> - if (vif->tx_comms_area == NULL)
> >> + if (alloc_xenballooned_pages(1, &vif->tx_comms_page))
> >> goto err;
> >>
> >> - vif->rx_comms_area = alloc_vm_area(PAGE_SIZE);
> >> - if (vif->rx_comms_area == NULL)
> >> + txs = (struct xen_netif_tx_sring *)pfn_to_kaddr(page_to_pfn(
> >> + vif->tx_comms_page));
> >> +
> >> + if (alloc_xenballooned_pages(1, &vif->rx_comms_page))
> >> goto err;
> >>
> >> - gnttab_set_map_op(&op, (unsigned long)vif->tx_comms_area->addr,
> >> + rxs = (struct xen_netif_rx_sring *)pfn_to_kaddr(page_to_pfn(
> >> + vif->rx_comms_page));
> >> +
> >> + gnttab_set_map_op(&op, (unsigned long)txs,
> >> GNTMAP_host_map, tx_ring_ref, vif->domid);
> >>
> >> if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> >> @@ -1635,10 +1643,9 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
> >> vif->tx_shmem_ref = tx_ring_ref;
> >> vif->tx_shmem_handle = op.handle;
> >>
> >> - txs = (struct xen_netif_tx_sring *)vif->tx_comms_area->addr;
> >> BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
> >>
> >> - gnttab_set_map_op(&op, (unsigned long)vif->rx_comms_area->addr,
> >> + gnttab_set_map_op(&op, (unsigned long)rxs,
> >> GNTMAP_host_map, rx_ring_ref, vif->domid);
> >>
> >> if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
> >> @@ -1656,7 +1663,6 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif,
> >> vif->rx_shmem_handle = op.handle;
> >> vif->rx_req_cons_peek = 0;
> >>
> >> - rxs = (struct xen_netif_rx_sring *)vif->rx_comms_area->addr;
> >> BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
> >>
> >> return 0;
> >
> >
>



Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 19/10/11 16:01, Daniel De Graaf wrote:
> On 10/19/2011 05:04 AM, Ian Campbell wrote:
>> On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
>>> For proper grant mappings, HVM guests require pages allocated using
>>> alloc_xenballooned_pages instead of alloc_vm_area.
>>>
>>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>> ---
[...]
>> Could you also explain where the requirement to use xenballooned pages
>> and not alloc_vm_area comes from in your commit message.
>
> (Will move to commit message). In PV guests, it is sufficient to only
> reserve kernel address space for grant mappings because Xen modifies the
> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
> mapping, so the pages being remapped must already be allocated. Pages
> obtained from alloc_xenballooned_pages have valid GFNs not currently
> mapped to an MFN, so are available to be used in grant mappings.

Why doesn't (or can't?) Xen add new entries to the GFN-to-MFN map? Or
why hasn't it reserved a range of GFNs in the map for this?

David

Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 10/19/2011 12:32 PM, David Vrabel wrote:
> On 19/10/11 16:01, Daniel De Graaf wrote:
>> On 10/19/2011 05:04 AM, Ian Campbell wrote:
>>> On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
>>>> For proper grant mappings, HVM guests require pages allocated using
>>>> alloc_xenballooned_pages instead of alloc_vm_area.
>>>>
>>>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>>> ---
> [...]
>>> Could you also explain where the requirement to use xenballooned pages
>>> and not alloc_vm_area comes from in your commit message.
>>
>> (Will move to commit message). In PV guests, it is sufficient to only
>> reserve kernel address space for grant mappings because Xen modifies the
>> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
>> mapping, so the pages being remapped must already be allocated. Pages
>> obtained from alloc_xenballooned_pages have valid GFNs not currently
>> mapped to an MFN, so are available to be used in grant mappings.
>
> Why doesn't (or can't?) Xen add new entries to the GFN-to-MFN map? Or
> why hasn't it reserved a range of GFNs in the map for this?
>
> David
>

That would be another way for Xen to solve this, but it would require
that the reserved GFN range be large enough for all mappings the guest
does, and the range would also need to be managed by the hypervisor. By
allowing the guest to specify which GFN to remap in the grant operation,
the guest can use any memory that either was not populated at startup or
was returned to the hypervisor via XENMEM_decrease_reservation. For
32-bit guests with highmem, this also allows the guest to choose whether
the grant-mapped pages are in high or low memory, rather than forcing
them to be wherever the reserved GFN range ended up. Such a change would
also break API compatibility, since the guest would need to read GFNs
back from the grant operation and map those GFNs.

--
Daniel De Graaf
National Security Agency

Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
> (Will move to commit message). In PV guests, it is sufficient to only
> reserve kernel address space for grant mappings because Xen modifies the
> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
> mapping, so the pages being remapped must already be allocated. Pages

By allocated you mean the populate_physmap hypercall must happen before
the grant operations are done?

(When I see allocated I think alloc_page, which I believe is _not_ what
you were saying).

> obtained from alloc_xenballooned_pages have valid GFNs not currently
> mapped to an MFN, so are available to be used in grant mappings.
>

Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 10/24/2011 05:40 PM, Konrad Rzeszutek Wilk wrote:
>> (Will move to commit message). In PV guests, it is sufficient to only
>> reserve kernel address space for grant mappings because Xen modifies the
>> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
>> mapping, so the pages being remapped must already be allocated. Pages
>
> By allocated you mean the populate_physmap hypercall must happen before
> the grant operations are done?
>
> (When I see allocated I think alloc_page, which I believe is _not_ what
> you were saying).

The pages must be valid kernel pages (with GFNs) which are actually obtained
with alloc_page if the balloon doesn't have any sitting around for us. They
must also *not* be populated in the physmap, which is why we grab them from
the balloon and not from alloc_page directly.

>
>> obtained from alloc_xenballooned_pages have valid GFNs not currently
>> mapped to an MFN, so are available to be used in grant mappings.
>>
>

Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On Mon, Oct 24, 2011 at 05:47:40PM -0400, Daniel De Graaf wrote:
> On 10/24/2011 05:40 PM, Konrad Rzeszutek Wilk wrote:
> >> (Will move to commit message). In PV guests, it is sufficient to only
> >> reserve kernel address space for grant mappings because Xen modifies the
> >> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
> >> mapping, so the pages being remapped must already be allocated. Pages
> >
> > By allocated you mean the populate_physmap hypercall must happen before
> > the grant operations are done?
> >
> > (When I see allocated I think alloc_page, which I believe is _not_ what
> > you were saying).
>
> The pages must be valid kernel pages (with GFNs) which are actually obtained
> with alloc_page if the balloon doesn't have any sitting around for us. They
> must also *not* be populated in the physmap, which is why we grab them from
> the balloon and not from alloc_page directly.

Uh, aren't pages from alloc_page ("if the balloon does not have any sitting around
for us") obtained from normal memory that is allocated at startup? And at startup,
aren't those swaths of memory obtained by a populate_physmap call?


Re: [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 10/24/2011 06:08 PM, Konrad Rzeszutek Wilk wrote:
> On Mon, Oct 24, 2011 at 05:47:40PM -0400, Daniel De Graaf wrote:
>> On 10/24/2011 05:40 PM, Konrad Rzeszutek Wilk wrote:
>>>> (Will move to commit message). In PV guests, it is sufficient to only
>>>> reserve kernel address space for grant mappings because Xen modifies the
>>>> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
>>>> mapping, so the pages being remapped must already be allocated. Pages
>>>
>>> By allocated you mean the populate_physmap hypercall must happen before
>>> the grant operations are done?
>>>
>>> (When I see allocated I think alloc_page, which I believe is _not_ what
>>> you were saying).
>>
>> The pages must be valid kernel pages (with GFNs) which are actually obtained
>> with alloc_page if the balloon doesn't have any sitting around for us. They
>> must also *not* be populated in the physmap, which is why we grab them from
>> the balloon and not from alloc_page directly.
>
> Uh, aren't pages from alloc_page ("if the balloon does not have any sitting around
> for us") obtained from normal memory that is allocated at startup. And at startup
> those swaths of memory are obtained by populate_physmap call?
>

Yes, but alloc_xenballooned_pages calls XENMEM_decrease_reservation to remove the
MFN mappings for these pages, so they are returned to the state where populate_physmap
has not been called on them.
