Mailing List Archive

Fwd: Fwd: [Xen-users] Shared memory between Dom0 and DomU
Hello,

I'm still reading documents about inter-VM communication via shared memory.
I found many references to IVC.
Is it already present in Xen? How can I use it?

I really need some explanations.
Thanks!



-------- Original Message --------
Subject: Fwd: [Xen-users] Shared memory between Dom0 and DomU
Date: Wed, 09 Mar 2011 15:12:06 +0100
From: MAYAP christine <christine.mayapkamga@enseeiht.fr>
To: xen-devel@lists.xensource.com, xen-users@lists.xensource.com



Hello,

While waiting for some help, I continued to google this topic.
In fact, I want two distinct processes (one on Dom0 and the second one
on the DomU) to be able to read from and write to a shared memory region.

Many posts talk about granting permissions or the grant table.

I'm sorry if my question is a bit stupid. Where would I start in order
to test those grant permissions?
Do I need to write an external program, or should I modify some Xen
source code?

I really have no idea where to start.
I'll sincerely appreciate any idea, even one just stating where to start.

Thanks in advance!

-------- Original Message --------
Subject: [Xen-users] Shared memory between Dom0 and DomU
Date: Wed, 09 Mar 2011 13:07:19 +0100
From: MAYAP christine <christine.mayapkamga@enseeiht.fr>
To: xen-devel@lists.xensource.com, xen-users@lists.xensource.com



Hi,

I'm a newbie at using shared memory with Xen. I have usually used it
between processes on the same computer.

Please, can I have some useful links about sharing memory between Dom0
and DomU first, and then between DomUs?

I'm able to give more information if needed!

Cheers!

--
MAYAP Christine
IRIT/ENSEEIHT
2 rue Charles Camichel - BP 7122
31071 Toulouse cedex 7


_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Re: Fwd: Fwd: [Xen-users] Shared memory between Dom0 and DomU
On Thu, Mar 10, 2011 at 11:03:25AM +0100, MAYAP christine wrote:
> Hello,
>
> I'm still reading documents about inter-VM communication via shared memory.
> I found many references to IVC.

The IVC is, I think, in the XCP source code. Google for the XCP source code
and you should find it.

> Is it already present in Xen? How can I use it?

You could also look at the gntalloc driver.
Look in git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git #master tree.


I think it pretty much does what you want. Here is test code that
you can use to communicate between domains using mmap-ed memory.


Courtesy of Daniel De Graaf.
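It is a small interactive tool: it opens /dev/xen/gntalloc and /dev/xen/gntdev, so both
device nodes need to exist in the domain you run it in, and since it carries its own
copies of the ioctl definitions it should build with a plain gcc invocation.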

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

struct ioctl_gntdev_grant_ref {
    /* The domain ID of the grant to be mapped. */
    uint32_t domid;
    /* The grant reference of the grant to be mapped. */
    uint32_t ref;
};

/*
 * Allocates a new page and creates a new grant reference.
 */
#define IOCTL_GNTALLOC_ALLOC_GREF \
    _IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_gntalloc_alloc_gref))
struct ioctl_gntalloc_alloc_gref {
    /* IN parameters */
    /* The ID of the domain to be given access to the grants. */
    uint16_t domid;
    /* Flags for this mapping */
    uint16_t flags;
    /* Number of pages to map */
    uint32_t count;
    /* OUT parameters */
    /* The offset to be used on a subsequent call to mmap(). */
    uint64_t index;
    /* The grant references of the newly created grant, one per page */
    /* Variable size, depending on count */
    uint32_t gref_ids[1];
};

#define GNTALLOC_FLAG_WRITABLE 1

/*
 * Deallocates the grant reference, allowing the associated page to be freed if
 * no other domains are using it.
 */
#define IOCTL_GNTALLOC_DEALLOC_GREF \
    _IOC(_IOC_NONE, 'G', 6, sizeof(struct ioctl_gntalloc_dealloc_gref))
struct ioctl_gntalloc_dealloc_gref {
    /* IN parameters */
    /* The offset returned in the map operation */
    uint64_t index;
    /* Number of references to unmap */
    uint32_t count;
};

#define IOCTL_GNTDEV_MAP_GRANT_REF \
    _IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_gntdev_map_grant_ref))
struct ioctl_gntdev_map_grant_ref {
    /* IN parameters */
    /* The number of grants to be mapped. */
    uint32_t count;
    uint32_t pad;
    /* OUT parameters */
    /* The offset to be used on a subsequent call to mmap(). */
    uint64_t index;
    /* Variable IN parameter. */
    /* Array of grant references, of size @count. */
    struct ioctl_gntdev_grant_ref refs[1];
};
#define GNTDEV_MAP_WRITABLE 0x1

#define IOCTL_GNTDEV_UNMAP_GRANT_REF \
    _IOC(_IOC_NONE, 'G', 1, sizeof(struct ioctl_gntdev_unmap_grant_ref))
struct ioctl_gntdev_unmap_grant_ref {
    /* IN parameters */
    /* The offset was returned by the corresponding map operation. */
    uint64_t index;
    /* The number of pages to be unmapped. */
    uint32_t count;
    uint32_t pad;
};

/*
 * Sets up an unmap notification within the page, so that the other side can do
 * cleanup if this side crashes. Required to implement cross-domain robust
 * mutexes or close notification on communication channels.
 *
 * Each mapped page only supports one notification; multiple calls referring to
 * the same page overwrite the previous notification. You must clear the
 * notification prior to the IOCTL_GNTALLOC_DEALLOC_GREF if you do not want it
 * to occur.
 */
#define IOCTL_GNTDEV_SET_UNMAP_NOTIFY \
    _IOC(_IOC_NONE, 'G', 7, sizeof(struct ioctl_gntdev_unmap_notify))
struct ioctl_gntdev_unmap_notify {
    /* IN parameters */
    /* Index of a byte in the page */
    uint64_t index;
    /* Action(s) to take on unmap */
    uint32_t action;
    /* Event channel to notify */
    uint32_t event_channel_port;
};

/* Clear (set to zero) the byte specified by index */
#define UNMAP_NOTIFY_CLEAR_BYTE 0x1
/* Send an interrupt on the indicated event channel */
#define UNMAP_NOTIFY_SEND_EVENT 0x2

/*
 * Sets up an unmap notification within the page, so that the other side can do
 * cleanup if this side crashes. Required to implement cross-domain robust
 * mutexes or close notification on communication channels.
 *
 * Each mapped page only supports one notification; multiple calls referring to
 * the same page overwrite the previous notification. You must clear the
 * notification prior to the IOCTL_GNTALLOC_DEALLOC_GREF if you do not want it
 * to occur.
 */
#define IOCTL_GNTALLOC_SET_UNMAP_NOTIFY \
    _IOC(_IOC_NONE, 'G', 7, sizeof(struct ioctl_gntalloc_unmap_notify))
struct ioctl_gntalloc_unmap_notify {
    /* IN parameters */
    /* Index of a byte in the page */
    uint64_t index;
    /* Action(s) to take on unmap */
    uint32_t action;
    /* Event channel to notify */
    uint32_t event_channel_port;
};

/* Clear (set to zero) the byte specified by index */
#define UNMAP_NOTIFY_CLEAR_BYTE 0x1
/* Send an interrupt on the indicated event channel */
#define UNMAP_NOTIFY_SEND_EVENT 0x2

#ifndef offsetof
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
#endif


int a_fd;
int d_fd;

struct shr_page {
    uint64_t id;
    char buffer[64];
    uint8_t notifies[8];
};

struct data {
    struct shr_page* mem;
    int handle;
} items[128];
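
/*
 * Flow of the interactive commands implemented below:
 *   "add <domid>"       -> sa():  ask /dev/xen/gntalloc for a page granted to
 *                          <domid>, mmap() it at the returned index, and arm
 *                          an unmap notification on notifies[0].
 *   "map <domid> <ref>" -> mm():  ask /dev/xen/gntdev to map the peer's grant,
 *                          mmap() it at the returned index, and arm an unmap
 *                          notification on the first free notifies[] byte.
 *   "adel"/"ddel"/"unmap" -> sd()/gu()/mu():  stop offering the grant, unhook
 *                          the mapping, or munmap() the page again.
 * Both sides can then read and write shr_page.buffer in the shared page.
 */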

void sa(int id)
{
    struct ioctl_gntalloc_alloc_gref arg = {
        .domid = id,
        .flags = GNTALLOC_FLAG_WRITABLE,
        .count = 1
    };
    int rv = ioctl(a_fd, IOCTL_GNTALLOC_ALLOC_GREF, &arg);
    if (rv) {
        printf("src-add error: %s (rv=%d)\n", strerror(errno), rv);
        return;
    }
    int i = 0;
    while (items[i].mem) i++;
    items[i].mem = mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, a_fd, arg.index);
    if (items[i].mem == MAP_FAILED) {
        items[i].mem = 0;
        printf("mmap failed: SHOULD NOT HAPPEN\n");
        return;
    }
    items[i].handle = arg.index;
    printf("Created shared page with domain %d, grant #%d. Mapped locally at %llu=%p\n",
           id, arg.gref_ids[0], (unsigned long long)arg.index, items[i].mem);

    items[i].mem->id = rand() | ((long)(getpid()) << 32);
    items[i].mem->notifies[0] = 1;
    struct ioctl_gntalloc_unmap_notify uarg = {
        .index = arg.index + offsetof(struct shr_page, notifies[0]),
        .action = UNMAP_NOTIFY_CLEAR_BYTE
    };
    rv = ioctl(a_fd, IOCTL_GNTALLOC_SET_UNMAP_NOTIFY, &uarg);
    if (rv)
        printf("gntalloc unmap notify error: %s (rv=%d)\n", strerror(errno), rv);
}

void sd(int ref) {
    struct ioctl_gntalloc_dealloc_gref arg = {
        .index = ref,
        .count = 1
    };

    int rv = ioctl(a_fd, IOCTL_GNTALLOC_DEALLOC_GREF, &arg);
    if (rv)
        printf("src-del error: %s (rv=%d)\n", strerror(errno), rv);
    else
        printf("Stopped offering grant at offset %d\n", ref);
}

void mm(int domid, int refid) {
    struct ioctl_gntdev_map_grant_ref arg = {
        .count = 1,
        .refs[0].domid = domid,
        .refs[0].ref = refid,
    };
    int rv = ioctl(d_fd, IOCTL_GNTDEV_MAP_GRANT_REF, &arg);
    if (rv) {
        printf("Could not map grant %d.%d: %s (rv=%d)\n", domid, refid, strerror(errno), rv);
        return;
    }
    int i = 0, j = 1;
    while (items[i].mem) i++;
    items[i].mem = mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, d_fd, arg.index);
    if (items[i].mem == MAP_FAILED) {
        items[i].mem = 0;
        printf("Could not map grant %d.%d: %s (map failed)\n", domid, refid, strerror(errno));
        return;
    }
    items[i].handle = arg.index;
    printf("Mapped grant %d.%d as %llu=%p\n", domid, refid, (unsigned long long)arg.index, items[i].mem);

    while (items[i].mem->notifies[j]) j++;
    items[i].mem->notifies[j] = 1;
    struct ioctl_gntdev_unmap_notify uarg = {
        .index = arg.index + offsetof(struct shr_page, notifies[j]),
        .action = UNMAP_NOTIFY_CLEAR_BYTE
    };
    rv = ioctl(d_fd, IOCTL_GNTDEV_SET_UNMAP_NOTIFY, &uarg);
    if (rv)
        printf("gntdev unmap notify error: %s (rv=%d)\n", strerror(errno), rv);
}

void gu(int index) {
    struct ioctl_gntdev_unmap_grant_ref arg = {
        .index = index,
        .count = 1,
    };
    int rv = ioctl(d_fd, IOCTL_GNTDEV_UNMAP_GRANT_REF, &arg);
    if (rv)
        printf("gu error: %s (rv=%d)\n", strerror(errno), rv);
    else
        printf("Unhooked mapped grant at offset %d\n", index);
}

void mu(void* addr) {
    int i = 0;
    munmap(addr, 4096);
    while (i < 128)
    {
        if (items[i].mem == addr)
            items[i].mem = 0;
        i++;
    }
    printf("Unmapped page at %p\n", addr);
}

void show(char* word) {
    int i;
    int wlen = strlen(word);
    for (i = 0; i < 128; i++) {
        if (!items[i].mem)
            continue;
        memmove(items[i].mem->buffer + wlen, items[i].mem->buffer, 63 - wlen);
        memcpy(items[i].mem->buffer, word, wlen);
        printf("%02d(%p,%d): id %16lx n=%d%d%d%d%d%d%d%d b=%s\n",
               i, (void*)items[i].mem, items[i].handle, (unsigned long)items[i].mem->id,
               items[i].mem->notifies[0], items[i].mem->notifies[1], items[i].mem->notifies[2], items[i].mem->notifies[3],
               items[i].mem->notifies[4], items[i].mem->notifies[5], items[i].mem->notifies[6], items[i].mem->notifies[7],
               items[i].mem->buffer);
    }
    printf("END\n");
}

int main(int argc, char** argv) {
    a_fd = open("/dev/xen/gntalloc", O_RDWR);
    d_fd = open("/dev/xen/gntdev", O_RDWR);
    printf(
        "add <domid>        return gntref, address\n"
        "map <domid> <ref>  return index, address\n"
        "adel <gntref>      delete <add> internal\n"
        "ddel <index>       delete <map> internal\n"
        "unmap <address>    unmap memory\n"
        "show               show all pages\n"
        "<word>             append word to all mapped pages, show\n"
        " PID %x\n", getpid()
    );
    while (1) {
        char line[80];
        char word[80] = "";
        long a, b;
        printf("\n> ");
        fflush(stdout);
        if (!fgets(line, 80, stdin))
            break;
        sscanf(line, "%s %ld %ld", word, &a, &b);
        if (!strcmp(word, "add")) {
            sa(a);
        } else if (!strcmp(word, "map")) {
            mm(a, b);
        } else if (!strcmp(word, "adel")) {
            sd(a);
        } else if (!strcmp(word, "ddel")) {
            gu(a);
        } else if (!strcmp(word, "unmap")) {
            mu((void*)a);
        } else if (!strcmp(word, "show")) {
            show("");
        } else {
            show(word);
        }
    }
}
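
Roughly how you would exercise it: run one instance in the domain offering the page and
type "add <peer domid>" to get a grant reference, then run another instance in the peer
domain and type "map <offering domid> <ref>" with the numbers printed by the first side.
Any other word typed at either prompt is written into the shared buffer of every page that
instance has mapped, so the other end can see the writes with "show". The domids and grant
references are just whatever your setup prints; the angle-bracketed values above are only
placeholders for the command syntax.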

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: Fwd: Fwd: [Xen-users] Shared memory between Dom0 and DomU
On Tue, Mar 15, 2011 at 11:17:14PM -0400, Konrad Rzeszutek Wilk wrote:
> On Thu, Mar 10, 2011 at 11:03:25AM +0100, MAYAP christine wrote:
> > Hello,
> >
> > I'm still reading documents about inter-VM communication via shared memory.
> > I found many references to IVC.
>
> The IVC is, I think, in the XCP source code. Google for the XCP source code
> and you should find it.
>

There's also V4V in Citrix XenClient; the sources are available in the source ISO.
It includes Xen patches, Linux kernel patches, and user-space libraries providing a socket-like API (for Linux and Windows).

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: Fwd: Fwd: [Xen-users] Shared memory between Dom0 and DomU
On Wed, 2011-03-16 at 07:58 +0000, Pasi Kärkkäinen wrote:
> On Tue, Mar 15, 2011 at 11:17:14PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Thu, Mar 10, 2011 at 11:03:25AM +0100, MAYAP christine wrote:
> > > Hello,
> > >
> > > I'm still reading documents about inter-VM communication via shared memory.
> > > I found many references to IVC.
> >
> > The IVC is, I think, in the XCP source code. Google for the XCP source code
> > and you should find it.
> >
>
> There's also V4V in Citrix XenClient; the sources are available in the source ISO.
> It includes Xen patches, Linux kernel patches, and user-space libraries providing a socket-like API (for Linux and Windows).

I wasn't aware of an IVC mechanism in XCP, so I suspect this is what
Konrad was thinking of.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: Fwd: Fwd: [Xen-users] Shared memory between Dom0 and DomU
On Wed, Mar 16, 2011 at 08:03:02AM +0000, Ian Campbell wrote:
> On Wed, 2011-03-16 at 07:58 +0000, Pasi Kärkkäinen wrote:
> > On Tue, Mar 15, 2011 at 11:17:14PM -0400, Konrad Rzeszutek Wilk wrote:
> > > On Thu, Mar 10, 2011 at 11:03:25AM +0100, MAYAP christine wrote:
> > > > Hello,
> > > >
> > > > I'm still reading documents about inter-VM communication via shared memory.
> > > > I found many references to IVC.
> > >
> > > The IVC is, I think, in the XCP source code. Google for the XCP source code
> > > and you should find it.
> > >
> >
> > There's also V4V in Citrix XenClient; the sources are available in the source ISO.
> > It includes Xen patches, Linux kernel patches, and user-space libraries providing a socket-like API (for Linux and Windows).
>
> I wasn't aware of an IVC mechanism in XCP, so I suspect this is what
> Konrad was thinking of.

That was it. V4V... it sounds similar enough to IVC. Three-letter acronym.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel