Mailing List Archive

RE: RE: Rather slow time of Ping in Windows with GPLPV driver
>
> I did post a patch ages ago. It was deemed a bit too hacky. I think it would
> probably be better to re-examine the way Windows PV drivers are handling
> interrupts. It would be much nicer if we could properly bind event channels
> across all our vCPUs; we may be able to leverage what Stefano did for Linux
> PV-on-HVM.
>

What would also be nice is to have multiple interrupts attached to the platform PCI driver, the ability to bind events to a specific interrupt, and control over the affinity of each interrupt.

Another idea would be for each xenbus device to hotplug a new PCI device with its own interrupt. That only works for OSes that support PCI hotplug, though...

MSI interrupts might be another way of conveying event channel information as part of the interrupt, but I don't know enough about how MSI works to know whether that is possible. I believe you still need one IRQ per 'message id', so you're back to my first wish-list item.

Under Windows, if we set the affinity of the platform PCI IRQ to CPU0, will that do the job (bind the IRQ to CPU0), or are there inefficiencies in doing that?

James


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
RE: RE: Rather slow time of Ping in Windows with GPLPV driver [ In reply to ]
Nope, limiting the affinity mask before your IoConnectInterrupt(Ex) call will work just fine, although you do risk Windows not giving you an interrupt if it decides, for some reason, that it's out of vectors on CPU0. Pretty small risk, though, given that the interrupt is shareable :-)

Paul

> -----Original Message-----
> From: James Harper [mailto:james.harper@bendigoit.com.au]
> Sent: 13 March 2011 23:44
> To: Paul Durrant; Pasi Kärkkäinen
> Cc: MaoXiaoyun; xen devel
> Subject: RE: [Xen-devel] RE: Rather slow time of Ping in Windows
> with GPLPV driver
>
> >
> > I did post a patch ages ago. It was deemed a bit too hacky. I think
> > it would probably be better to re-examine the way Windows PV drivers
> > are handling interrupts. It would be much nicer if we could properly
> > bind event channels across all our vCPUs; we may be able to leverage
> > what Stefano did for Linux PV-on-HVM.
> >
>
> What would also be nice is to have multiple interrupts attached to
> the platform PCI driver, the ability to bind events to a specific
> interrupt, and control over the affinity of each interrupt.
>
> Another idea would be for each xenbus device to hotplug a new PCI
> device with its own interrupt. That only works for OSes that support
> PCI hotplug, though...
>
> MSI interrupts might be another way of conveying event channel
> information as part of the interrupt, but I don't know enough about
> how MSI works to know whether that is possible. I believe you still
> need one IRQ per 'message id', so you're back to my first wish-list
> item.
>
> Under Windows, if we set the affinity of the platform PCI IRQ to
> CPU0, will that do the job (bind the IRQ to CPU0), or are there
> inefficiencies in doing that?
>
> James

