Mailing List Archive

vMX and SR-IOV, VFP xml dump
Hi folks,

Wondering if anyone would share their XML dump from the VFP - the interface
config for a working SR-IOV setup, please. (Not sure if vCPU-related info is
needed as well; probably only if CPU pinning is used.)

The Juniper script is of no use to me here, as I'd like to understand the
underlying XML settings myself.

adam

_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: vMX and SR-IOV, VFP xml dump
Hi,

On 19/08/2019 6:59 pm, adamv0025@netconsultings.com wrote:
> Wondering if anyone would share their xml dump from VFP -the interface
> config for a working SR-IOV setup please. (not sure if vCPU related info is
> needed as well, probably if cpu pinning is used)

Sure. This is with 4 x Intel X710 NICs passed through with SR-IOV.

vmx.conf: https://pastebin.com/raw/eFrXT9au
Resulting virsh XML: https://pastebin.com/raw/zK2xnHUW
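If you want to pull the equivalent dump from your own host, something like
the below should do it (the domain name is whatever vmx.sh generated for
you; mine is just an example):

# list the guests vmx.sh defined, then dump the vFP one
virsh list --all
virsh dumpxml vfp-vmx1 > vfp-vmx1.xml
# the SR-IOV ports show up as <interface type='hostdev'> entries,
# and any CPU pinning sits under the <cputune> block
grep -iA6 hostdev vfp-vmx1.xml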
Re: vMX and SR-IOV, VFP xml dump
> Chris
> Sent: Tuesday, August 20, 2019 3:05 AM
>
> Hi,
>
> On 19/08/2019 6:59 pm, adamv0025@netconsultings.com wrote:
> > Wondering if anyone would share their xml dump from VFP -the interface
> > config for a working SR-IOV setup please. (not sure if vCPU related
> > info is needed as well, probably if cpu pinning is used)
>
> Sure. This is with 4 x Intel X710 NIC's passed through with SR-IOV.
>
> vmx.conf: https://pastebin.com/raw/eFrXT9au Resulting virsh XML:
> https://pastebin.com/raw/zK2xnHUW
>
Thank you, much appreciated.
Out of curiosity, what latency do you get when pinging through the vMX, please?

adam

Re: vMX and SR-IOV, VFP xml dump
Hi

On 21/08/2019 3:32 pm, adamv0025@netconsultings.com wrote:
> Thank you, much appreciated.
> Out of curiosity what latency you get when pinging through the vMX please?

It's less than 1/10th of a millisecond (while routing roughly 3 Gbit/s of
traffic, and that via a GRE tunnel running over IPsec terminated on the
vMX). I haven't done more testing to get exact figures, though, as this is
good enough for my needs.

I am actually curious, though: why not use the vmx.sh script to
start/stop it? I don't think JTAC will support more than basic
troubleshooting with that configuration, but I could be wrong.

The only thing that annoys me slightly with vmx.sh is that the management
interface on the host that gets used for OOB on the vFP/vRE loses its
IPv6 address when the IPs are moved to the bridge interface it creates.
It's not a big deal, as for host management I use a different
interface anyway and IPv6 continues to work fine.

If you are doing a new deployment I strongly recommend you jump to
19.1R1 or higher. The reason for this is that the Juniper-supplied drivers
for i40e (and ixgbe) are no longer required (in fact they are
deprecated). On all releases before 19.1R1 I had constant issues with
the vFP crashing, and the closest to a fix I got was a software package
that would restart the vFPC automatically. When the crash occurred it
would show up in the host's kernel log that a PF reset had occurred.
This happened across multiple Ubuntu and CentOS releases. After
deploying 19.1R1 with the latest Intel-supplied i40e and iavf
(replacement for i40evf) drivers it has been stable for me.
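If anyone wants to check whether they are hitting the same thing, the
reset shows up in the host kernel log; the exact message wording varies by
driver version, but roughly:

dmesg -T | grep -iE "i40e|reset"
journalctl -k | grep -i i40e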

Since deploying 19.1R1, on startup I create the VFs and mark them as
trusted instead of letting the vmx.sh script handle it. Happy to supply
the startup script I made if it's helpful.
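In rough terms the script just does something like this per physical port
(interface names here are placeholders and the real thing has a bit more
error handling):

for PF in ens2f0 ens2f1 ens2f2 ens2f3; do
    echo 0 > /sys/class/net/$PF/device/sriov_numvfs   # clear any existing VFs
    echo 1 > /sys/class/net/$PF/device/sriov_numvfs   # create a single VF on each PF
    ip link set $PF vf 0 trust on                     # lets the vMX manage MAC/VLAN/promisc on the VF
    ip link set $PF vf 0 spoofchk off
done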

Re: vMX and SR-IOV, VFP xml dump
> From: Chris <lists+j-nsp@gbe0.com>
> Sent: Thursday, August 22, 2019 6:44 AM
>
> Hi
>
> On 21/08/2019 3:32 pm, adamv0025@netconsultings.com wrote:
> > Thank you, much appreciated.
> > Out of curiosity what latency you get when pinging through the vMX
> please?
>
> It's less than 1/10th of a millisecond (while routing roughly 3gbit of traffic and
> this via a GRE tunnel running over IPSEC terminated on the vMX), I haven't
> done more testing to get exact figures though as this is good enough for my
> needs.
>
For some reason mine is acting as if there's some kind of throttling or PPS-rate performance issue.
This is pinging not to the vMX but rather through the vMX, so only the VFP is in play.

ping 192.0.2.6 source 192.0.2.2 interval 1
PING 192.0.2.6 (192.0.2.6) from 192.0.2.2: 56 data bytes
64 bytes from 192.0.2.6: icmp_seq=0 ttl=253 time=1.021 ms
64 bytes from 192.0.2.6: icmp_seq=1 ttl=253 time=0.861 ms
64 bytes from 192.0.2.6: icmp_seq=2 ttl=253 time=0.83 ms
64 bytes from 192.0.2.6: icmp_seq=3 ttl=253 time=0.85 ms
64 bytes from 192.0.2.6: icmp_seq=4 ttl=253 time=1.115 ms

ping 192.0.2.6 source 192.0.2.2
PING 192.0.2.6 (192.0.2.6) from 192.0.2.2: 56 data bytes
64 bytes from 192.0.2.6: icmp_seq=0 ttl=253 time=1.202 ms
64 bytes from 192.0.2.6: icmp_seq=1 ttl=253 time=7.988 ms
64 bytes from 192.0.2.6: icmp_seq=2 ttl=253 time=7.968 ms
64 bytes from 192.0.2.6: icmp_seq=3 ttl=253 time=8.047 ms
64 bytes from 192.0.2.6: icmp_seq=4 ttl=253 time=7.918 ms


> I am actually curious though, why not use the vmx.sh script to start/stop it? I
> don't think JTAC will support more than basic troubleshooting with that
> configuration but I could be wrong.
>
Unfortunately I have to say that so far the JTAC support has been useless.
My biggest problem with vmx.sh is that it does a lot of stuff behind the scenes that is not documented anywhere.
It would be much better if the documentation explained how the information in the .conf file translates into actions or settings.
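About the only way I can see to find out at the moment is to trace it,
something like:

bash -x ./vmx.sh -lv --install 2>&1 | tee vmx-trace.log   # at least shows the commands it runs

which is a poor substitute for proper documentation.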


>
> If you are doing a new deployment I strongly recommend you jump to
> 19.1R1 or higher. The reason for this is the Juniper supplied drivers for i40e
> (and ixgbe) are no longer required (actually they are deprecated). All
> releases before 19.1R1 I have had constant issues with the vFP crashing and
> the closest to a fix I got was a software package that would restart the vFPC
> automatically. When the crash occured it would show in the hosts kernel log
> file that a PF reset has occured.
> This happened across multiple Ubuntu and CentOS releases. After deploying
> 19.1R1 with the latest Intel supplied i40e and iavf (replacement for i40evf)
> drivers it has been stable for me.
>
Hmm, good to know, but yes, I'm using 19.2R1 for testing at the moment (it has support for 40G interfaces).

> Since deploying 19.1R1, on startup I create the VF's and mark them as trusted
> instead of letting the vmx.sh script handle it. Happy to supply the startup
> script I made if its helpful.
>
Yes please, if you could share the .conf file that would be great.

adam

Re: vMX and SR-IOV, VFP xml dump
https://github.com/pinggit/vmx-tutorial

does a pretty good job of explaining what is happening in the background.

Simon.


--

Dicko.
Re: vMX and SR-IOV, VFP xml dump
Hi,

On 22/08/2019 6:34 pm, adamv0025@netconsultings.com wrote:
> For some reason mine is acting as if there's some kind of throttling or pps rate performance issue.
> This is pinging not to the vMX but rather through the vMX so only VFP is at play.

Interesting, how much traffic are you talking about in PPS? For me it
varies between 100k PPS in (and the same amount out) and 550k.
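For what it's worth, the per-interface packet rates are easy enough to
eyeball on the host with something along the lines of:

sar -n DEV 1 | grep ens2f0   # rxpck/s and txpck/s columns; interface name is just an example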

Are you using hyperthreading?

Is there anything throttling the CPU at the OS level or BIOS level? E.g.
for Dell servers the performance profile should be selected.

Assuming it's an Intel CPU, have you tried disabling the various
vulnerability mitigations?
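A few quick things worth checking on the host (standard sysfs/proc paths;
values will obviously differ per box):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # want "performance", not "powersave"
grep . /sys/devices/system/cpu/vulnerabilities/*            # shows which mitigations are active
cat /proc/cmdline                                           # e.g. mitigations=off if you have turned them off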

> Unfortunately I have to say that so far the JTAC support has been useless.

Agreed, the only good support I have had for the vMX has been with the
developers directly.

> My biggest problem with the vmx.sh is that it does a lot of stuff behind the scenes which are not documented anywhere.
> It would be much better if the documentation explained how the information in the .conf file translates into actions or settings.

Fair enough, for me it isn't a problem but I understand where you are
coming from.

> Hmm good to know but yes using 19.2R1 currently for testing (has support for 40G interfaces)

I have not got around to testing this release yet; it would be good if
you could share any successes/failures.

> Yes please if you could share the .conf file that would be great.

The config file was the same as I provided earlier. The startup script
is available here:

https://pastebin.com/raw/TyNCP4Jv

The script makes a few assumptions that may not apply to your environment:

* The installation path to the vMX (I extract the vMX tar.gz to
/home/vMX-<RELEASE>). I then create a symlink for /home/vMX to the
release that I am using.

* All interfaces I have defined are SR-IOV and there is a single VF.

* The host OS is CentOS.

* I hand off the rest of the startup to vmx.sh; the only reason I use
this script is to make sure that the NIC setup is correct.

You can probably make some improvements to the script (on my list of
things to do is making it apply to a wider range of setups), but I have
not been able to find the time yet.
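For context, usage on my side is roughly the below (the setup script name
is just what I happen to call it locally):

ln -sfn /home/vMX-19.1R1 /home/vMX      # point /home/vMX at the extracted release
/home/vMX/vf-setup.sh                   # the NIC/VF prep script linked above
cd /home/vMX && ./vmx.sh -lv --install  # hand the rest off to vmx.sh as usual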

Thanks