Mailing List Archive

Frontier Tampa issues
Anyone else seeing weird things on Tampa/Bradenton FIOS connections?

I've got three unrelated customers that can't establish IPsec back to me.

And a fourth that can't process credit cards out to their third-party
merchant.

Customers are in 47.196.0.0/14.

In all instances, I see the traffic leave the CPE behind the FIOS circuit.
The IPsec traffic never makes it to my DC. No clue on the credit card
traffic, but it goes un-ACK'd.

And just now a fifth has appeared that can't query DNS against 8.8.8.8.
Queries go out and responses never come back.

The first four all started around noon today.
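
For anyone wanting to sanity-check the DNS symptom from an affected circuit, here is a minimal probe sketch (not from the original post): it hand-builds a single A query, sends it to 8.8.8.8 over UDP, and reports whether any response arrives before a timeout. The query name and timeout are arbitrary choices.

# Minimal DNS reachability probe: send one A query to 8.8.8.8 over UDP
# and report whether any response arrives before the timeout.
# The query name and timeout below are arbitrary illustrative choices.
import random
import socket
import struct

def dns_probe(server="8.8.8.8", qname="example.com", timeout=3.0):
    txid = random.randint(0, 0xFFFF)
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: QNAME as length-prefixed labels, QTYPE=A (1), QCLASS=IN (1)
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in qname.split(".")
    ) + b"\x00" + struct.pack(">HH", 1, 1)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(header + question, (server, 53))
        data, _ = sock.recvfrom(512)
        # Match the transaction ID so we know the reply is ours.
        return len(data) >= 2 and struct.unpack(">H", data[:2])[0] == txid
    except socket.timeout:
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    ok = dns_probe()
    print("response received" if ok else "query sent, no response (timed out)")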
Re: Frontier Tampa issues [ In reply to ]
Yes, exact same issue for us, and fortunately it has happened to us before, a few years ago. Any chance the route takes a Level 3 (3356) path? I’m just theorizing here, but my belief is they have some kind of link aggregation in the path from Tampa Bay to 3356 (or maybe just internal, near some edge), and some traffic is getting hashed onto a problematic link/interface/linecard, etc., where IPsec gets dropped. One of our locations lost IPsec connectivity to some of its normal VPN endpoints but not others. And here’s why I think this is the issue: if you change the source and/or destination IP address by one, you may find some or all of your sessions magically work again.
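
To make the hashing theory concrete, here is a toy sketch (an illustration only, not anything known about Frontier's actual gear): a LAG or ECMP group typically picks a member link by hashing header fields such as source and destination address, so nudging one address by a single integer can land a flow in a different hash bucket, while most other flows stay put.

# Toy illustration of flow-to-member hashing on a LAG/ECMP group.
# The hash function, member count, and addresses are arbitrary; real
# routers use vendor-specific hashes over various header fields.
import hashlib
import ipaddress

def pick_member(src_ip: str, dst_ip: str, members: int = 4) -> int:
    """Map a (src, dst) flow onto one of `members` links."""
    key = f"{src_ip}->{dst_ip}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % members

def shift(ip: str, offset: int) -> str:
    """Return the address `offset` away from `ip` (e.g. x.x.x.5 -> x.x.x.4)."""
    return str(ipaddress.ip_address(ip) + offset)

if __name__ == "__main__":
    src, dst = "47.196.10.25", "192.0.2.50"   # illustrative addresses only
    bad_member = 2                             # pretend this member drops ESP
    for offset in range(0, -4, -1):            # the source IP and its neighbors
        candidate = shift(src, offset)
        member = pick_member(candidate, dst)
        status = "BLACKHOLED" if member == bad_member else "ok"
        print(f"{candidate} -> {dst}: member {member} ({status})")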

In our case, one of our office locations has a static assignment of (fortunately) five IPs. Only one is externally exposed, and we run four site-to-site VPNs; two began failing Saturday morning. I moved the office firewall’s external IP down by one, which fixed both but broke one that had been fine. On the remote end I fortunately have equipment that can override the local IP used for VPN traffic, so without impacting other things it talks to, I was able to add a new IP one off from the previous one and use it just for traffic to this office location; that fixed the remaining issue.

If I hadn’t seen this several years ago, and wasted who knows how many hours figuring it out back then, it would once again have taken forever to resolve. Trying to get through their support layers to someone who can really help is impossible. The support is complete garbage at this point, after the Verizon dump; I was going to say the service is, but that has been stable outside of these random weird issues that are impossible to resolve through support.

I tried to be a nice guy and raise this through the support channels, but I couldn’t make it past the layer where they want me to take our office down so someone can plug a laptop in with our normal WAN IP and “prove” IPsec isn’t working with different equipment. I was like, dude, I just told you what I did to get it working again; I offered packet captures and asked them to just escalate it, but ultimately gave up and hung up.

David
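
On the packet-capture angle mentioned above, a rough sketch of how one might compare captures from the two ends (the file names, and the assumption that you have a capture at both the CPE side and the DC edge, are placeholders): count IKE/ESP packets per source/destination pair in each file and flag flows that leave one end but never show up at the other. It uses scapy.

# Compare IPsec-related packets seen in two captures, e.g. one taken behind
# the CPE and one at the DC edge, to find flows that leave one end but never
# arrive at the other. File names are placeholders. Requires scapy.
from collections import Counter
from scapy.all import rdpcap, IP, UDP

IKE_PORTS = {500, 4500}   # IKE and NAT-T
ESP_PROTO = 50            # IP protocol number for ESP

def ipsec_flows(pcap_path: str) -> Counter:
    """Count IKE/ESP packets per (src, dst) pair in a capture file."""
    flows = Counter()
    for pkt in rdpcap(pcap_path):
        if not pkt.haslayer(IP):
            continue
        ip = pkt[IP]
        is_esp = ip.proto == ESP_PROTO
        is_ike = pkt.haslayer(UDP) and (
            pkt[UDP].sport in IKE_PORTS or pkt[UDP].dport in IKE_PORTS
        )
        if is_esp or is_ike:
            flows[(ip.src, ip.dst)] += 1
    return flows

if __name__ == "__main__":
    sent = ipsec_flows("cpe-side.pcap")      # capture behind the FIOS CPE
    received = ipsec_flows("dc-side.pcap")   # capture at the DC edge
    for flow, count in sent.items():
        if flow not in received:
            print(f"{flow[0]} -> {flow[1]}: {count} pkts left CPE, 0 arrived at DC")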

From: NANOG <nanog-bounces+dhubbard=dino.hostasaurus.com@nanog.org> on behalf of Nick Olsen <nick@141networks.com>
Date: Sunday, January 24, 2021 at 8:42 PM
To: "nanog@nanog.org" <nanog@nanog.org>
Subject: Frontier Tampa issues

Re: Frontier Tampa issues [ In reply to ]
Nail on the head, David.

I've got about three dozen IPsec connections that land back at our DC. Only
a few were affected, some physically near each other, etc.; no rhyme or
reason to the selection, so LAG hashing makes perfect sense. Due to the
moderator queue, my message was delayed; this started around noon Saturday.
As of this morning (Monday), it's resolved. I did spend about an hour trying
to get a ticket in, and did, but never heard back. Took me about a half-hour
to convince the rep I didn't want someone dispatched.
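
As a back-of-the-envelope check on why only a few of ~36 tunnels would break with no geographic pattern: assuming uniform per-flow hashing across an N-member bundle with a single bad member (an assumption, not anything known about the actual topology), each tunnel independently has roughly a 1/N chance of landing on the bad link.

# Rough expectation for how many of T tunnels land on one bad member of an
# N-way bundle, assuming uniform per-flow hashing (an assumption, not
# anything known about Frontier's network).
from math import comb

def p_exactly_k(tunnels: int, members: int, k: int) -> float:
    """Binomial probability that exactly k tunnels hash onto the bad member."""
    p = 1 / members
    return comb(tunnels, k) * p**k * (1 - p) ** (tunnels - k)

if __name__ == "__main__":
    tunnels = 36
    for members in (4, 8, 16):
        expected = tunnels / members
        p_few = sum(p_exactly_k(tunnels, members, k) for k in range(0, 7))
        print(f"{members}-member bundle: expect ~{expected:.1f} affected tunnels; "
              f"P(6 or fewer affected) = {p_few:.2f}")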

Yes, in this case the AS path was 5650 > 6939 > me. However, the return path
is different: I'm not getting any routes from 5650 on that 6939 session.
After talking to HE about it, they claim the routes are intentionally
filtered due to capacity issues on the port, waiting for 5650 to augment.
Sounds like the classic peering staring contest to me. It was nice for a
while, and it kept us out of trouble during the last "nationwide" Level 3
outage, as all other paths crossed 3356.
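
For anyone checking whether their own paths cross 3356 or 5650, a trivial sketch: given AS-path strings copied from router output or a public looking glass (the sample paths below are made up purely for illustration), flag which ones traverse a suspect ASN.

# Flag AS paths that traverse a suspect ASN. Sample paths are illustrative.
SUSPECT_ASNS = {"3356", "5650"}

sample_paths = [
    "6939 5650",        # e.g. an outbound path via HE then Frontier
    "3356 5650",        # a path crossing Level 3
    "174 3356 5650",
]

def crosses_suspect(as_path: str, suspects=SUSPECT_ASNS) -> set:
    """Return the suspect ASNs that appear anywhere in the path string."""
    return suspects.intersection(as_path.split())

if __name__ == "__main__":
    for path in sample_paths:
        hit = crosses_suspect(path)
        note = f"crosses {', '.join(sorted(hit))}" if hit else "clean"
        print(f"{path:<20} {note}")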

Given that the outbound route from the Frontier circuit didn't touch Level 3
(in my case), and that's the direction where my loss was occurring (what left
the CPE never made it to my 6939 port), it doesn't look like this was
specific to a LAG between 5650 and 3356, though.
