Bits of Info


July 9, 2016
Author: Jeremiah Plaskett

Cisco Expressway Collaboration Edge Network Interface Options

In a previous Collaboration blog a couple of years ago, we here at Byteworks documented some of the wonderful ups and downs we ran into during the early days of deploying Cisco's Collaboration Edge architecture.  You can read about all the fun we had here, but first a little refresher on what the Edge architecture does.  If your company currently runs Cisco Unified Communications Manager, Unity Connection, and the rest of the on-premises IP Telephony system (aka Business Edition 6000 or 7000), and you want to extend that architecture to remote, VPN-less clients, whether they use a Cisco softphone like Cisco Jabber or a physical Cisco IP desk phone (not all models are supported; the 7800 and 8800 series are the primary supported desk models), Cisco's "Expressway Edge" product is what you seek.  It consists of two Virtual Machines (VMs), one called the "Core" and one called the "Edge".  These VMs are typically built on the same UCS system as your other Cisco UC application VMs (CUCM, CUC, CIMP, etc.).  Here is a nice illustration that shows the overall topology:

[SRND topology diagram]

In addition to "Mobile and Remote Access" for IP Telephony endpoints, the solution also supports Business-to-Business calling (for example, two different companies that deploy Cisco IPT with the Edge architecture can dial each other via SIP URI, e.g. jdoe@acme.com, and even get video), "Collaboration Meeting Rooms" (CMR) integration, which is a virtual meeting room with local or cloud-based bridging, and Cisco Jabber Guest (the ability to send instant messages/chat between consumers and businesses).  The architecture also supports dialing other H.323 IP systems for legacy video system support.

All that said, the rest of this blog will focus specifically on the "Edge" component of the solution, and more specifically on the deployment options for the network interfaces of the Edge system itself.  The Expressway Edge offers two options: a single-NIC or a dual-NIC design.  Within each of these are even more choices, and some are less secure than others.  The exact method an engineer chooses will be determined by how secure you want the deployment to be, how easy you want it to be to administer, and a bunch of other potential gotchas that you sometimes don't even think about until you are well down deployment drive.  After having been through numerous deployments, we have settled on the one we like best.  Now, let's go through the options, and we will share some of our insights on the differences between each method and why we prefer one over the others…

First…the single-NIC approach…keep it simple, right?  Put an external public IP on it, and then just let it communicate directly with the "Core" Expressway.  We've run into environments that are set up this way.  We consider it less secure, since the Edge sits directly on the public network, so we avoid this option.

What about a single NIC, but in the DMZ?  Yes, you can go single NIC: place LAN1 in the DMZ, and it would be 1-to-1 NATted to the public IP.  In this model, you have to make sure that the inside and outside flows are allowed through in each direction.  The inside flows run between LAN1 in the DMZ and the "Core" on the voice-server VLAN; the outside flows run between clients on the outside and the external NAT address.  When deploying Expressway clusters, this is not our recommended method.  The configuration required for a single interface to do 1-to-1 NAT to both the inside and the outside is not an everyday configuration, and it is not something most firewall administrators we have run into know how to do.  We should also add that this approach depends on NAT reflection (also referred to as hairpin NAT), which isn't supported by all firewalls and can be more complicated to configure and support.
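To make those flow directions concrete, here is a minimal sketch of what the firewall's 1-to-1 NAT has to do in this single-NIC DMZ model, including the hairpin case.  All addresses are made-up examples (RFC 5737 documentation ranges and private space), not Cisco defaults, and the helper function is purely illustrative:

```python
import ipaddress

# Illustrative addresses only: Expressway-E LAN1 in the DMZ, its 1-to-1
# public NAT address, the firewall's DMZ-side address, and the inside
# voice-server VLAN where the Expressway "Core" lives.
EDGE_DMZ_IP    = ipaddress.ip_address("172.16.10.5")
EDGE_PUBLIC_IP = ipaddress.ip_address("203.0.113.10")
FW_DMZ_IP      = ipaddress.ip_address("172.16.10.1")
INSIDE_NET     = ipaddress.ip_network("10.10.20.0/24")

def translate(src, dst):
    """Return the (src, dst) pair as strings after the firewall's 1-to-1 NAT."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if dst == EDGE_PUBLIC_IP and src in INSIDE_NET:
        # NAT reflection (hairpin): an inside host hit the public address,
        # so the firewall must DNAT to the DMZ IP *and* SNAT the source,
        # or the return traffic bypasses the firewall entirely.
        return (str(FW_DMZ_IP), str(EDGE_DMZ_IP))
    if dst == EDGE_PUBLIC_IP:
        # Normal outside-in flow: DNAT the public address to the DMZ address.
        return (str(src), str(EDGE_DMZ_IP))
    if src == EDGE_DMZ_IP and dst not in INSIDE_NET:
        # Outbound flow to the Internet: SNAT the DMZ address to the public one.
        return (str(EDGE_PUBLIC_IP), str(dst))
    # Flows addressed to the DMZ IP directly (inside flows) route untranslated.
    return (str(src), str(dst))

print(translate("198.51.100.7", "203.0.113.10"))  # outside client coming in
print(translate("10.10.20.15", "203.0.113.10"))   # hairpin / NAT-reflection case
```

The hairpin branch is exactly the "not your everyday configuration" part: three distinct translation behaviors hang off one interface, which is why this model trips up so many firewall configurations.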

In addition, in this model we've seen a lot of issues with getting the NAT working correctly and playing nicely with cluster communications between geographically redundant sites.  One major complication is that clustering communications (which used to use IPsec but, as of the X8.8 code, use TLS, nice!) cannot be NATted (funny, spellcheck tells me "NATted" is not a word, but it is in our world).  So, in a single-NIC deployment, you have to NAT the communications between the Expressway-C and E, and you want the traffic to the outside NATted, but communication between the Expressway-E cluster peers must be excluded from NAT.  You have to make specific NAT exceptions for these scenarios; otherwise traffic can get accidentally NATted and cause havoc, with a peer not realizing it is talking to the same device.  All of this adds to the reasons to stay away from this deployment model where possible; it is simply more cumbersome and administratively painful.
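The exception logic above boils down to a simple no-NAT check: cluster-peer traffic must bypass translation while everything else headed outside gets NATted.  A rough sketch, with peer addresses invented for illustration:

```python
import ipaddress

# Illustrative Expressway-E cluster peer addresses (made up for this sketch),
# one per geographically redundant site.
CLUSTER_PEERS = {ipaddress.ip_address(a) for a in ("172.16.10.5", "172.16.20.5")}

def needs_nat(src, dst):
    """Decide whether the firewall should translate this flow."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    # Clustering traffic (TLS as of X8.8) must see the peers' real addresses
    # end to end, so peer-to-peer flows are explicitly exempted from NAT.
    if src in CLUSTER_PEERS and dst in CLUSTER_PEERS:
        return False
    # Any other flow leaving an Expressway-E toward the outside gets NATted.
    return True

print(needs_nat("172.16.10.5", "172.16.20.5"))   # cluster peer traffic: no NAT
print(needs_nat("172.16.10.5", "198.51.100.7"))  # traffic to the Internet: NAT
```

The check itself is trivial; the pain in practice is expressing that exemption correctly in the firewall's NAT rule ordering so cluster traffic never matches the broader NAT rule by accident.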

The dual-NIC option is the best approach, with a twist, however.  We recommend putting both interfaces in the DMZ, with dedicated IPs for the inside and the outside.  You can use two separate DMZs (which requires some static routing), or theoretically you can split the network in half, basically tricking the box, because it can only have one default gateway.  (For example, you would use a subnet mask of 255.255.255.128 for both halves, and they would share the same default gateway.  Full disclosure: we have not yet tested this method but have read that it works; some static routing may still be required.)  By going this route, you ensure that both interfaces sit behind a firewall, and both inside and outside traffic flows through it.  It is markedly safer than putting one interface on the outside (public) and one inside (on the same VLAN as the voice servers), which we have seen a lot, yet it is less complex than single-NIC deployments.  Dual NIC simplifies things because everything on the inside uses the DMZ IP with no NATting; the only NATting is for connectivity to the WAN.  This method avoids NAT complications and the need to untangle inside versus outside firewall rules, and it ultimately simplifies both the firewall rules and the deployment.
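The "split the network in half" idea can be checked with Python's ipaddress module: carving one DMZ /24 into two /25s gives each Expressway-E interface its own half with the 255.255.255.128 mask mentioned above.  The 172.16.10.0/24 range here is just an example:

```python
import ipaddress

# Example DMZ range; carve it into two /25 halves, one per Expressway-E NIC.
dmz = ipaddress.ip_network("172.16.10.0/24")
inside_half, outside_half = dmz.subnets(new_prefix=25)

print(inside_half)           # LAN1 (inside-facing) half
print(outside_half)          # LAN2 (outside-facing) half
print(inside_half.netmask)   # the 255.255.255.128 mask from the text

# The halves do not overlap, so static routes can steer inside traffic out
# LAN1 while the single default gateway handles everything else.
assert not inside_half.overlaps(outside_half)
```

This is only the addressing math, of course; as noted above, we haven't tested this variant ourselves, and the static routes still have to be defined on the Expressway so inside-bound traffic leaves the correct interface.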

Note:  Rob Patmore, UC Engineer at Byteworks co-authored this blog.