The two components of the current revolution in software network control – SDN and NFV – are often mentioned together and merged in discussions, but they are very different animals. While most industry experts will agree that NFV will have its greatest impact at the edge of the network in most cases (see the multiple examples of downloaded DPI engines and firewalls shown at recent trade shows), it is often less clear that SDN will also have a large impact at the edge. Many of the examples shown on large screens in hotel ballrooms are of core bandwidth-on-demand applications (see “SDN is not (Just) Bandwidth on Demand”), but the real revolution in SDN – as with NFV – will begin at the edge of the network.

Network Functions Virtualization (NFV) is the easier of the two innovations to understand. The concept of NFV is that functions that used to run on specialized hardware will be moved into software components that can run just about anywhere in the network.

A prime case study for NFV is a real-world customer who operates a non-facilities-based service, turning up and managing enterprise access for businesses. This customer installs interface hardware at a customer site, then leases capacity back into the core network from a local provider. Their value-add is in the services they are able to offer on top of basic bandwidth connectivity (e.g. firewalls, deep packet inspection (DPI), virus management, and security). In the traditional model, every time an enterprise called up to add a new feature, the customer would send someone to the site to install new hardware. With NFV, this customer would be able to push a few buttons and download the functionality directly into the existing hardware, saving time and money and showing greater responsiveness to their enterprise customers.
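To make that workflow concrete, here is a minimal sketch, in Python, of what “pushing a few buttons” might look like behind the scenes: the operator asks an NFV orchestrator to load a virtual function onto the interface device already installed at the customer site. The orchestrator URL, endpoint path, and payload fields are hypothetical placeholders, not any specific vendor's API.

```python
# Sketch of the NFV workflow described above: instead of shipping new hardware,
# the operator asks an orchestrator to push a virtual function onto the CPE
# already installed at the enterprise site. All names below are hypothetical.
import requests

ORCHESTRATOR = "https://nfv-orchestrator.example.com"  # hypothetical endpoint

def deploy_vnf(site_id: str, vnf_type: str) -> dict:
    """Request deployment of a virtual network function (e.g. 'firewall',
    'dpi') onto the existing interface device at a customer site."""
    payload = {"site": site_id, "function": vnf_type}
    resp = requests.post(
        f"{ORCHESTRATOR}/v1/sites/{site_id}/vnfs", json=payload, timeout=10
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"status": "deploying", "vnf": "firewall"}

if __name__ == "__main__":
    # Turning up a firewall for an enterprise customer without a truck roll.
    print(deploy_vnf(site_id="enterprise-042", vnf_type="firewall"))
```

The point is not the specific API, but that the new feature becomes a software download rather than a site visit.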

SDN is, at its core, a way to make the network more flexible and application aware. Applications generally operate at the edge of the network, while the core is concerned with moving bulk bits around efficiently. It makes sense, then, that application awareness – and thus SDN – would be most useful at the edge. Companies like Sandvine, Allot, MRV, and RAD have already shown examples of just how SDN can optimize the edge of the network.

For example, at the edge of the network a customer may want to prioritize certain applications over others so that their scarce resource (bandwidth) is used optimally. Alternatively, network providers may use SDN to keep edge traffic from being hauled into the core when its final destination is also at the edge. Both of these applications can be enabled by a network that is flexible, software-definable, and intelligent enough to recognize what the bits at the edge are being used for.
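As an illustration, both of these edge policies can be expressed as simple match/action flow rules pushed down by an SDN controller. The rule format and the install_rule() helper below are abstractions for the sketch rather than a particular controller's API, and the port numbers and subnets are made up.

```python
# Abstract sketch of the two edge policies described above, expressed as
# OpenFlow-style match/action rules. The rule format and install_rule() are
# illustrative placeholders, not a specific controller's API.

def prioritize_app(tcp_port: int, queue_id: int) -> dict:
    """Give a latency-sensitive application (identified here simply by its
    TCP port) a higher-priority queue on the scarce access link."""
    return {
        "match": {"ip_proto": "tcp", "tcp_dst": tcp_port},
        "actions": [{"set_queue": queue_id}, {"output": "uplink"}],
        "priority": 200,
    }

def keep_local(dst_subnet: str, local_port: str) -> dict:
    """Switch traffic destined for another edge site directly out a local
    port instead of hauling it to the core and back."""
    return {
        "match": {"ipv4_dst": dst_subnet},
        "actions": [{"output": local_port}],
        "priority": 300,
    }

def install_rule(switch_id: str, rule: dict) -> None:
    # Placeholder: a real deployment would push this via the controller's
    # southbound protocol (e.g. OpenFlow) or a REST API.
    print(f"installing on {switch_id}: {rule}")

if __name__ == "__main__":
    install_rule("edge-switch-1", prioritize_app(tcp_port=5061, queue_id=1))  # e.g. VoIP
    install_rule("edge-switch-1", keep_local("10.20.0.0/16", local_port="site-b-link"))
```

The intelligence lives in software at the controller; the edge device only needs to be programmable enough to accept and act on the rules.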

The biggest obstacle to widespread deployment of SDN and NFV at the edge of the network is scale. Any equipment deployed at the edge is deployed in significantly larger numbers than equipment in the core. Whereas the cost of additional processing power can be justified in core elements, any additional cost at the edge is immediately multiplied across a large number of elements.

In the original proposal for software-defined networks, the concept was floated that a future network would be composed entirely of “white boxes” – equipment that has no inherent capabilities, but can be programmed to operate like any one of many different types of hardware. The basic component of a white box is a programmable high-speed processor. Unfortunately, purpose-built chips will always be less expensive and offer higher capabilities (e.g. faster packet processing) than generic high-speed processors, which is why most vendors still do not offer true “white boxes”. The few that have been demonstrated are significantly more expensive than standard network interface devices – often 10x more or greater – which has meant that they can only be deployed in specific use cases.

In the example above, the customer who does not own their own facilities is willing to pay several thousand dollars for a network interface that is fully flexible. Even if the equipment is larger, uses more power, and costs much more to buy, the cost savings and service differentiation offered by such a system justify the expense. For a large, facilities-based carrier (e.g. AT&T, FT, NTT), adding hundreds or thousands of dollars to the cost of a customer interface is not tenable, and white boxes at the edge will likely never make sense.

This tradeoff has led to a variety of interesting implementations in the customer premises equipment (CPE) market – from near-white-box approaches to all-ASIC approaches to modular systems and hybrids that choose some features from each side. For now there is not a clear winner in this race, which is good news for network operators who are offered an extensive menu of options to choose from.

Network operators should pick a business model for the edge of their network that makes sense, keeping in mind a future world of flexible, software-defined networks, and then look for a vendor that can meet their needs. There are too many options available at this point to do it the other way around: choosing a vendor first and letting that choice dictate the strategy.