What is SDN?
SDN, or Software Defined Networking, originally described the centralisation of network control, a concept first popularised by the OpenFlow protocol. Rather than having each switch determine the best path for application traffic based on its own narrow view of the network, software-defined networking hands control of application traffic flows to a central controller. With global visibility of the network, this controller can determine the best path for each application, independent of any single device's local view of the topology.
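The idea of a controller with a global view picking the best path can be sketched as a shortest-path computation over the whole topology. The four-switch topology and link costs below are purely illustrative, not taken from any real deployment:

```python
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over the controller's global view of link costs."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    seen = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            break
        for neighbour, cost in topology.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(pq, (nd, neighbour))
    # Walk back from dst to reconstruct the chosen path.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

# Hypothetical four-switch topology: link costs as the controller sees them.
topology = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s4": 2},
    "s3": {"s1": 4, "s4": 1},
    "s4": {"s2": 2, "s3": 1},
}
print(shortest_path(topology, "s1", "s4"))  # ['s1', 's2', 's4']
```

No individual switch here knows the full graph; only the controller does, which is what lets it pick s1→s2→s4 (total cost 3) over the locally-attractive s1→s3 link.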
This protocol runs over the physical underlay network and is therefore still bound by the constraints of that hardware. But as cloud, mobility and big data continue to place new demands on the network, we have also seen a move towards NFV and server virtualisation, which has been the primary driver for SDN in recent years.
As a result, SDN has evolved and now commonly refers to a concept known as overlay networking. Tunnelling protocols, or label-switching technologies such as MPLS, are used to build a virtual network over which traffic is sent. The underlying network is abstracted from the applications and services running on top of it, thereby overcoming many of its usual constraints. Services can be delivered wherever they are needed in the network, independent of the physical topology below. Protocols such as VXLAN, NVGRE and Ethernet VPN (EVPN) are used to achieve this, and popular controllers include VMware NSX and Juniper Contrail.
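To make the overlay idea concrete, here is a minimal sketch of VXLAN encapsulation as defined in RFC 7348: an 8-byte VXLAN header, carrying a 24-bit Virtual Network Identifier (VNI), is prepended to the original Ethernet frame before the result is carried in a UDP datagram across the underlay. The VNI value and inner frame below are dummy data for illustration:

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: the VNI field is valid (RFC 7348)

def vxlan_encap(vni, inner_frame):
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame.

    The result would normally travel inside a UDP datagram
    (destination port 4789) over the physical underlay.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit identifier")
    # Flags (1 byte), reserved (3 bytes), VNI (3 bytes), reserved (1 byte).
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

packet = vxlan_encap(5000, b"\xaa" * 64)  # dummy 64-byte inner frame
assert len(packet) == 8 + 64
assert int.from_bytes(packet[4:7], "big") == 5000
```

Because the underlay only ever sees UDP packets between tunnel endpoints, the virtual networks identified by different VNIs can be created, moved or removed without touching the physical topology.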
So what are the benefits of SDN?
By automating what can often be very manual processes, software-defined networking allows fast and flexible provisioning of network connectivity between devices. Services can be delivered wherever they are needed in the network without ever worrying about a lack of flexibility in the network below.
What’s more, automatic failover can be introduced: should one server fail, a new server is immediately provisioned and traffic is instantly rerouted, adding flexibility and resilience to the network.
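The failover logic itself can be as simple as steering traffic to the first backend that passes a health check. This is a minimal sketch with simulated health state; the server names and the `health_check` callback are hypothetical, standing in for whatever probing a real controller would do:

```python
def choose_backend(backends, health_check):
    """Return the first healthy backend, simulating instant rerouting."""
    for backend in backends:
        if health_check(backend):
            return backend
    raise RuntimeError("no healthy backend available")

# Simulated health state: server-a has just failed.
health = {"server-a": False, "server-b": True}
active = choose_backend(["server-a", "server-b"], lambda b: health[b])
print(active)  # server-b
```

In a real overlay, "rerouting" would mean updating the tunnel endpoint for the service, but the decision logic follows the same shape.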
We also gain greater agility, which translates to greater productivity. New services can be tested and migrated to production environments without the need for any physical changes. Companies can achieve competitive advantage through the speed with which they are now able to provision applications and deliver them to market.
With a universal mechanism for communicating across a wide range of devices, scale can be achieved without any loss of productivity. It’s important to remember, however, that the underlay infrastructure must be robust enough, and have enough bandwidth, to cope with the new demands placed upon it.
Cloud providers were quick to latch on to these benefits, but more and more we are seeing an uptake of the technology in private data centres, something which has been aided by the launch of VMware’s NSX.