SDN is not OpenFlow – a common misconception is that the two are the same. So let's start from the basics – understanding OpenFlow.
Basic Concepts of Network Communication
Networking is done on four planes –
- The Data Plane – also called the user plane; the plane on which the data packets travel. Packet forwarding, packetization, reassembly, multicasting – all happen here. In a company analogy, consider these the customers of an organisation, who generate the demand for its products.
- The Control Plane – This part is not visible to the user; it is added by the system to decide what happens to the user's data, such as the route a packet will take, its priority and so on. It is basically a signalling plane for controlling the data packets generated by the users. In the company analogy, take this as the executive level that interfaces with the customers and gets their product delivered.
- The Management Plane – This is where decisions about the control plane are taken. When the network administrator wants to know the data flow, the data usage and other such management parameters, he gets that data from the routers or switches using specific management commands. This plane is used on an as-needed basis. In the company analogy, take this as the managers, who are required for management purposes, not for execution of a job.
- The Service Plane – This is basically the security plane. Whether it is enabled or disabled depends upon the administrator, and upon the size and vulnerability of the data. The devices working here include UTMs, firewalls, SSL VPN boxes etc. In the company analogy, take these as the board of directors, sitting in for a meeting as and when required.
Now, in conventional hard-wired networking, the CPU in the hardware runs the control logic and handles all the activities of the control plane – deciding paths, maintaining the routing table, finding the shortest path, QoS, rate limiting, VLANs etc. These are the slower activities and are done at a slower pace. On the other hand, packet forwarding and L2 functions are done on the data plane by the data logic, at wire speed – the speed of the backplane.
In OpenFlow, these two logics – the data logic and the control logic – are separated. The data logic and the data plane remain in the switches, while the control logic and the control plane are moved to an external, centralized controller that is linked to the flow of data. OpenFlow is thus the basic protocol for sending the forwarding rules from the controller to the switches. The channel is secured using TLS (Transport Layer Security, the successor to SSL), so that the communication between the controller and the switching (forwarding) element is completely secured.
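The split can be sketched in a few lines of Python. This is purely illustrative – the class and field names below are invented for the sketch, not the real OpenFlow wire protocol – but it captures the division of labour: the controller pushes match/action rules, and the switch only performs table lookups.

```python
# Illustrative sketch of the OpenFlow idea: the controller owns the
# control logic and pushes match/action rules; the switch's data plane
# only does flow-table lookups. Names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class FlowRule:
    match: dict        # header fields to match, e.g. {"in_port": 1}
    actions: list      # what to do on a match, e.g. ["output:2"]
    priority: int = 0  # higher priority wins


@dataclass
class Switch:
    flow_table: list = field(default_factory=list)

    def install(self, rule):
        """Called by the controller (over TLS in real OpenFlow)."""
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: r.priority, reverse=True)

    def forward(self, packet):
        """Data-plane lookup: first matching rule, in priority order, wins."""
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]  # table miss -> punt to the controller


sw = Switch()
sw.install(FlowRule(match={"in_port": 1, "eth_type": 0x0800},
                    actions=["output:2"], priority=10))
print(sw.forward({"in_port": 1, "eth_type": 0x0800}))  # ['output:2']
print(sw.forward({"in_port": 3}))                      # ['send_to_controller']
```

Note the table-miss path: a packet that matches no rule is handed to the controller, which can then compute a decision and install a new rule – exactly the reactive behaviour that makes the switches cheap and the controller critical.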
It is similar to what we did with storage through the concept of SAN, in which localized storage was moved to centralized storage, where it was made safe, secure, replicated, mirrored and highly available, while at the same time reducing the cost of the servers by removing their storage. Likewise, in OpenFlow, the switches, which are required in bulk, become relatively cheap, while the controller – highly secure, safe and redundant, with a NoSPoF architecture – becomes the expensive component.
A familiar concept that most people would be able to relate to is the controller-based Wi-Fi network, in which the controller takes care of management, policies and security, and the Access Points (APs) simply get configured through the controller. All APs are discovered automatically.
If you really want to experience the functions and functionality of OpenFlow, try Open vSwitch. This is a software virtual switch, open-sourced from the Nicira stable, which can run standalone or in a distributed architecture. Visit http://openvswitch.org/ for more. Download it and make your own OpenFlow software switch.
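As a first experiment, Open vSwitch ships with the `ovs-vsctl` and `ovs-ofctl` command-line tools. The sketch below (run as root on a machine with OVS installed; the bridge name and controller address are example values) creates a bridge, points it at an external controller, and installs a flow rule by hand:

```shell
# Create a virtual bridge and attach it to an (example) remote controller
ovs-vsctl add-br br0
ovs-vsctl set-controller br0 tcp:192.168.1.10:6633

# Manually install a flow: forward everything arriving on port 1 to port 2
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"

# Inspect the flow table the switch is actually using
ovs-ofctl dump-flows br0
```

Watching `dump-flows` while a controller programs the bridge is the quickest way to see the controller/switch split described above in action.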
So that was OpenFlow v1.0, formulated in December 2009. The next version was OpenFlow v1.1, of February 2011, in which MPLS capabilities were added to make it more compatible with service-provider networks. Features like table chaining and group tables were introduced, to make OpenFlow friendly to bigger networks, where the forwarding tables and rules grow many fold. Thus it started supporting multipath, MPLS, Q-in-Q, tunnels via virtual ports, etc.
The next upgrade came much faster, by December 2011: OpenFlow v1.2, in which IPv6 support was introduced. Other features added were ICMPv6, IPv6 neighbour discovery, IPv6 flow tables, a Type-Length-Value (TLV) structure for extensible matches, and experimenter extensions through dedicated fields. Then, in April 2012, came OpenFlow v1.3, in which IPv6 was refined further, MPLS matching was extended down to the bottom-of-stack bit, MAC-in-MAC encapsulation was introduced, and capability negotiation was added for backward compatibility with devices supporting only OpenFlow v1.0, along with cookies, durations, meters and so on.
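The TLV idea mentioned above is worth a small sketch. Each match field travels as a (type, length, value) triple, so a parser can skip fields it does not understand. The field type IDs and widths below are made up for illustration – they are not the real OXM layout from the OpenFlow spec.

```python
# Illustrative Type-Length-Value (TLV) encoding, the style of structure
# OpenFlow v1.2 adopted for extensible matches. Type IDs are hypothetical.
import struct

def tlv_encode(fields):
    """Pack {type_id: bytes_value} into a TLV byte string."""
    out = b""
    for t, value in fields.items():
        # 2-byte type, 2-byte length, then the raw value bytes
        out += struct.pack("!HH", t, len(value)) + value
    return out

def tlv_decode(data):
    """Unpack a TLV byte string back into {type_id: bytes_value}."""
    fields, i = {}, 0
    while i < len(data):
        t, length = struct.unpack_from("!HH", data, i)
        i += 4
        fields[t] = data[i:i + length]
        i += length
    return fields

IN_PORT, ETH_TYPE = 0, 5  # hypothetical field type IDs
wire = tlv_encode({IN_PORT: b"\x00\x01", ETH_TYPE: b"\x08\x00"})
assert tlv_decode(wire) == {IN_PORT: b"\x00\x01", ETH_TYPE: b"\x08\x00"}
```

Because each value's length travels with it on the wire, new match fields can be added later without breaking older parsers – which is exactly why v1.2 switched to this structure for matches.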
In October 2013, OpenFlow v1.4 was proposed, which came with optical ports, extended experimenter extensions and certain improvements in type-length-value encodings. We skip OpenFlow v1.3.1 and v1.3.2 here, as they were simple bug-fix releases, not upgrades.
Issues in OpenFlow
- The biggest flaw in the entire OpenFlow system is version upgrades: OpenFlow v1.0 hardware cannot be upgraded to v1.2, v1.3 or v1.4, which is the strangest part of it.
- For each flow, the forwarding device/switch has to check 40+ matching fields before forwarding a packet. Thus flow-level programming takes a lot of time on a large network.
- Before a forwarding device forwards a packet, it needs to search multiple tables, and the instructions and actions for each table are different; this again slows down the communication.
- OpenFlow standardizes only the southbound interface (controller to switch); there is no equally simple, standardized northbound API between the controller and the applications above it.
- Moreover, there is no standard API for overlapping domains between multiple controllers.
- As the entire control plane is controller-based, the controller itself becomes a single point of failure: the whole system collapses if the controller fails. This is yet to be addressed.
- In conventional switching, packet formats other than Ethernet can also be transported through encapsulation, such as Modbus encapsulated over IP; OpenFlow is yet to develop this as well.
- For encrypting a non-encrypted inflow, OpenFlow is yet to find an answer. An example is a firewall/UTM/IDS/IPS at the switching level, which has been the norm in conventional switching.
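The multi-table lookup cost mentioned in the issues above can be sketched as follows. This is an illustrative model, not the real OpenFlow pipeline: each packet walks a chain of tables, and in each table every candidate rule is compared field by field, so lookup work grows with both the number of tables and the number of match fields.

```python
# Illustrative multi-table pipeline lookup, in the style of OpenFlow
# v1.1+ table chaining. Field names and table layout are hypothetical.
MATCH_FIELDS = ["in_port", "eth_src", "eth_dst", "eth_type", "vlan_id",
                "ip_src", "ip_dst", "ip_proto", "tcp_src", "tcp_dst"]

def matches(rule, packet):
    # A rule must agree with the packet on every field it specifies;
    # the real protocol checks 40+ such fields per rule.
    return all(packet.get(f) == rule[f] for f in MATCH_FIELDS if f in rule)

def pipeline_lookup(tables, packet):
    """Walk the tables in order; a 'goto' entry chains to a later table."""
    actions, table_id = [], 0
    while table_id is not None and table_id < len(tables):
        next_table = None
        for rule in tables[table_id]:
            if matches(rule, packet):
                actions += rule.get("actions", [])
                next_table = rule.get("goto")  # chain to another table
                break
        table_id = next_table
    return actions

tables = [
    [{"in_port": 1, "goto": 1}],                        # table 0: ingress
    [{"ip_dst": "10.0.0.2", "actions": ["output:2"]}],  # table 1: routing
]
print(pipeline_lookup(tables, {"in_port": 1, "ip_dst": "10.0.0.2"}))
# ['output:2']
```

Even this toy version makes the cost visible: every extra table in the chain adds another linear scan per packet, which is exactly why large forwarding tables hurt OpenFlow performance.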
For academic purposes, learning and practice, the following OpenFlow controllers are available for free download: NOX, POX, ONOS and OpenVirteX, plus Mininet for emulation.
Chronological references of OpenFlow development
History of OpenFlow & SDN –
- 2006, Martin Casado, a PhD student at Stanford, proposes a decentralized-security architecture, which he calls SANE.
- 2008, the OpenFlow paper, developing the same concept beyond just security to all aspects of the network, is published in ACM SIGCOMM CCR.
- 2009, Stanford publishes OpenFlow v1.0 specifications.
- 2009, Martin Casado co-founds Nicira.
- 2010, Guido Appenzeller from Stanford forms Big Switch Networks.
- 2011, the Open Networking Foundation (ONF), which now stewards OpenFlow, is formed.
- 2011, 1st Open Networking Summit, in which Cisco & Juniper announce their plans to integrate OpenFlow into their product lines. The term SDN gained wide currency around this Summit.
- 2012, VMware buys Nicira for $1.26 billion.
- 2013, Cisco buys Insieme for $863 million.
OpenFlow version updates chronology –
- OpenFlow v1.0 Dec 2009
- OpenFlow v1.1 Feb 2011
- OpenFlow v1.2 Dec 2011
- OpenFlow v1.3 Apr 2012
- OpenFlow v1.3.1 Jun 2012
- OpenFlow v1.3.2 Sep 2012
- OpenFlow v1.4 Oct 2013