My First OpenDaylight

Over the last few days I’ve started to play with the OpenDaylight Test VM Image. The image was easy to get up and running and provides a ready-made playground: mininet plus a pre-baked OpenDaylight (ODL) controller. After deploying the OVA file in VirtualBox and poking around the file system, I got down to “business” with getting a test topology in place. I made some changes to the initial mininet startup configuration to make the topology more complex, changing the startup command to look like the following:

sudo mn --controller 'remote,ip=127.0.0.1,port=6633' --topo tree,3
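For the curious, the same topology can also be brought up directly with mininet’s Python API instead of the mn wrapper. This is only a rough sketch; it assumes the Test VM’s mininet exposes the standard Python API and that the controller is listening locally on 6633.

# Rough Python-API equivalent of the mn command above.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topolib import TreeTopo
from mininet.cli import CLI

# Depth-3 tree with the default fanout of 2: 8 hosts, 7 switches.
topo = TreeTopo(depth=3, fanout=2)

net = Mininet(topo=topo,
              controller=lambda name: RemoteController(name, ip='127.0.0.1',
                                                       port=6633))
net.start()
CLI(net)   # drops you at the mininet> prompt for pingall and friends
net.stop()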

This yielded a topology of 8 hosts and 7 switches. At one point I had 63 hosts and some number of switches, and things broke pretty hard, so I dialed it back a bit. I went over to the web UI for the controller and, after some fiddling, set Names and Tiers on the switches. My test topology in the ODL console is shown in the following screenshot.
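For reference, a mininet tree topology with fanout f and depth d works out to f^d hosts and 1 + f + … + f^(d-1) switches, so tree,3 with the default fanout of 2 is the 8/7 split above. A quick back-of-the-envelope check in plain Python:

# Hosts and switches in a mininet-style tree topology
# (hosts hang off the leaf switches).
def tree_size(depth, fanout=2):
    hosts = fanout ** depth
    switches = sum(fanout ** level for level in range(depth))
    return hosts, switches

print(tree_size(3))   # (8, 7)
print(tree_size(6))   # (64, 63)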

ODL Home

I also had full reachability from all of the mininet hosts.

mininet> pingall
*** Ping: testing ping reachability
h1 > h2 h3 h4 h5 h6 h7 h8
h2 > h1 h3 h4 h5 h6 h7 h8
h3 > h1 h2 h4 h5 h6 h7 h8
h4 > h1 h2 h3 h5 h6 h7 h8
h5 > h1 h2 h3 h4 h6 h7 h8
h6 > h1 h2 h3 h4 h5 h7 h8
h7 > h1 h2 h3 h4 h5 h6 h8
h8 > h1 h2 h3 h4 h5 h6 h7
*** Results: 0% dropped (56/56 received)
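The web UI aside, the controller’s northbound REST interface gives another way to confirm what it has discovered. Below is a minimal sketch with Python and the requests library; the switchmanager path, the JSON shape, and the admin/admin credentials are assumptions based on the AD-SAL northbound API of that era, so treat it as a starting point rather than gospel.

# List the switches the controller knows about. Endpoint path, JSON shape
# and credentials are assumptions (AD-SAL switchmanager, admin/admin).
import requests

BASE = 'http://127.0.0.1:8080/controller/nb/v2'
AUTH = ('admin', 'admin')

resp = requests.get(BASE + '/switchmanager/default/nodes',
                    auth=AUTH, headers={'Accept': 'application/json'})
resp.raise_for_status()

# Assumed shape: {"nodeProperties": [{"node": {"id": ..., "type": "OF"}, ...}]}
for entry in resp.json().get('nodeProperties', []):
    print(entry['node']['type'], entry['node']['id'])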

Now that I had things working, it was time to find ways to break them. Diving into the flow rules, I threw together a basic Drop rule on one of the transit links.

Flow Rule Split Network

As expected, the network was split in two.

mininet> pingall
*** Ping: testing ping reachability
h1 > h2 h3 h4 X X X X
h2 > h1 h3 h4 X X X X
h3 > h1 h2 h4 X X X X
h4 > h1 h2 h3 X X X X
h5 > X X X X h6 h7 h8
h6 > X X X X h5 h7 h8
h7 > X X X X h5 h6 h8
h8 > X X X X h5 h6 h7
*** Results: 57% dropped (24/56 received)

Let’s see about black-holing a single host now.

Drop H1

This drops all traffic from the host connected to port 1 on the switch, which happens to be h1.

mininet> pingall
*** Ping: testing ping reachability
h1 > X X X X X X X
h2 > X h3 h4 h5 h6 h7 h8
h3 > X h2 h4 h5 h6 h7 h8
h4 > X h2 h3 h5 h6 h7 h8
h5 > X h2 h3 h4 h6 h7 h8
h6 > X h2 h3 h4 h5 h7 h8
h7 > X h2 h3 h4 h5 h6 h8
h8 > X h2 h3 h4 h5 h6 h7
*** Results: 25% dropped (42/56 received)

OpenDaylight has always piqued my interest. I’ve been trying to follow the mailing lists and some of the discussions out there, and the Test VM is a nice way to start to get under the hood. I have a lot more to learn, and there are a ton of other plugins to explore, not to mention the API itself and writing some code against it.
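For example, the same kind of static drop flow used above could presumably be pushed over the northbound REST API instead of clicking through the web UI. Again, only a sketch: the flowprogrammer endpoint, the JSON field names, and the switch ID are assumptions drawn from the AD-SAL API documentation, not something I have verified against this exact build.

# Push a static drop flow to one switch. The flowprogrammer endpoint,
# field names and switch ID are assumptions, not verified on this build.
import json
import requests

BASE = 'http://127.0.0.1:8080/controller/nb/v2/flowprogrammer/default'
AUTH = ('admin', 'admin')                 # Test VM defaults
NODE = '00:00:00:00:00:00:00:01'          # hypothetical OpenFlow datapath ID
FLOW = 'drop-h1'                          # flow names cannot contain spaces

flow = {
    'installInHw': 'true',
    'name': FLOW,
    'node': {'id': NODE, 'type': 'OF'},
    'ingressPort': '1',                   # everything arriving on port 1 (h1)
    'priority': '500',
    'actions': ['DROP'],
}

resp = requests.put('{0}/node/OF/{1}/staticFlow/{2}'.format(BASE, NODE, FLOW),
                    data=json.dumps(flow),
                    headers={'Content-Type': 'application/json'},
                    auth=AUTH)
print(resp.status_code, resp.text)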

Notes

  1. If you do not set switch roles properly, end hosts may not show up on the topology.

  2. Flow rule names cannot have spaces in them.

  3. The controller had the Access switches properly classified by Tier; however, the transit switches were not set to either Distribution or Core.

Cisco Live 2013 – My (late) wrap-up

This past month I attended Cisco Live in Orlando, FL with 20,000(?) of my fellow Network/Collaboration/Service Provider/Data Center engineers from all around the world. This was my first time attending, and I had a blast! There are a few themes that were big topics at Cisco Live that I won’t talk much about in this post. One is the Internet of Everything (IoE), since it is already well covered and, well, is really just Market-ecture-tastic. Another is new gear like the Catalyst 6800, the Nexus 7700, and new ASICs, all of which are neat, powerful, and will enable a lot of future technologies, but Better, Faster, Stronger hardware comes along all the time. In the end, SDN/Network Virtualization was, for me, the most discussed topic throughout all of the network-centric sessions and “hallway” conversations during the entire week.

CLUS 2013 Schedule

I was able to attend many great sessions, but even with a packed schedule I still wanted to be in two places at once most of the time. There were a few standout sessions, including “BRKRST-3114 The Art of Network Architecture” and “BRKRST-3045 LISP – A Next Generation Networking Architecture”. “The Art of Network Architecture” was a very business-forward discussion of network architecture, and I believe it attempted to change the conversation around designing a network. “LISP – A Next Generation Networking Architecture”, on the other hand, got me excited about LISP in a way I had not been before. All the previous information I had read about LISP left me wanting a tangible use case; this presentation started to describe some good ones, though I am still left wanting more widespread production implementations.

CLUS Tweetup

Another great event I attended was the Tweetup organized by Tom Hollingsworth. I met a lot of people there whom I follow on Twitter, and it was nice to put a face with a Twitter handle and have some good conversations about networking and, well, just about anything else.


When listening to the discussions and presentations, a few trends and themes struck me. First, there is a trend toward the flat network; when I look at the fabric technologies or the affinity networking coming out of Plexxi, or potentially Insieme, it all puts a large exclamation point on the need to move to IPv6, or at least implement dual stack, sooner rather than later. That will be key to the success of these technologies in the data center. Next, there was a constant argument about the death of the CLI and whether the GUI will reign supreme. I believe both CLI users and GUI users can be accommodated; both types of interfaces can be used to manipulate the same back-end software and logic. An example of this is tail-f NCS, which has both; while not “SDN” by some definitions, it is an example of the two UIs co-existing. The real argument that needs to be had concerns the design of the systems needed to support the applications.

This one is more of a rant and less of a theme, but I still think Cisco is missing the mark with the ASA 1000v. I think virtualized physical appliances are a transitional technology, but a needed one. Creating the ASA 1000v without the full feature set of its physical counterpart, with no roadmap (as far as I can tell) to add those features, along with the insane licensing scheme of a per-protected-socket model, does not make sense to me. This all short-changes the IaaS provider market, and IMHO it should be licensed and operated like the CSR 1000v: full features and per-appliance licensing.

Overall, I was left with two general questions from the week. First, I’m curious how the balance of systemic complexity vs. configuration complexity vs. structural complexity will fall as the overlay, the “underlay”, and the SDN glue that holds it all together settle into place. Each new technology that is introduced seems to address one of these complexity problems but not all three in one fell swoop; that is a larger topic for another post. Second is a recurring theme in technology: everything old is new again. I look at the data center technologies and some of the new IP routing technologies (LISP), and they look a lot like old telephony switching technologies, in the same way VDI looks like mainframe dumb terminals. This is not a critique, just an observation on how important it is to know your past, because it will come back Better, Faster, Stronger, or maybe just the same with a new box around it.