It is still early days for software-defined networking (SDN), so there aren't many users around to share their experiences and expectations, but there are a few. Network World's editor in chief tracked down Steve Wallace, executive director of InCNTRE, the Indiana Center for Network Translational Research and Education at Indiana University, which is already using the technology in a production environment. The school is also playing a role in the technology's evolution.
NW: How did IU get started in SDN?
WALLACE: Indiana University supports some of the world's most advanced network infrastructure, such as the Internet2 100Gbps-per-wave, 15,000-mile fiber backbone, through its Global Network Operations Center, or GlobalNOC. So it was natural to be involved in the early phases of something called GENI, the Global Environment for Network Innovations, an NSF-funded project to support research on outside-of-the-box network technologies and approaches. IU has been successful at applying for and receiving GENI awards that allow us to accelerate our exploration of SDN and OpenFlow.
IU also maintains good relationships with vendors that cater to large campuses and service providers, the same vendors that began to show an early interest in SDN and OpenFlow. We added that all up and saw a need for vendors to be able to conduct interoperability testing of SDN products and, with sponsorship from some of the vendors, developed what we call the SDN Interoperability Lab. It's a membership-based consortium of companies like IBM, Cisco, HP ... there are about a dozen members. They pay an annual fee and we provide a facility where they can test their products' interoperability with other SDN products.
Indiana University is also using SDN in its networks. For example, Internet2 recently received a $60 million grant to build out a new nationwide fiber optic network. They've leased 15,000 miles of fiber, purchased state-of-the-art optronics to light the fiber, and the Layer 2 equipment that sits on top of that consists of Brocade and Juniper switches controlled via OpenFlow. Indiana University developed the controller software that provisions services over that network using OpenFlow.
So that's a production network that spans the country, and the provisioning is done solely through the use of an OpenFlow application that was developed at Indiana University.
How about in your campus net?
Indiana University has more than 10Gbps of Internet capacity. Our security group needs to examine that traffic for threats. But intrusion detection systems typically don't run at those speeds, so they need to take that 10 gigabits of traffic, divide it into smaller, coherent chunks and forward them to individual IDS systems. You don't want to split a conversation, so you need to make sure each chunk directed to an individual IDS contains only complete conversations.
So they needed an intelligent load balancer to dynamically balance traffic among the IDS systems. There are devices you can purchase to do that, but they tend to cost in the $100,000-$200,000 price range. We saw this was an obvious use case for SDN and OpenFlow, so we hired a couple of grad students to develop software for an OpenFlow controller that instructs a $40,000 OpenFlow-enabled switch to do the load balancing. This is something we use in our production network.
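The core requirement Wallace describes, keeping both directions of a conversation glued to the same IDS, can be sketched in a few lines. The following Python is an illustrative sketch, not IU's actual controller code: it shows the hashing decision an OpenFlow controller would make before installing a flow rule; the port numbers and addresses are hypothetical.

```python
# Sketch of conversation-preserving IDS load balancing (hypothetical code,
# not IU's controller). A "conversation" is identified by its 5-tuple; by
# hashing a direction-independent key, both directions of a flow map to the
# same IDS output port.

import hashlib

def conversation_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Canonical, direction-independent identifier for a conversation."""
    endpoints = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    return f"{endpoints[0]}|{endpoints[1]}|{proto}"

def choose_ids_port(key, ids_ports):
    """Deterministically map a conversation to one IDS output port."""
    digest = hashlib.sha256(key.encode()).digest()
    return ids_ports[int.from_bytes(digest[:4], "big") % len(ids_ports)]

# On each unmatched packet-in, the controller would compute the port and
# install a flow rule so subsequent packets are switched in hardware.
ids_ports = [1, 2, 3, 4]  # hypothetical switch ports, one per IDS box
fwd = choose_ids_port(conversation_key("10.0.0.5", "192.0.2.9", 51000, 443, "tcp"), ids_ports)
rev = choose_ids_port(conversation_key("192.0.2.9", "10.0.0.5", 443, 51000, "tcp"), ids_ports)
assert fwd == rev  # both directions of the conversation reach the same IDS
```

In a real deployment this logic would run in the controller's packet-in handler, with the chosen port installed as a flow-table entry so the $40,000 switch does the actual forwarding at line rate.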
Will you ultimately use SDN in your broader IU campus environment?
Let me outline that environment first. We have eight campuses, two of which are very large. The largest is Bloomington, which has a student population of about 30,000, 10,000 of which live on campus, and we have a network that provides students, faculty and staff high-speed access to the Internet and resources on campus. We have roughly 400 buildings in Bloomington and our own fiber plant interconnecting those buildings, and a large wireless deployment with close to 5,000 access points.
What a lot of people don't realize is that higher education has many of the same security and compliance requirements as commercial organizations, plus many other issues to deal with. For example, we have a health clinic on campus, so we have information that is covered by HIPAA (the Health Insurance Portability and Accountability Act). We have student data, and there are federal regulations -- predominantly one called FERPA (the Family Educational Rights and Privacy Act) -- that require that we provide a certain level of protection for the student data. And then we take credit cards at the bookstore and other places and need to support credit card industry security standards. So we have a mix of things that require a level of security and accountability equivalent to many commercial organizations.
On top of that we have bring your own device on a scale that few other organizations see. For example, every semester we have to accommodate tens of thousands of new devices. Students typically show up with a phone, a tablet or laptop, one or two game machines, maybe a television set with an Ethernet jack. So it's a huge number of bring your own devices.
We have automated systems in place that register new devices, that ensure the users operating the devices agree to certain terms, acknowledge their responsibilities for using the network, ensure their machines are relatively secure, and ensure we can provide integrity for all the data we're responsible for.
In addition, at universities it's common for the faculty, the staff and the students to have full system administration rights to their devices. So, unlike a company which might have an IT organization that centrally manages the desktop, universities typically don't behave that way. It's typical that a faculty member will get a research grant and part of the funding in that grant allows them to get a particular laptop to support that research and they order whatever they want and they are the system administrator on that laptop. So the campus has to provide a sort of security manifold that sits over those kinds of uses while preserving the integrity of the network.
And then, of course, there is the data center network, where we have systems in a virtual machine environment that provide for business process automation, things like registration and accounting, and then on the other side there are machines dedicated to supporting computational-intensive research.
So we have all of those things and SDN can play a role in many parts of the environment. In virtualized parts of the data center, for example, the university provides what is essentially a multi-tenant environment, so different departments might have VMs running applications that are specific to that department. So the university has to support multi-tenant isolation, it has to enforce the appropriate policies and it has to be efficient. People need to be able to grow and shrink their resource consumption as required, build out their applications, that kind of thing. So when you look at where SDN is really taking off now, that's a natural sweet spot. You start to look at network as a service as it's presented in OpenStack, start to get this environment that makes it easy to provide the multi-tenant isolation, provide for VM migration and do this using SDN technology.
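The multi-tenant isolation Wallace describes reduces to a simple policy decision the controller makes for each flow. Here is a minimal, hypothetical sketch (the VM names and department mapping are invented for illustration) of that decision: forwarding is permitted only between endpoints owned by the same tenant.

```python
# Hypothetical sketch of a controller's multi-tenant isolation check:
# a flow rule is installed only when source and destination VMs belong
# to the same tenant (department). Names below are illustrative.

tenant_of = {
    "vm-math-01": "math",
    "vm-math-02": "math",
    "vm-hr-01": "hr",
}

def allow_flow(src_vm, dst_vm):
    """Permit forwarding only for intra-tenant traffic."""
    src_t, dst_t = tenant_of.get(src_vm), tenant_of.get(dst_vm)
    return src_t is not None and src_t == dst_t

# Two math-department VMs may talk; math and HR may not.
assert allow_flow("vm-math-01", "vm-math-02")
assert not allow_flow("vm-math-01", "vm-hr-01")
```

Because the mapping lives in software rather than in per-switch VLAN configuration, a VM can migrate to another host and the controller simply re-derives the same policy at its new location, which is the property that makes SDN a natural fit here.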
How will you get there from here? Will it be by finding niche applications for SDN like the IDS example you described?
Yes, I think we'll do more of that. For example, another niche use case is instructors who would like to show the contents of their iPad on the screen in a classroom. They do this at home using Apple TV, and they expect, reasonably so, that they should be able to do this on their high-tech campus. Well, it turns out it's more challenging to do on campus. At home the iPad and the Apple TV are on a single network, and in a campus environment that's usually not the case. We have some ways to solve this problem using OpenFlow, and it would require a little bit of OpenFlow in the access portion of the network. Technically it would mean putting at least one OpenFlow switch in the broadcast domain of the networks that serve our classrooms. So that's a more incremental rollout to solve a niche use case.
We'll continue to explore niche problems, and at some point in time we would like to have a plan for an all-SDN OpenFlow campus network. We've been keeping in touch with the folks that make controllers to understand how their feature sets are evolving and trying to help them understand the kind of features we see as being required.
Most of the products that support OpenFlow today kind of back-ported that support into existing designs, and this year we're seeing products where OpenFlow was thought about much earlier in the design process, and so the products are more capable. You also see chip designers like Broadcom making advances, improving support for OpenFlow and helping inform the folks that work on the OpenFlow standards on how the abstraction layer can evolve in a way that is reasonably easy for them to implement.
So, for example, when OpenFlow went from 1.0 to 1.1, it changed quite a bit, adding support for more than one flow table and the notion of a packet-processing pipeline. I think those enhancements to the stack were informed by the folks that make the silicon that forwards packets. They helped the standards folks understand how the abstraction could be expanded in a way that could be implemented efficiently in silicon while still being useful for implementing network features. All of those things are happening.
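The multi-table pipeline Wallace refers to can be illustrated with a toy model. This is not an OpenFlow implementation, just a sketch of the concept: a packet enters table 0, each table's matching entry accumulates actions, and a goto-table instruction hands the packet to a later table until one stops the pipeline. The table contents and action names below are invented for illustration.

```python
# Toy model of the OpenFlow 1.1 multi-table pipeline (illustrative only).
# Each table entry is (match-dict, actions, goto-table); goto of None means
# stop and apply the accumulated actions.

def run_pipeline(tables, packet):
    """Walk a packet through flow tables starting at table 0."""
    actions, table_id = [], 0
    while table_id is not None:
        next_table = None
        for match, acts, goto in tables[table_id]:
            if all(packet.get(k) == v for k, v in match.items()):
                actions.extend(acts)
                next_table = goto
                break
        table_id = next_table
    return actions

# Table 0 classifies by VLAN and jumps to table 1; table 1 picks the port.
# An empty match dict acts as a catch-all (lowest-priority) entry.
tables = {
    0: [({"vlan": 10}, ["push_tag"], 1), ({}, ["drop"], None)],
    1: [({"dst": "10.0.0.5"}, ["output:3"], None), ({}, ["output:flood"], None)],
}
print(run_pipeline(tables, {"vlan": 10, "dst": "10.0.0.5"}))
# Actions accumulate across both tables: ['push_tag', 'output:3']
```

Splitting classification and forwarding across tables like this is what lets hardware avoid the rule explosion of a single flat table, which is why the silicon vendors pushed for it.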
It's not crystal clear when the right time to make the jump to requiring a certain level of OpenFlow support in switches will be. For example, we could say the next round of equipment we're buying has to support a certain set of features in OpenFlow, and if only one company could meet that requirement, then it really narrows our options in what we purchase. We want to wait until there's some choice. That will probably be within a year, maybe a year and a half.
Is there any doubt in your mind that within X number of years you will reach this kind of SDN nirvana, or might the whole movement fall apart at some point because of problems that we haven't perceived yet?
I think that in the large data center space there's no question SDN has already taken off. It's here to stay. And the notion of a centralized controller configuring all devices as a network, not configuring them device-by-device -- it is hard for me to imagine that not happening. I think OpenFlow is a really good start, but there need to be improvements in the abstraction it provides, and there needs to be some market stability for people who want to write applications. When you look at northbound APIs out of controllers and at how controllers support applications, there's a lot of uncertainty right now.
Any closing thoughts?
I recently sat down with Big Switch and asked them what models of different vendor switches would work best with their controller, and the response was, "We're real excited about the Trident II chipset from Broadcom." That was interesting to me because I know how our virtual machine guys approach upgrades. When they know they're going to be buying new servers for the virtual server environment, they'll ask the hypervisor supplier what chipsets best support the features we're going to be using. And then our guys will go look for the best packaging of that chipset architecture. And best packaging might include price, might include support, it might include the size of the servers, all kinds of things. But they'll lead with which chip architecture they need. And I think it's possible that that will become a more important component in folks' hardware selection for network switching devices. There are more questions to ask, but that may become a different kind of conversation and purchasing process.