Networking in SC January 2012


Networking Upgrade - Abridged

When completed in early January 2012, the College network became ten times faster (1 Gb/s to the desktop, 10 Gb/s to the building, 40 Gb/s at the core) with new redundant core electronics, new redundant fiber optic building cabling, uninterruptible power from the emergency generator, an improved feature set, and enhanced security.

Networking Upgrade - Unabridged

2011 was an eventful year for Engineering College networking.  After several years of planning, extensive vendor technical presentations, and many hours of hands-on testing, we will complete our $1.3M (list) networking upgrade over the Winter break on 4-6 January.  The upgrade involved replacing all networking electronics and fiber optic cabling in the Seamans Center and ERF buildings.  By clever use of redundant connections and careful staging, we were able to carry out many of the hardware and cable replacements without any network disruption.  For example, last summer we replaced all of the SC fiber optic cable with higher quality OM4 cabling, to accommodate the higher 10 Gb/s transmission speeds, without any disruption of service (and without users knowing that a swap was going on).

Unfortunately, the network hardware (called a switch) that we will replace in January is not redundant.  This is the hardware at the other end of the wall jack that building computers connect to directly when they are on the network.  Because these switches are not redundant, replacing them requires downtime.  We were also able to replace some 500 server room cables without a service disruption.  Because physical circumstances made some closets simpler to work on, we have already upgraded the west wing, ground floor, and fifth floor; the downtime to upgrade the first floor occurred at the end of the summer.

The new Juniper network switches have many interesting features, a few of which I will discuss.  The most obvious is that instead of a large chassis with several plug-in power supplies and many plug-in network cards, the Juniper switches are virtual chassis assembled from up to ten self-contained switches, where each physical 48-port switch becomes a blade of the virtual chassis.  Each blade is interconnected via dual PCI buses to the two switches logically next to it in the physical stack, so two communications failures are required to isolate any given blade of a virtual chassis.  Another interesting feature is that the ports are 1 Gb/s, ten times faster than the 100 Mb/s network ports being replaced.  There are two 10 Gb/s uplinks per virtual chassis, configured as a trunk connecting to two different blades.  A network trunk is a grouping of two or more network cables between two network devices.  Once a trunk is configured, network traffic actively uses all of the available cables in the trunk, effectively increasing the bandwidth (the amount of network traffic per unit time).  A trunk also provides redundancy: as long as at least one cable is operational, trunk communications remain viable.
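To make the trunk arithmetic concrete, here is a small Python sketch of the behavior described above.  It is purely illustrative (the speeds simply mirror the two 10 Gb/s uplinks); it is not taken from any switch configuration.

    # Illustrative sketch: a trunk (link aggregation) uses every working cable,
    # and stays up as long as at least one cable is up.

    def trunk_bandwidth_gbps(cable_speeds_gbps, cable_up):
        """Effective bandwidth: the sum of the speeds of the operational cables."""
        return sum(speed for speed, up in zip(cable_speeds_gbps, cable_up) if up)

    def trunk_is_viable(cable_up):
        """The trunk keeps passing traffic while any one of its cables is up."""
        return any(cable_up)

    # Two 10 Gb/s uplinks per virtual chassis, configured as one trunk.
    speeds = [10, 10]

    print(trunk_bandwidth_gbps(speeds, [True, True]))   # 20 Gb/s with both cables up
    print(trunk_bandwidth_gbps(speeds, [True, False]))  # 10 Gb/s after one cable (or blade) fails
    print(trunk_is_viable([True, False]))               # True -- the trunk is still viable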

To accommodate the new switch on each floor we had an electrician bring in two new 208 VAC circuits per data closet.  One AC circuit feeds an Uninterruptible Power Supply (UPS) and the other circuit provides building power directly.  The UPS units have batteries and AC inverters that supply 208 VAC power during the interval between building power going out and the emergency generator starting up.  In these closets each switch has two power supplies but requires only one to operate, so the power supplies in each blade of these virtual chassis are redundant.  The UPS and building power circuits feed outlet-switchable, monitored Power Distribution Units (PDUs).  This enables us to remotely monitor and control the AC power to each individual power supply in every piece of network gear.  Any problems or abnormalities are reported via email and monitoring software, with critical alarms that call support staff mobile phones.
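The Python sketch below shows the general shape of that kind of monitoring.  The PDU names, outlet names, and email addresses are hypothetical placeholders, and read_outlet_status() is left as a stub; a real deployment would query the PDUs' own management interfaces and use the site's actual mail relay.

    # Hedged sketch of PDU monitoring: poll each outlet and email support staff
    # if a power feed is lost.  All names and addresses below are placeholders.
    import smtplib
    from email.message import EmailMessage

    PDUS = {
        "pdu-closet-2e": ["switch-2e-psu0", "switch-2e-psu1"],   # hypothetical names
        "pdu-closet-3e": ["switch-3e-psu0", "switch-3e-psu1"],
    }

    def read_outlet_status(pdu, outlet):
        """Placeholder: return True if the outlet is delivering power.
        A real implementation would query the PDU's management interface."""
        raise NotImplementedError

    def alert(pdu, outlet):
        """Email a critical alarm for a dead power feed."""
        msg = EmailMessage()
        msg["Subject"] = f"CRITICAL: {pdu} outlet {outlet} lost power"
        msg["From"] = "netmon@example.edu"          # placeholder addresses
        msg["To"] = "support-staff@example.edu"
        msg.set_content("One redundant power feed is down; the switch is on its other supply.")
        with smtplib.SMTP("mailhub.example.edu") as smtp:
            smtp.send_message(msg)

    def check_all():
        for pdu, outlets in PDUS.items():
            for outlet in outlets:
                if not read_outlet_status(pdu, outlet):
                    alert(pdu, outlet)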

The building network design is a star configuration using a single, central switch with a 20 Gb/s trunk to each data closet switch.  Long-haul, single-mode, fiber-optic cables support 10 Gb/s networking to SHL and 20 Gb/s networking to the data closet in ERF.  1 Gb/s networking supports the HWTA & HLMA buildings on the river and the HOA1, HOA2, & HWBF buildings on the Oakdale campus.  We also have a 20 Gb/s network connection to a business continuity / disaster recovery site.  The central switch also has a 20 Gb/s trunk to the Engineering College router and a 40 Gb/s trunk to the server room.  The central building router and switch are obvious critical network components, as are the server room network switch and the server room Storage Area Network (SAN) switch.  Because of the stacked nature of these virtual chassis, however, we were able to split each trunk's cables across two different blades, so that the loss of any one blade cuts the bandwidth in half but networking is maintained.
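As a simple illustration of that last point, the Python sketch below models a closet's 20 Gb/s trunk as two 10 Gb/s cables landing on two different blades.  The closet and blade names are made up for the example and do not reflect the actual building layout.

    # Each trunk cable is (closet, blade the cable lands on, speed in Gb/s).
    TRUNK_CABLES = [
        ("closet-A", "closet-A/blade-0", 10),
        ("closet-A", "closet-A/blade-1", 10),
        ("closet-B", "closet-B/blade-0", 10),
        ("closet-B", "closet-B/blade-1", 10),
    ]

    def closet_bandwidth(closet, failed_blade=None):
        """Remaining trunk bandwidth to a closet after an optional blade failure."""
        return sum(speed for c, blade, speed in TRUNK_CABLES
                   if c == closet and blade != failed_blade)

    print(closet_bandwidth("closet-A"))                      # 20 Gb/s with all blades healthy
    print(closet_bandwidth("closet-A", "closet-A/blade-0"))  # 10 Gb/s, but the closet stays connected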

Note how the trunked interconnections and redundant hardware keep the network viable even if one of the components (the blue boxes in the graphic) is removed.  Although the graphic does not show it, the servers and the storage also have features that make them more robust.  For example, several physical servers host multiple virtual servers that can be configured to automatically change hosts during a crisis.

One exciting feature of the new network switch hardware is the switches' ability to do certificate-based network access control (NAC).  We currently use each computer's network identity to authorize its connection to a specific network port, and this identification must be changed whenever a computer is moved to another port in the building.  Once we can implement NAC, we will issue a certificate (a signed and dated security token) to a computer or user.  The certificate can then be used to dynamically assign the proper configuration to whatever port the computer is plugged into, so anyone could move a computer to any port without contacting us.
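The sketch below is one way to picture how that could work: a valid, unexpired certificate maps to the network configuration the port should receive, no matter which port it is.  The certificate subjects, VLAN names, and port names are hypothetical; this is a conceptual illustration, not a description of the switches' actual NAC implementation.

    from datetime import date

    # Hypothetical mapping from certificate subject to network configuration.
    CERT_TO_VLAN = {
        "CN=research-workstation-42": "research-vlan",
        "CN=office-desktop-07": "staff-vlan",
    }

    def authorize(port, cert_subject, cert_expires, today):
        """Decide what configuration a port receives for this certificate.
        The answer does not depend on which port the computer used."""
        if cert_expires < today:
            return "guest-vlan"                   # expired token: restricted access
        return CERT_TO_VLAN.get(cert_subject, "guest-vlan")

    # The same computer gets the same network no matter which wall jack it uses.
    print(authorize("port-12", "CN=research-workstation-42", date(2013, 1, 1), date(2012, 1, 6)))
    print(authorize("port-33", "CN=research-workstation-42", date(2013, 1, 1), date(2012, 1, 6)))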

January 2012