Over the last two years or so, I have been on an adventure with Data Centre Infrastructure renewal. As past posts may allude to, ACI was a big part of what we did, but before anyone gets dogmatic about it, know that we didn’t go “all in” on that one product, since I personally don’t subscribe to the “DC fabrics cure all ills” mantra.
Clos fabrics, and the various approaches to overlays within them, are great at providing stable platforms with predictable properties for speed, latency and scale.
ACI brings with it many different constructs for operating networks, some of which have direct analogues in classical networking, and some of which are literally bat-poop crazy.
As per usual, you can find lots of resources on how to structure ACI fabrics elsewhere; I’m not going to waste time on everything you could do, and will instead focus on what I am going to do (roughly).
The image below was unceremoniously stolen from Cisco themselves, from the critical read ACI Fundamentals.
Before I get too wound up I should probably say that all of this was directed to my friends there first, and whilst I won’t say much about their thoughts, I don’t think this is particularly new to them, or out of place.
I have a fondness for ACI. I think it’s innovative, and once you break through the naming conventions and the terminology, it’s exactly what I think the Enterprise should be doing in terms of Next Generation Networking.
Plumbing ACI is something that YouTube has you covered on, so I won’t reinvent that wheel. For the initial standup, I am doing the bare minimum connectivity: each leaf has one 40G uplink to each spine, meaning 80G of North/South bandwidth per leaf. This will double when we are preparing for Production service, matching my UCS/FI bandwidth to each Chassis (4x10G links to each side of my 2208XPs). My 3 APICs are configured as follows:
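For what it’s worth, the uplink arithmetic above can be sketched as a trivial back-of-the-envelope check (the numbers are mine; adjust for your own fabric):

```python
# Back-of-the-envelope check of per-leaf North/South bandwidth.
SPINES = 2                 # 2x Nexus 9336PQ spines
UPLINKS_PER_SPINE = 1      # initial standup: one uplink per spine
UPLINK_SPEED_GBPS = 40     # 40G links

north_south = SPINES * UPLINKS_PER_SPINE * UPLINK_SPEED_GBPS
print(f"Initial per-leaf N/S bandwidth: {north_south}G")      # 80G

# For Production service the uplinks per spine double.
production = SPINES * 2 * UPLINK_SPEED_GBPS
print(f"Production per-leaf N/S bandwidth: {production}G")    # 160G

# UCS side: 4x10G links to each IOM, two 2208XP IOMs per chassis.
ucs_per_chassis = 2 * 4 * 10
print(f"UCS/FI bandwidth per chassis: {ucs_per_chassis}G")    # 80G
```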
On Friday last week we rolled out our ACI solution into one of our DCs. The setup is simple, comprising:

- 2x Nexus 9336PQ “baby” Spines
- 4x Nexus 9396PX Leaf Switches
- 3x APIC Controllers
- 2x ASA 5585-X Firewalls

The compute behind it is UCS based, and we have F5 LTMs in the ADC role.
Over the weekend I provisioned it. That did not go well. Today I had to go back and revisit the cabling, then the initial Fabric setup, and then redo the entire thing from scratch.
Starting yesterday I began to deploy our Nexus 9000 ACI solution into our Datacentre. Scary yet fun times are ahead.
Over the course of the project I will do my best to chronicle anonymised info about what we did and how we did it. Some of that may be of use to another ACI hopeful, whereas some will be pretty specific to my environment. One thing I won’t be doing is reinventing the blogging wheel; I will choose to refer to others who helped me, rather than rehash the same subjects over and over again.
Oh how the world has changed since I started out in this wonderful trade.
We used to have VLANs and subnets; switches, routers and firewalls. People would moan that things didn’t work, and we did a traceroute to figure out why. We would bash out a fix, and if it broke, we would bash out another. It was the wild west, and that was fun. Cowboy hats were standard issue.
Then along came the bad guys, and with them, the policy doctors.