So I bought my ACI bundles so long ago that they're still running 1.0(3f). Right now mainline is 1.2(1k), so I'm a bit behind.
Following the official Cisco doc, I did the first staged upgrade, from 1.0 to 1.1, through the Web GUI. I wanted to see what happened in a visual sense.
Basically you set up a connection between the APIC and a host that has the firmware files staged, then you set up a policy defining what versions the fabric should be on and when they should be made active. For me that was 1.1(4f), applied right away.
This caused the APIC controller to go and pull the image off the staging server (I used a Windows box with IIS, but you could easily use a Linux box with SSH enabled too). Once it had pulled the image down, it started to apply it to APIC 1, then 3, then 2 (it does say it's random). Each cluster node updated, rebooted, rejoined the cluster, and soaked in before the next controller started the process. It was about 15 minutes per controller.
So, whilst I was waiting for that to happen, I started to look at the Python equivalents for these GUI commands. When I come to the other DC, and indeed to upgrades later down the line, I would like to avoid all this clicking. The spec is simple:
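Roughly: log in to the APIC, register a download source pointing at the staging host, then set the target version on the upgrade policies and trigger them. I haven't written the real script yet, but a minimal sketch of the first couple of steps might look like the below. The hostnames and credentials are placeholders, and the firmware class and DN names are my reading of the APIC object model rather than a tested recipe, so verify them with the GUI's API Inspector first.

```python
import requests

# Placeholders for my lab; swap in your own APIC and staging server.
APIC = "https://apic1.example.local"
USER = "admin"
PASSWORD = "secret"

session = requests.Session()
session.verify = False  # lab fabric with a self-signed certificate

# Log in; the APIC hands back a token which the session keeps as a cookie.
login = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
session.post(APIC + "/api/aaaLogin.json", json=login).raise_for_status()

# Register a firmware download source pointing at the staging web server --
# the equivalent of adding an image under Admin > Firmware in the GUI.
# Class/DN names here are assumptions; check with the API Inspector.
source = {"firmwareOSource": {"attributes": {
    "name": "staging-http",
    "proto": "http",
    "url": "http://staging.example.local/aci-apic-dk9.1.1.4f.iso",
}}}
session.post(APIC + "/api/node/mo/uni/fabric/fwrepop.json",
             json=source).raise_for_status()

# See what the firmware repository now holds.
repo = session.get(APIC + "/api/node/mo/uni/fabric/fwrepop.json"
                   "?query-target=children").json()
print(repo["imdata"])
```

The actual upgrade trigger would then be a similar POST against the controller and switch maintenance policies; I'll dig into that properly when I script the other DC.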
Wednesday, 27 January 2016
Monday, 18 January 2016
ACI: Initial design considerations
ACI brings with it many different constructs for operating networks, some of which have analogues in classical networking, and some of which are literally bat-poop crazy.
As per usual, you can find lots of resources elsewhere on how to structure ACI fabrics, so I'm not going to waste time on what you *can* do; I'll focus on what I am going to do (roughly).
Friday, 15 January 2016
ACI: A mini rant at INSBU
Before I get too wound up I should probably say that all of this was directed to my friends there first, and whilst I won't say much about their thoughts, I don't think this is particularly new to them, or out of place.
I have a fondness for ACI. I think it's innovative, and once you break through the naming conventions and the terminology, it's exactly what I think the Enterprise should be doing in terms of Next Generation Networking. That said, INSBU are not helping themselves penetrate the market, and as such are putting themselves at risk of falling behind OpenStack.
ACI: Rack&Stack - with falling at the first hurdle
Plumbing ACI is something YouTube has you covered on, so I won't reinvent that wheel. For the initial stand-up I am doing the bare minimum of connectivity: each leaf has one 40G uplink to each spine, meaning 80G of North/South bandwidth. This will double up when we prepare for Production service, matching my UCS/FI bandwidth to each chassis (4x10G links to each side of my 2208XPs). My 3 APICs are configured as follows:
APIC1 e2-1 -> Leaf 1 e1/48
APIC1 e2-2 -> Leaf 2 e1/48
APIC2 e2-1 -> Leaf 2 e1/47
APIC2 e2-2 -> Leaf 3 e1/47
APIC3 e2-1 -> Leaf 2 e1/48
APIC3 e2-2 -> Leaf 3 e1/48
You start the ACI journey on the APIC1 CLI (I used the rear console, but you can use the VGA if you like).
Tuesday, 12 January 2016
ACI: The Setup.
On Friday last week we rolled out our ACI solution into one of our DCs. The setup is simple, comprising:
- 2x Nexus 9336PQ "Baby" Spines
- 4x Nexus 9396PX Leaf Switches
- 3x APIC Controllers
- 2x ASA 5585-X Firewalls
Over the weekend I provisioned it. That did not go well. Today I had to go back and revisit the cabling, then the fabric initial setup, and then redo the entire thing from scratch. Oops.
Let's hope the rest of the journey is a bit less fraught, eh?
Saturday, 9 January 2016
The ACI Adventure begins
Yesterday I began deploying our Nexus 9000 ACI solution into our Datacentre. Scary yet fun times are ahead.
Over the course of the project I will do my best to chronicle anonymised info about what we did and how we did it. Some of that may be of use to another ACI hopeful, whereas some will be pretty specific to my environment. One thing I won't be doing is reinventing the blogging wheel, and I will choose to refer to others that helped me, rather than rehash the same subjects over and over again.
Enjoy (or not, wotevs)