These are my notes for installing Cisco DNA Center: Things to look out for, things to check in the release notes.
Accounts to keep track of – in a password safe, please; password recovery is painful.
maglev – this is your Linux root-level account. If you lose this password you can recover it, but it’s painful. Access is via SSH on port 2222 (!) or via KVM on the CIMC interface.
admin – your GUI superuser account
CIMC user – you set this before installing Cisco DNA Center. Don’t skip CIMC; you need to set parameters in there for the 10Gb ports to communicate with your switches, and for time sync.
Painless upgrades
The release notes will link to an upgrade guide. Read it and follow the instructions. For example, downloading applications before the system upgrade can make the system upgrade fail, and the guide will warn you of this.
Before the upgrade, run the Audit & Upgrade Readiness Tool, and resolve any issues it highlights.
If you are skipping releases, wait 5 minutes after each “Switch To” before taking another action. The system enforces this wait starting with 1.3.1.x; on earlier releases, going too fast can get you into a situation that requires TAC intervention.
Cisco DNA Center as of 1.2.10 can stay busy for up to 60 minutes after a System upgrade. If upgrading to Cisco DNA Center 1.2.10 or below, run this command over SSH as maglev until it comes back “clean”, that is, empty.
Note that this command is not needed when upgrading to 1.3 or above, as the updater itself makes sure user applications are back up. All it will show on 1.3 is “Completed” updater applications.
watch -n 30 "magctl appstack status | grep -v -E '([0-9]+)/\1.*Running'"
If you upgraded to Cisco DNA Center 1.2.8, a legacy app named ‘device-onboarding-ui’ stays installed. Uninstall it. If you do not, it will be part of your backup, and if you ever need to restore that backup to a freshly built Cisco DNA Center, the restore will fail, telling you that you are missing the ‘device-onboarding-ui’ app – which you cannot install, as it is no longer part of the catalog from 1.2.8 on.
Installation caveats
The 10Gb physical cluster link is mandatory, even with a standalone unit. You cannot change its IP/subnet after the fact, so choose carefully. Do cable it so the link is up; SWIM and Assurance features may fail if this link is down. 1m to 7m DAC cables are supported; refer to the installation guide. For the subnet, I’d choose a /28, just in case the current “L2 only, 3 devices only, not across DCs” design is opened up in future.
The Enterprise port is mandatory. Do configure it, and cable it to 10G. I expect that in future builds, all device configuration will be through the Enterprise port, and all GUI access will be restricted to the GUI port. For now, the easiest design is to manage the GUI through the Enterprise port.
On an “M4” appliance, DN1-HW-APL, the Cluster and Enterprise ports are configured as Trunk ports by default, but don’t have a VLAN set. Go through the Pre-Flight checklist in the installation guide and set the CIMC parameters, including NTP, the way that is recommended there. Keep an eye out for the recommended 802.1p settings on the connected switch if you are using VLAN 0, the default. Configuring the 10G ports as access ports is not officially supported.
Conversely, on an M5 appliance, DN2-HW-APL(-L/-XL), the only supported way of configuring those ports is as access ports.
The service and cluster service subnets need to be RFC1918 addressing, not used anywhere else in your network. They don’t need to be routable. Their recommended size is /21 each. These also cannot be changed without a complete re-image. Take two /21s from a range you do not use now and don’t ever see yourself using, say from somewhere in 172.16.0.0/12.
As of 1.2.8, it’s possible to enter these subnets with spaces in them, which will leave Cisco DNA Center non-functional and require another rebuild. Make sure there are no spaces in your IP addressing.
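If you want to sanity-check candidate subnets against the ranges already in use before committing them, Python’s ipaddress module does the overlap math for you. A minimal sketch – the candidate /21s and the in-use ranges below are placeholders, substitute your own:

import ipaddress

# Candidate service and cluster service subnets (placeholders).
candidates = [ipaddress.ip_network("172.31.0.0/21"),
              ipaddress.ip_network("172.31.8.0/21")]

# Ranges already in use in your network (placeholders).
in_use = [ipaddress.ip_network("10.0.0.0/8"),
          ipaddress.ip_network("192.168.0.0/16")]

for cand in candidates:
    # Both subnets must be RFC1918 and must not collide with anything in use.
    assert cand.is_private, f"{cand} is not RFC1918"
    clashes = [net for net in in_use if cand.overlaps(net)]
    print(cand, "looks free" if not clashes else f"overlaps {clashes}")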
The CIMC port can have addresses in the same subnet as the Enterprise or GUI port, and still route correctly, as the CIMC port is entirely separate from the OS.
Configuration backups are always scp to a Linux server, and Assurance backups are always NFS to an NFS share, any OS. The only way to back up Assurance is to also back up configuration, which means you will need both: an scp target and an NFS target. Also, backups can be scheduled, but there is no configurable cleanup (“keep only the last N”); for that, you’d need a small Python script that uses the API.
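A rough sketch of such a cleanup script is below. The authentication endpoint is the standard Intent API one; the backup list/delete paths and the field names, however, are assumptions on my part – check the API documentation for your release before using anything like this:

import requests

DNAC = "https://dnac.example.com"   # placeholder
KEEP = 5                            # number of backups to keep

# Standard Intent API authentication; the token comes back in the "Token" field.
token = requests.post(f"{DNAC}/dna/system/api/v1/auth/token",
                      auth=("admin", "password"), verify=False).json()["Token"]
headers = {"X-Auth-Token": token}

# ASSUMPTION: the backup list/delete endpoints and field names below are
# placeholders -- look them up for your release.
backups = requests.get(f"{DNAC}/api/system/v1/maglev/backup",
                       headers=headers, verify=False).json()["response"]

# Delete everything except the newest KEEP backups.
for backup in sorted(backups, key=lambda b: b["start_timestamp"])[:-KEEP]:
    requests.delete(f"{DNAC}/api/system/v1/maglev/backup/{backup['backup_id']}",
                    headers=headers, verify=False)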
The installation documentation is not entirely complete for firewall rules as of 1.2.10. You will also want to allow, from managed devices to Cisco DNA Center, TCP port 22 (ssh) and UDP port 514 (syslog). The ssh port is used for scp during SWIM: the device “pulls” the image from Cisco DNA Center.
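If you want to verify those two paths before rolling out, a quick test from a host in a managed-device subnet can save you a support case. A minimal sketch using only the Python standard library – the IP address is a placeholder:

import logging.handlers
import socket

DNAC_IP = "198.51.100.10"   # placeholder: the Enterprise port IP of Cisco DNA Center

# TCP 22: used for scp when devices pull images during SWIM.
try:
    socket.create_connection((DNAC_IP, 22), timeout=5).close()
    print("TCP 22 reachable")
except OSError as err:
    print(f"TCP 22 blocked: {err}")

# UDP 514: send a test syslog message. UDP is fire-and-forget, so confirm
# arrival on the Cisco DNA Center side.
logger = logging.getLogger("dnac-fw-test")
logger.addHandler(logging.handlers.SysLogHandler(address=(DNAC_IP, 514)))
logger.warning("test syslog message towards Cisco DNA Center")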
Keep in mind that the https proxy, if you have one, can only be accessed via http as of 1.2.10. You’ll configure http between Cisco DNA Center and the proxy; and the proxy will use https towards the cloud servers that are being accessed.
If you will be using PnP, keep in mind you need additional SANs in your server certificate – mainly pnpserver.localdomain, where “localdomain” is the one that DHCP assigns to devices. Similarly, if you are going to use a certificate signed by an Enterprise CA, make sure that your SANs include all IP addresses of your Cisco DNA Center cluster – yes, including cluster port addresses and including VIPs – and that you include any IP addresses and DNS names that devices may use to communicate with Cisco DNA Center, for example if you use NAT towards Cisco DNA Center or you use an FQDN with either DHCP Option 43 or Cloud Connector. Caveat: Public CAs won’t sign certs that contain SANs with RFC1918 IPs. If you are going to use a Public CA, not an Enterprise CA, give all interfaces public IPs. Yes that includes the 10G Cluster interface.
Speaking of DHCP Option 43, that one is for IPv4, and Option 17 is for IPv6. Initial connection to Cisco DNA Center should always be on http port 80; it will push your root certificate to the device, and then establish https port 443 for all further communication.
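To double-check which SANs your certificate actually presents, you can pull it off the wire and list them. A short sketch using Python’s ssl module and the cryptography package; the hostname is a placeholder:

import ssl
from cryptography import x509

# Placeholder: an address your devices will use to reach Cisco DNA Center.
pem = ssl.get_server_certificate(("dnac.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# List all SANs; this should include cluster IPs, VIPs, pnpserver.<domain>, etc.
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
for entry in san.value:
    print(entry.value)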
There is only one routing instance, which means there is only one default route, out the Enterprise port. If you need to be able to reach remote subnets from the GUI port, configure static routes. The current recommendation from Cisco AS is to avoid the Cloud port until a future release. Alternatively, a static route to an https proxy will work.
A few more words on that. If you need the GUI port for internal compliance reasons, use static routes towards your management PCs. You only need to allow https in on that port. Keep in mind the subnets you are routing to will now no longer be able to reach the Enterprise port, and cannot be reached from the Enterprise port. If you have management PCs and managed devices interspersed in the same subnet, don’t use the GUI port.
If you need the Cloud port for compliance reasons, the easiest way to use it is to add a static host route (/32) towards your https proxy. If you are in the rare situation where you need to use the Cloud port but you are not going through a proxy, you could move the default route to the Cloud port, and then add static routes to the Enterprise port for your DNS and NTP servers, as well as all subnets that contain managed devices.
When using multiple ports, the Enterprise port remains the one that handles the bulk of traffic: DNS and NTP, as well as all managed device communication. The GUI port is only used for https incoming for GUI access; and the Cloud port is only used for https outgoing for cloud access – or rather, in most deployment scenarios, http outgoing to the https proxy.
Wireless map import from Prime Infrastructure
This works remarkably well, so long as some gotchas are avoided.
Imports larger than 50MB require Cisco DNA Center 1.2.6 or greater.
Cisco DNA Center uses / as a hierarchy delimiter, so names containing a forward slash will not import. Similarly, exporting maps where locations, buildings or floors use a pipe symbol | will not work. If your PI hierarchy has names with a forward slash or pipe symbol, say something like “Rue De Orange 15/16” or “Rue De Orange 15|16”, change those to a dash, in this example “Rue De Orange 15-16”.
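It’s easy to scan the exported hierarchy for those characters before importing. A small sketch against the export CSV; I’m assuming the column is called GroupName, as in the export discussed below:

import csv

# Flag hierarchy names Cisco DNA Center will reject: "/" and "|".
with open("pi_hierarchy_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        name = row.get("GroupName", "")   # column name assumed, adjust to your export
        if "/" in name or "|" in name:
            print(f"rename before import: {name}")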
As of Cisco DNA Center 1.2.8, hierarchy import will not correctly import international characters. This has been resolved in 1.2.10.
You may want to edit your hierarchy CSV file, say because you never entered latitude/longitude in PI and want to delete that column entirely, or fix it. Do not open that CSV file in Excel with a simple double-click: Excel will take anything that looks like a number and strip its leading 0s. Floor “01” becomes floor “1”, and now your map import for that floor fails because the name doesn’t match.
Instead, use Excel’s “Get Data from Text” feature. Depending on your version of Excel, you may have to change the data type of your GroupName column to “Text”. Save that as an Excel file, make your edits, then save as CSV for import. That way, your “00” and “01” entries will not become “0” and “1”, and your map import will work.
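If you’d rather skip Excel entirely, the same edit can be done in Python, where CSV fields stay strings unless you convert them, so “00” and “01” survive untouched. A minimal sketch that drops two columns; the Latitude/Longitude column names are assumptions, adjust to your export:

import csv

with open("pi_hierarchy_export.csv", newline="") as src, \
     open("pi_hierarchy_edited.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    # Drop the columns you don't want; everything else is copied verbatim as text.
    keep = [c for c in reader.fieldnames if c not in ("Latitude", "Longitude")]
    writer = csv.DictWriter(dst, fieldnames=keep, extrasaction="ignore")
    writer.writeheader()
    for row in reader:
        writer.writerow(row)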
Device Support
There are two lists of supported devices: one specific to SDA (Software-Defined Access), and one for Automation and Assurance features without SDA. Go by the SDA list if you are deploying SDA, and by the broader list otherwise.
Caveat: AireOS 8.8 MR1 (8.8.111) is incompatible with the SFTP server in Cisco DNA Center 1.2.8, which means you cannot use SWIM to move off 8.8.111. I expect this will be fixed by 8.8 MR2.
High Availability
As of Cisco DNA Center 1.2.8, HA is supported for Automation, but not yet for Assurance.
HA requires three appliances. Clustering across data centers is not supported, and there are some technical reasons for that:
– 10Gbit interlink is needed in HA
– There are some latency concerns
– HA uses a consensus model among the three appliances. If Data Center B housing two Cisco DNA Center appliances goes down, Data Center A’s sole appliance won’t get consensus and will also disable itself. That defeats the purpose.
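A toy illustration of that consensus math: quorum for three nodes is two, so a lone surviving appliance can never reach it.

# Toy quorum check for a 3-node cluster.
CLUSTER_SIZE = 3
quorum = CLUSTER_SIZE // 2 + 1   # = 2

for reachable in (3, 2, 1):
    status = "stays up" if reachable >= quorum else "disables itself"
    print(f"{reachable} appliance(s) reachable: {status}")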
HA across data centers in the form of DR – Disaster Recovery – is likely just a matter of time. I expect we will see a design with three DCs for that: Cluster one, cluster two, DR orchestrator.
Virtualization
In a nutshell, that’s not a thing in production environments. DN1/2-HW-APL has 44 cores/88 threads, 256GB of RAM, and 8TB of disk space across 8 SSDs, plus 2 x 10Gb interfaces and a couple more that can be 1Gb or 10Gb depending on whether it is an M4 or M5 appliance. Asking for that kind of compute resources typically gets a chuckle from the virtualization team, which is why Cisco DNA Center is delivered as an appliance.
DN2-HW-APL-L ups that to 56 cores/112 threads, and DN2-HW-APL-XL to 112 cores/224 threads.
If there is concern about the cost of the Cisco DNA Center appliance, look into the SKUs SDA-W-LAB and SDA-WW-LAB.
Scaling
As of Cisco DNA Center 1.3, the -L and -XL appliances allow for greater scale, see release notes. Cisco DNA Center will eventually scale horizontally – but until that feature is implemented, a cluster of three appliances has the same scale as a single appliance.
Visibility across Cisco DNA Center clusters
Also a work in progress. Keep checking release notes – this will be enabled in phases. Consider scaling a single cluster, if that fits your environment.
Idiosyncratic behavior
Cisco DNA Center can behave in designed but non-intuitive ways. Here are three such behaviors that you’ll want to be on the lookout for.
Day N templates gone after power-cycle
Day N templates aren’t followed by a “write memory” on IOS(-XR/XE) devices. That means a power cycle will wipe out the config you just pushed. This will become a checkbox option, likely sometime mid-2020. In the meantime, either do a manual “wr mem”, or add it to your template. The template will need to end at the “root” of config mode; that is, if you’re in a sub-menu, get back out to the root via exit, for example:
interface GigEthernet 1/0/1
 description "this is truly an interface"
exit
And then, at the very very end of the Day N template, add this:
#MODE_ENABLE
wr mem
#MODE_END_ENABLE
You’ll want this in every Day N template. For composite templates, you could place this in its own member template that is placed at the very end of the composite.
Day 0 templates do not require this, they’re pushed differently.
Devices need cloud access to use the license tool
The license tool in CDNAC automates the token generation and registration of devices with the Cisco Smart Software Manager. It does not, however, function as an on-prem CSSM proxy. That means devices need cloud access to CSSM. That cloud access can be through an http proxy, which you’d need to provision via templates.
This will change early 2020, with CDNAC being able to point devices to an on-premise CSSM instance.
Composite templates require all member templates to be restricted (pre 2.1.2)
In CDNAC versions 1.3.3.7 and earlier, composite templates require their member templates to be restricted to the same device types as the composite. For example, if you have a generic SNMP template for “all switches and hubs”, and a composite template for a C9400, the composite template won’t allow you to add the generic SNMP template. Instead, the SNMP template would need to be restricted to just the C9400 as well.