
Tag Archives: Cisco

As part of studying for my CCNA Routing and Switching certification, I set up a home lab: a couple of old Cisco routers and a couple of switches, all second-hand from eBay.  Let me tell you, if you can afford to do that (my lab came in around $150 with cables and a couple of interface cards), or if you can get your job to pay for your lab, there’s nothing quite like actual hands-on time with real equipment (and in some ways it’s less hassle to set up real equipment than it is to configure GNS3 so it doesn’t eat all your memory).

But.

You may encounter an issue with used equipment.  My routers seemed not to hold their configuration between boots, no matter how many times I told them to copy run start.

It would appear that this isn’t that uncommon of a problem, though, and it’s an artifact of the way that eBay sellers wipe equipment before sending it out.

So, if your used router boots to the initial configuration dialog every time, check this out.

On boot, cancel the dialog and enter privileged mode, then run show start. If the startup configuration shown matches the configuration you were running when you last shut down the router, check the Configuration Register by running show version. It will probably read 0x2142, which tells the router to bypass the startup config stored in NVRAM, a setting normally used for password recovery.

Fixing this is easy. Enter global configuration mode (conf t) and type config-register 0x2102, then end (or ^Z if, like me, you’re lazy). Another sh ver should now report

Configuration register is 0x2142 (will be 0x2102 at next reload)

Now just reboot your router (reload) and you’re back in business.
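
For reference, here’s the whole fix as one quick console sketch (the “Router” prompt is just a stand-in for whatever your hostname is):

Router# configure terminal
Router(config)# config-register 0x2102
Router(config)# end
Router# show version
  ...
  Configuration register is 0x2142 (will be 0x2102 at next reload)
Router# reload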

Note: if this solution doesn’t work, well, I’m sorry.  For a lab environment, any lost configs are just another opportunity to practice, but if you’re using this equipment in a production environment, I hope that you’re backing up your configs.  If you’re not backing up your production router and switch configs, check out RANCID, the Really Awesome New Cisco confIg Differ.
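
Even if RANCID is more than you need, the bare-minimum backup is a one-liner from privileged mode; the server address and filename below are just placeholders, so substitute your own:

copy running-config tftp://192.0.2.10/router1-confg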


[Image: Two Cisco 2950 switches mounted above two Cisco 2620 routers on an IKEA LACK table re-purposed as an equipment rack.]

Final note: if you’re looking to build your own CCNA practice lab, the setup I’m using is:

2x Cisco Catalyst 2950 switches

2x Cisco 2620XM routers (if you can get them, the 2621 and 2611 are better than the 2620 and 2610 because they have two built-in FastEthernet ports rather than only one).

Ten Internet Points™ to the first person who can tell me what I’ve done wrong in this picture.


For those of you with rack-mountable hardware (servers, network hardware, and even pro audio gear) who are still looking for a cheap, stylish, modular solution to mount it all, look no further; the Dutch have found the solution, and the solution is Swedish.

It turns out that the LACK side table from IKEA is perfectly sized to hold up to 8U worth of 19-inch rack-mount equipment, which is astounding when you consider that it comes in twelve colors and costs $10 (USD).  If you go on Amazon, Monoprice, or CDW, you’re going to pay a minimum of $50, and the color choices are black, black, or black.  The grey-turquoise LACK even looks like it would go well with the weird blue-green-grey plastic that Cisco uses for their bezels.

While the LACK is the cheapest option, it turns out that someone at IKEA really cares about people who use rack-mount equipment, since there are lots of other options for furniture with an internal spacing of 19 inches.

More information about the LACKRACK (including an IKEA-style manual for assembly) can be found at <http://lackrack.org>.

After the revelation I shared earlier this week, I felt on top of the world, but that illusion quickly shattered when I attempted to upgrade some of our newest (but still autonomous) access points, only to have my tftp requests time out.  A quick ? showed me that I could use scp instead (which has made appearances on this blog before), but the syntax remained a mystery to me.  I have finally found it, though (hint: it’s not quite the same as the normal *nix command), and I’ve had considerable success upgrading our remaining autonomous units with that method.

Whereas with tftp you simply enter the server address followed by the path to the file (relative to the tftp server’s root folder), the Cisco version of scp is a bit more complicated.  My main tripping point was figuring out what the path to the image being downloaded is relative to.  I assumed it would start at the root of the filesystem (/), but instead the path is expressed relative to the home folder of the username you specify.  I don’t know whether ../ will let you back out of your home folder, but it’s simple enough to copy the image into your home folder.  So, to use scp to download an image from your machine to a Cisco access point, you would use

archive download-sw /reload /overwrite scp://username@server/path/to/image.tar

where the image path is relative to the home folder of username.
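
For a concrete example (the username, server address, and image filename here are all made up, so substitute your own), if the tarball lives in /home/jdoe/images on the scp server, the command looks like this:

archive download-sw /reload /overwrite scp://jdoe@192.0.2.25/images/ap3g1-k9w7-tar.152-4.JB6.tar

The access point should prompt you for jdoe’s password before the transfer starts.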

That’s all.  Happy scp-ing!

A quick post today, but no less informative, I hope.

We just installed a brand new Cisco wireless controller, and that means converting our older, autonomous access points to lightweight mode so they can interface with the controller.  Cisco would like you to use their (Windows-based) tool, which I tried initially.  While it may be easier and faster in an ideal situation, ideal situations are rare.  I looked around and couldn’t find a good text-based tutorial for doing the upgrade, but I did find some YouTube videos, one of which brought me my solution.

Before you get started, you’ll want to collect a few things.  First, you’ll need a recovery image specific to your access point model, which you can download from Cisco–crucially, the recovery image’s file name will contain the string “rcv”.  Download the image and move it to your tftp server root (if you’re using Ubuntu, there’s a good guide for setting up a tftp server here if you don’t already have one).  Don’t worry about extracting the tarball–the access point will handle that for you.

If you’re not upgrading your access points in place, you may also want a serial connection to the AP so you can watch its progress the whole time, but this is optional.  I use minicom for my serial terminal on Ubuntu, though you may already have a package you prefer.

Now that you’ve got everything in place, telnet (or ssh) into your access point (and enter enable mode, but not configure mode) and run the following:

archive download-sw /reload /overwrite tftp://(ip address of your tftp server)/(name of recovery image tarball)
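
As a concrete (made-up) example, with a tftp server at 192.0.2.10 and a recovery image for an 1140-series AP, that would look something like:

archive download-sw /reload /overwrite tftp://192.0.2.10/c1140-rcvk9w8-tar.124-21a.JA.tar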

After the access point finishes downloading the image, it should restart automatically, but if there are any unsaved changes lingering on the system, use reload to restart it manually.  The access point will reboot, and if you’re watching on your serial console, you should see it go through the process of loading the recovery image, contacting the controller, and then downloading a full image before finally restarting again and coming fully under the controller’s control.

Yesterday, I got a first-hand demonstration of how a simple, well-meaning act of tidying up can have far-reaching consequences for a network.

Our campus uses Cisco IP phones both for regular communication and for emergency paging.  As such, every classroom is equipped with an IP phone, and each of these phones has a built-in switch port, so that rooms with only one active network drop can still have a computer (or, more often, a networked printer) wired in.  If you work in such an environment, I hope this short story serves as a cautionary tale about what happens when you don’t clean up.

I was working at my desk yesterday afternoon, already having more than enough to do, since the start of school is only a few days away, and everybody wants a piece of me all at once.  While reading through some log files, a bit of motion at the bottom of my vision caught my attention: the screen on my phone had gone from its normal display to a screen that just said “Registering” at the bottom left with a little spinning wheel.  Well, thought I, it’s just a blip in the system–not the first time my phone’s just cut out for a second.  So I reset my phone.  Then I looked and saw that my co-workers’ phones were doing the same thing.  Must just be something with our switch, I thought.  So I connected to the switch over a terminal session and checked the status of the VLANs.  Finding them to be all present and accounted for, I took the next logical step and reset the switch.  A couple minutes later, the switch was back up and running, but our phones were still out.

Logging in to the Voice box, I couldn’t see anything out of the ordinary, and the closest phone I could find outside of my office was fully operational.  Soon, I began getting reports that the phones, the wi-fi, and even the wired internet were down or at least very slow elsewhere on campus, though from my desk, I was still able to get out to the internet with every device available to me.  The reports, though, weren’t all-encompassing.  The middle school, right across a courtyard from my office, still had phones, as did the art studios next door, but the upper school was down, and the foreign language building was almost completely disconnected from the rest of the network–the few times I could get a ping through, the latency ranged from 666 (seriously) to 1200-ish milliseconds.

I reset the switches I could reach in the most badly affected areas.  I reset the core switch.  I reset the voice box.  Nothing changed.  I checked the IP routes on the firewall: nothing out of the ordinary.  Finally, in desperation, my boss and I started unplugging buildings, pulling fiber out of the uplink ports on their switches, then waiting to see if anything changed.  Taking out the foreign language building, the most crippled building, seemed like the best starting point, but was fruitless.  Then we unplugged the main upper school building, and everything went back to normal elsewhere on campus.  Plug the US in, boom–the phones died again–unplug it, and a minute later, everything was all happy internet and telephony.

We walked through the building, looking for anything out of the ordinary, but our initial inspection turned up nothing, so, with tape and a marker in hand, I started unplugging cables from the switch, one by one, labeling them as I went.  After disconnecting everything on the first module of the main switch, along with the secondary PoE switch that served most of the classroom phones, I plugged in the uplink cable.  The network stayed up.  One by one, I plugged cables back into the first module, but everything stayed up.  Then I plugged the phone switch back in, and down the network went again.

After another session of unplugging and labeling cables, I plugged the now-empty voice switch back in, hoping for the best.  The network stayed up.  Then I plugged in the first of the cables back into the switch.  Down the network went.  Unplug.  Back up.  Following the cable back to the patch panel, we eventually found the problem, missed on my initial sweep of the rooms: two cables hanging out of a phone, both plugged into ports in the wall.  For whatever reason, both ports on that wall plate had been live, and that second cable, plugged in out of some sense of orderliness, had created the loop that flooded the network with broadcast packets and brought down more than half of campus.

Take away whatever lesson you want from this story, but after working for almost four hours to find one little loop, I will think twice about hotting up two adjacent ports if they aren’t both going to be connected immediately and (semi)permanently to some device, especially if one of them is going to a phone.
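
If a second port does need to stay patched in for some future device, one way to keep it from biting you in the meantime is to leave the switchport administratively shut down until it’s actually needed; the interface name below is just an example:

Switch(config)# interface FastEthernet0/12
Switch(config-if)# description spare drop - not in use
Switch(config-if)# shutdown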

As part of my efforts to expand and strengthen our wireless infrastructure on campus, I swapped out one of our aging Cisco Aironet 1200s for an Aironet 1250 last week.  In theory, this was fine and a great thing to do.  In practice, I got a call just as I was walking to work saying that classrooms in that area were reporting problems with their WiFi.

I checked the usual suspects: made sure that DHCP was running fine, made sure that the RADIUS server was up and running, and tried several times, in vain, to correct the presenting problem from the AP’s web interface.  At a loss, I went back and swapped the new AP out for the old one as a hold-over until I could figure out what was up.

Then, while doing the initial configuration for a new Meraki access point that I’m going to be testing out as soon as it shows up in the office, I realized what the missing piece of the puzzle was: the shared secret.

If you’re using a RADIUS server for wireless authentication, each client (access point) needs a shared secret that both it and the server know in order for any authentication to happen.  If the Aironet 1250 I had put in had been totally new to us, I wouldn’t have run into this problem, because I would have entered the shared secret during the initial configuration; but this AP had formerly been located elsewhere on campus, so all I had done was change the IP address on its Ethernet interface to prevent conflicts.
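
For anyone wondering where that secret actually lives on an autonomous AP, it’s part of the RADIUS server definition in the AP’s configuration; on the IOS releases these units run, it looks something like the line below (the server address, ports, and key are placeholders, and your values will differ). The same key also has to be configured for that client on the RADIUS server side.

ap(config)# radius-server host 192.0.2.5 auth-port 1812 acct-port 1813 key MySharedSecret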

Now the AP has the correct shared secret, and everything is as it should be again, but let this be a lesson to all of you: share your secrets.