
Category Archives: Linux

Today a coworker asked me to take a look at our Munki repository because it was exhibiting some strange behavior with regard to Firefox. As of this writing, Firefox is at version 46.0.1, but Munki was insisting that version 42.0 was the most recent version.

Our Munki repo updates packages automatically as much as possible through AutoPkgr, so I started by running through all the configuration options there, but nothing seemed amiss. Further, the oldest version of Firefox anywhere in the directory tree used by AutoPkgr was 43.0.

Finally, I deleted Firefox 42.0 from the machine that had precipitated this investigation and then checked the access logs for Apache on the repo server. It turns out that at some point in the past, Firefox had been imported into the repo at the base of the directory tree, repo/pkgs/, and that instance of Firefox was taking precedence over the AutoPkgr-updated version residing at repo/pkgs/apps/firefox/.

So there you have it. If a package on your Munki repo seems stuck at an older version no matter how many times you download a newer version, check to see if there’s another version of that package higher up the directory tree.
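
A quick way to check, assuming your repo lives at /path/to/repo (the path here is just a stand-in), is to sweep the whole pkgs tree for strays:

find /path/to/repo/pkgs -iname '*firefox*'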


If you administer a Google Apps domain, for education or otherwise, you really should be using GAM, the Google Apps Manager (and you should really be using it from a *nix or *nix-like environment). GAM is a command-line tool that lets you administer virtually any aspect of a Google Apps domain.

So why should you run it from *nix? Because awk. If you’ve ever had a big csv file that you needed to work with from the command-line and ended up writing a big old bash script that you probably weren’t going to use ever again, you’re in the target audience for awk. With awk, you can fire off beautiful *nix style one-liners like it ain’t no thing. You can pipe the output of other utilities through awk, or you can write awk scripts just like you’d write bash scripts (except the hash-bang at the start would read #!/usr/bin/awk -f rather than #!/bin/bash).
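
If you’ve never touched awk, here’s the flavor of it. This hypothetical one-liner prints the third column of a csv and totals it as it goes (somefile.csv is a stand-in):

awk -F, '{ sum += $3; print $3 } END { print "total:", sum }' somefile.csv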

So, an example of the power of the two together:

I need to suspend a large group of users and move them to a different OU within my domain. All these users are currently in the same OU, so I can just dump all the info for the OU, grep for email addresses, and use awk to fire gam for each user found. In bash-land, this would probably mean dumping the users into a csv, then writing a script and passing in that csv. That’s a lot of work. In awk-land, though, it’s just one line:

gam info org /name/of/ou/containing/users/to/modify | grep @domain.tld | awk '{system("python /path/to/gam.py update user " $1 " org /name/of/new/ou suspended on")}'

That’s it.

You can learn about awk here.

So our initial notifier messaged us when players were on our Minecraft server, but it always messaged us, even if we already knew there were players online.  That’s not brilliant.  Instead, let’s have the script create a little file that says there are players online.  The pseudo-code then would be:

if the .players file exists and isn’t empty
    check if players are still online
    if they aren’t, clear the .players file contents, otherwise do nothing
if the .players file doesn’t exist or is empty
    check if players are online
    if they are, write status to the .players file and message

In this case, the script ends up being a lot simpler-looking than the pseudo-code.

#!/bin/bash

# If .players exists and is non-empty, players were online at last check.
if [ -s .players ]; then
    # Refresh the file; if everyone has logged off, lsof prints nothing
    # and the file ends up empty again.
    lsof -iTCP:25565 -sTCP:ESTABLISHED > .players
else
    # Nobody was online at last check; if lsof now finds established
    # connections, record them and send a notification.
    lsof -iTCP:25565 -sTCP:ESTABLISHED > .players && echo "Players online" | /usr/bin/ssmtp email@domain.com
fi

Of course, this script will tell you when you log on to the server yourself, which you probably don’t need to know and might be annoying, but it’s getting there.
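
One easy refinement, assuming your own machine sits at a static address (192.168.1.10 below is just a stand-in): filter your own connection out before writing the file. Note that lsof prints a header line whenever it finds matches, so grep for the ESTABLISHED lines first or the header alone will look like a player:

lsof -iTCP:25565 -sTCP:ESTABLISHED | grep ESTABLISHED | grep -v 192.168.1.10 > .players && echo "Players online" | /usr/bin/ssmtp email@domain.com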

After the revelatory nature of the information I shared earlier this week, I felt on top of the world, but that illusion quickly shattered when I attempted to upgrade some of our newest (but still autonomous) access points, only to have my tftp requests time out.  A quick ? showed me that I could instead use scp (which has made appearances on this blog before), but the syntax was left as a mystery to me.  I have finally found the syntax, though (hint: it’s not quite the same as the normal *nix command) and have had considerable success upgrading our remaining autonomous units with that method.

Whereas with tftp, you simply entered the server address followed by the path to the file (relative to the tftp server folder), the Cisco version of scp is a bit more complicated.  My main tripping point was discovering what the file path for the image being downloaded was relative to.  I assumed it would start at the root of the filesystem, /, but instead the path is expressed relative to the home folder of the username specified.  I don’t know if using ../ will let you back out of your home folder, but it’s simple enough to copy the image to your home folder.  So, to use scp to download an image from your machine to a Cisco access point, you would use

archive download-sw /reload /overwrite scp://username@server/path/to/image.tar

where the image path is relative to the home folder of username.
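
For instance, with an image sitting in the home folder of user jdoe on a server at 192.168.1.50 (the username, address, and image name here are all hypothetical stand-ins), the command would look like:

archive download-sw /reload /overwrite scp://jdoe@192.168.1.50/c1140-k9w7-tar.124-25e.JAP.tar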

That’s all.  Happy scp-ing!

When I took on the role of Systems and Network Administrator at my work, we had been using a Linux-based software firewall as the backbone of our network.  In fact, up until about seven weeks ago, we’d been running the same hardware box for around seven years.

Then it crashed.

Luckily, the crash was caused by a problem with the motherboard, and we had a recently decommissioned server running on the same hardware that we could just swap in without too many problems (though I did spend most of a weekend at work getting everything back up and running).  After we got the firewall back up and running, though, more and more problems started coming out of the woodwork.  Our RADIUS-based MAC authentication was sometimes spotty, and whole classes were unable to access the network.  It was clear that something had to change, but until we could isolate the problems, we couldn’t even start.

Consultants were consulted, outside eyes looked over our infrastructure, and there was an “aha!” moment.  The Linux firewall, the heart of everything, had become insufficient.  Every year we had added more devices, and with the pilot of a one-to-one iPad program in our middle school, we had hit the breaking point.  If our firewall had only been doing firewalling and routing, we might have been able to go on for another year, but with iptables, RADIUS, Squid caching, routing, and DHCP all running on the same box, and with pretty much all of our traffic making several trips through its one internal interface and system bus while competing for CPU cycles, there was no way we could sustain the model indefinitely.

So what did we do?  We made a major overhaul of our core infrastructure, moving different services to different hardware.  You can (for a pretty penny) get switches that do both layer-2 switching and layer-3 routing at line speed.  We had a firewall appliance that had never been fully deployed before precisely because it takes a lot of work to break out all the services we had running on our firewall and keep everything running smoothly without the end-user noticing a change.  Of course, with such a big change on an inherited network, there are things that didn’t get caught right away, but that always happens.  After some late nights, our network has smoothed out to the point that I’m not just putting out fires constantly.

But where does this leave the Linux firewall?  While I have a working, if somewhat limited, knowledge of Cisco switching, wireless, and internet-telephony solutions, their security appliances and layer-3 switches are mostly foreign to me.  I won’t claim that I’m an expert with iptables, but I knew my way around the command well enough to maintain things.  But the question is larger than this one case.

For small and even medium businesses, a Linux firewall is probably still the best, most economical choice for a serious network, as long as you have (or are willing to gain) the appropriate Linux wizardry, with the caveat that the box should only be doing firewalling and routing.  If you have other services that you need to run on your network, put them somewhere else, especially something like RADIUS, where timely response packets are required for authentication.  However, if you’re supporting many hundreds of devices across multiple VLANs and expect to expand even further, a hardware-based solution will be a better investment in the long run, even if it’s a greater initial expense.

In the summer months this year (and hopefully more summers in the future), my office will be getting some student interns who will work for us for half of the day and then learn things from us for the second half.  One of the first lessons I’m planning is a crash course in Linux.  There are, of course, about a million different distros available, from mainstream releases like Ubuntu/Debian and Fedora to more specialized releases, such as RebeccaBlackOS.  For my purposes, though, I’m just going to focus on Ubuntu and two variants (the MATE and Cinnamon versions of Linux Mint) because, well, I’m most familiar with Ubuntu and some of its quirks, and while installing a new OS will be part of the first project, I don’t want to spend all of my first class working through stupid install issues that I can’t help solve quickly.

But I’m offering several different variants because most of what I hope to teach will be happening on the command-line, and it won’t hurt these kids to get to make a few choices about their desktop environment.  I like Unity quite a bit at this point (though I did initially downgrade from 10.10 netbook to 10.04 because GNOME 2 was a lot more stable back then), but I understand that there is a learning curve, which is why I’m offering the more Windows-like Cinnamon and MATE, a fork of GNOME 2, for those who might like a more classic Linux feel (not that I expect any of them to have any working knowledge of Linux coming in to the project).

For those of you who might like to play along at home (I plan to share some of my lessons here if I think they’re any good as a learning tool), I’m starting everyone out with a pretty basic load-out beyond the basic install.  I’m asking everyone to install Guake, my favorite Quake-like drop-down terminal emulator, and Vim, because emacs is for losers and Nano, the default text editor, is no better than just using Notepad.  If you’ve never used Vim before, you should probably go download it and run vimtutor from the command line so that you can get the basics.
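
If you want to play along on a stock Ubuntu install, both tools are one command away:

sudo apt-get install guake vim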

So if you’re an Ubuntu user and you use Unity for your desktop and Chromium (or Chrome) as (one of) your browser(s), maybe you’ve been bugged by the placement of the window buttons when you don’t have the window maximized. If you want some consistency, there’s an easy fix that you can run from the command line. If you want to change the placement or order of your window buttons in Chromium, just open up a terminal window and enter

gconftool-2 --set /apps/metacity/general/button_layout --type string "minimize,maximize,close:"

(Screenshots: the window buttons before and after running the command.)

Note that the code above will move the buttons to the left but won’t put them in the right order. If you want to put them in the same order as the window buttons on all your other programs, change the order so “close” is the first item in the above command. The colon in that string indicates the placement of the buttons, so if you want to move the buttons back to the right, move the colon to the beginning of the string (and of course reorder the buttons, unless you like a little bit of inconsistency just to screw with other users on a particular machine).
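
To spell that out, this matches the standard Ubuntu left-side layout:

gconftool-2 --set /apps/metacity/general/button_layout --type string "close,minimize,maximize:"

And this puts everything back on the right, in the usual right-side order:

gconftool-2 --set /apps/metacity/general/button_layout --type string ":minimize,maximize,close"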

Credit goes to Leet Tips for the command, though not the explanation (that was me).

So I may be late to the party for Ubuntu 12.10 Quantal Quetzal, but as someone who just upgraded from 12.04.2 Precise Pangolin, I found an important issue affecting my crontab: the path to external media devices, such as the backup hard drive I have plugged into my machine, has changed.

Previously, if I’d wanted to access the contents of the external disk SCALZI from the command-line, I would have simply gone to /media/SCALZI/; this, however, is no longer the case.  In the new scheme, this media mounts at /media/[username]/[device]/, so in the case above, I would instead go to /media/hilary/SCALZI/.
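
In crontab terms, a hypothetical nightly backup entry (the rsync flags and source path are stand-ins) changes like so:

# 12.04 and earlier:
0 2 * * * rsync -a /home/hilary/Documents/ /media/SCALZI/backup/
# 12.10:
0 2 * * * rsync -a /home/hilary/Documents/ /media/hilary/SCALZI/backup/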

The more you know.

(Yes, I name my external media after authors.  I have, among my collection of flash drives, Scalzi, Asimov, Gaiman, Pratchett, and Pierce.)

Oh, yeah! Pie charts, baby!

While I still haven’t gotten many chances to really put it through its paces, I really love a lot of aspects of the MR12 Meraki sent me.  One of my favorite features is the ability to get a lot of granular detail on the network traffic clients are getting through the AP.  There are places where raw data is fine, but a lot of the time, I want a nice visual representation just so I can get a quick idea of what I’m working with.  This is especially true when we have weird hiccups or slowdowns on our network in certain areas.  Unfortunately, I don’t have Meraki APs everywhere, so I can’t just pull up a lot of sexy data and quickly figure out what’s up.  I’d really love to be able to do that, though.

So what does a sysadmin do when there’s a need but not a solution he knows?  Google, duh.

And what does Google give me?  It gives me ntop.  If you are a knowledgeable user and not just a luser, you have probably used top before to find out which processes are using the most memory and processor time at any given moment.  Well, ntop is something like that, only for networks.  But, more than that, it can give you nice graphical representations of your data through a web interface.
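
From a quick skim, getting started looks about as simple as it gets; consider this a sketch until I’ve actually read the docs (3000 is ntop’s default web port, and -w overrides it):

sudo ntop -w 3000

Then point a browser at http://your-host:3000.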

Having just run across ntop this very hour, I haven’t dived into the man pages for it yet, but I have written a long command chain so that I can read them without standing in front of my Linux terminal, and, because sometimes I just want to write a long command, here’s what I did:

man -t ntop > ntop_man.ps && ps2pdf ntop_man.ps && rm ntop_man.ps && scp -P 22 ~/ntop_man.pdf [user]@[lappy]:/Users/[me]/Documents
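
Piece by piece (note that ps2pdf writes its output to the working directory, so this assumes I’m running from my home folder):

man -t ntop > ntop_man.ps    # typeset the man page as PostScript
ps2pdf ntop_man.ps           # convert it to ntop_man.pdf
rm ntop_man.ps               # drop the intermediate PostScript file
scp -P 22 ~/ntop_man.pdf [user]@[lappy]:/Users/[me]/Documents    # ship the PDF to my laptop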

So there.

(Of course you can expect me to post more about ntop as I dive in and find out what it can do for me and, by extension, what it might be able to do for you.)

Well, I said that I was going to build a server to do a larger-scale test for AirPrinting from iOS devices here on campus, and by gum I did.  Right now, under my desk, there’s an Ubuntu 12.04.1 Precise server named Goodmountain (see what I did there?) whose only job is to serve up AirPrint printers for campus iDevices.

Here it is (center).  Nothing more glamorous than an old ThinkCentre shoved under a desk, amiright?

So how’s it working so far?  Well, I’ve printed a couple pieces of short fiction from my iPad to two of the printers that I’ve made available for this pilot program, and everything’s gone just fine.  There aren’t any students around this week, and I don’t know how much demand there’s been for iPad printing.  For this pilot, I’ve only made four printers available in the locations where the iPads get used most often, and I’ll be waiting to see if there’s more demand and/or how much the service gets used before I do anything else.  Top on my list of priorities is moving this to an actual server that isn’t hanging out under my desk, but that only happens if this is something that there’s heavy demand for.

Now, what have I learned?  Well, top on my list is that the Ubuntu server installer doesn’t recognize full-sized Apple SATA drives (or at least drives that have been pulled out of the bin and have an Apple logo on them; I don’t actually know if it’s the drives or something about the partitions, and I don’t care to test that right now).  More important than that, though, is that if you’re going to be serving multiple printers, you need a separate .service XML file under /etc/avahi/services/ for each printer, or none of them will show up.  If you’ve already built a nice big file for all your printers and you need to cut it up into a bunch of individual files, just consider it more Vim practice.  Yank is your friend.
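
For reference, here’s a minimal sketch of what one of those per-printer .service files can look like. Everything printer-specific below (the LibraryPrinter name, the queue path, the TXT records) is a hypothetical stand-in, and a real file will want TXT records that match your printer’s actual capabilities:

<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">LibraryPrinter @ %h</name>
  <service>
    <type>_ipp._tcp</type>
    <subtype>_universal._sub._ipp._tcp</subtype>
    <port>631</port>
    <txt-record>txtvers=1</txt-record>
    <txt-record>qtotal=1</txt-record>
    <txt-record>rp=printers/LibraryPrinter</txt-record>
    <txt-record>ty=LibraryPrinter</txt-record>
    <txt-record>pdl=application/pdf,image/urf</txt-record>
    <txt-record>URF=W8,SRGB24,CP1,RS600</txt-record>
  </service>
</service-group>

Remember to restart Avahi (sudo service avahi-daemon restart) after adding or changing these files.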

Expect to see another report here once I have some usage statistics.