Sunday, November 30, 2008

Gift ideas for you and the other sysadmins in your life

I meant to post this on Friday, but I was busy recovering from standing in line for a couple of hours to buy a new TV for $400.

Every Christmas people ask me what I want, and I always give the kneejerk response, "I don't know, nothing?". I usually can't come up with something that I genuinely need or want, though there are lots of things I'd be pleasantly surprised by. I'm really not hard to shop for, but I think that people think I am. I'd be happy getting nothing, or even just a card.

Anyway, I set out this year to try to compile a list of things that they can get without worrying whether I will like it. I found some neat stuff online, and thought that you might be interested in the same things I am, so I've compiled a list of stuff, or more like a list of lists. If someone wants to know what to get a geek, just hand them this page.

First, since I just got my brand new HDTV:

I'm a Browncoat and not ashamed of it. I've got the series on DVD and I watch it pretty often. Watching it on Blu-ray will be pretty sweet, and if that special geek in your life digs SciFi, you can't go wrong with Firefly. (click the box to go to the Fox store)

If there's anyone reading this who isn't familiar with ThinkGeek, you should click the logo and check it out. It's Geek nirvana. Everything is great there, and at one point most of my wardrobe was from their T-shirt section. I was going to give a couple of categories, but really, if it's on that site, there's a good chance your geek will like it.

Sysadmins as a general rule really like to learn. A lot. To that end, here's a link to every product on Amazon with the tag of sysadmin. Lots of great books. If you find yourself getting lots of stuff from Amazon, it's probably cost effective to subscribe to Amazon Prime, so your shipping is free or much cheaper.

Even Wired is getting into the season with this list of geeky toys that will make you a Christmas hero. Some of these are on the expensive side, but anyone who does manage to hack the triceratops gets kudos from me.

I'll end with one of the coolest lists I found: Make's Open Source Hardware 2008. If you're a hardware tinkerer, then this is your list. I'm *not* a hardware hacker, and I want to get some of these. Excellent stuff.

Anyway, just some Sunday fluff to fill space. Hope you're having a good weekend.

Now with smaller inline image. Sorry!

Saturday, November 29, 2008

More LVM information

I talked about Logical Volume Manager in my Intro to LVM in Linux.

Tonight I came across an article on backing up LAMP stacks with LVM snapshots.

I knew LVM could do it, but I wasn't aware of the particulars. Justin Ellison's article clarifies many of the difficulties with the process. His particular howto is geared toward LAMP (Linux, Apache, MySQL, PHP) setups, but it is by no means limited to them.

Read through his write up and let me know what you think.

One thing I am interested in seeing is how well it scales. He mentions 500MB of data, which is around 1/600th of the size I'm dealing with. I do have to wonder how quickly I could create a snapshot of that amount of data.
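For anyone curious, the snapshot approach boils down to a sequence like the sketch below. The volume group, LV names, and sizes here are all hypothetical, the commands need root, and this is my paraphrase rather than his exact procedure. One encouraging note on the scaling question: creating a snapshot is copy-on-write, so it should be nearly instant regardless of volume size; it's the writes that happen while the snapshot exists that pay the penalty.

```shell
# Hypothetical VG "vg0" holding LV "data"; names and sizes invented.
# Requires root and free extents in the volume group.

# 1. Quiesce the application so the snapshot is consistent
#    (for MySQL: FLUSH TABLES WITH READ LOCK)

# 2. Take a copy-on-write snapshot. The size is how much change the
#    snapshot can absorb before it fills up and is invalidated.
lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data

# 3. Release the application lock, then back up from the frozen view
mount -o ro /dev/vg0/data-snap /mnt/snap
rsync -a /mnt/snap/ /backup/data/

# 4. Snapshots cost write performance while they exist, so tear down
umount /mnt/snap
lvremove -f /dev/vg0/data-snap
```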

Anyone have more experience with this?

Friday, November 28, 2008

Quality Assurance vs Quality Control

Are you good at finding faults in your infrastructure, or are you good at making sure there are no faults? As Jason Cohen relates, Quality Assurance is not Quality Control.

Like many other topics, this is written to programmers, but is a good lesson for sysadmins as well.

Doing sysadminy things with Windows PreInstalled Environments

I was, until recently, unfamiliar with the concept of a Windows PreInstalled Environment. For those of us who are primarily Unix based, this is basically like a live CD that boots straight into Windows.

There are a few of these PE CDs available. You can use Microsoft's Preinstallation Environment, or maybe UBCD4WIN (Ultimate Boot CD for Windows), but the one that seems to get the lion's share of attention is BartPE. There's also REAtoGo, which seems to be a customized BartPE disc. To be completely honest, I haven't used any of these yet, but I'm looking forward to trying them.

Whichever you go with, building the CD seems to be a similar process. You use your own Windows install disc and customize the software through slipstreaming.

Once you've got the disc setup the way you want, it becomes easy to administer your Windows server using it as a known-clean boot. Virus cleansing is risk free, you've got the full gamut of useful Windows recovery tools at your service, and Earnest Oporto used it to update his firmware. What a great idea. How often do you see stuff like that which requires Windows? Sure, there are ways to update that particular firmware without Windows, but for lots of hardware, there isn't. This is a viable solution in that case.

Since I'm woefully inexperienced in this department, I'll appeal to you. Have you ever used a Windows PreInstalled Environment? What types of things do you do with it? Any tips or tricks?

Thanks for sharing!

Thursday, November 27, 2008

Happy Thanksgiving!

Here in the United States, it's Thanksgiving today, and I'm off on holiday to visit my wife's family in Cincinnati.

It's customary to reflect on the things that we're thankful for, so I thought that I'd share some here. Hopefully you've got some things that you're thankful for as well.

I'm thankful for:

My family and friends, even if I don't get to see them all as often as I'd like
My health, while not being the best, is better than a lot of people's
My profession, because I get to learn and grow in it

There are lots of other things that are small in comparison with those, but I really do appreciate the blessings that have allowed me to become who I am and do what I do.

Today, whether you get to spend time with the people important to you or not, reflect on what you're thankful for, and consider those who are less fortunate.

Happy Thanksgiving!

Tuesday, November 25, 2008

The case of the 500-mile email

This is a great story about someone who had a user who couldn't email anyone over 500 miles away.

This is why being a sysadmin can drive people crazy.

Monday, November 24, 2008

Infrastructure Switchover

Well, the big event that I've been building to for the past few months is done. All that's left is sweeping up the dust.

Our previous primary/production/main site was in a colocation in central Ohio. It's not a bad facility, but it's geographically non-ideal (the company is recentering in north-central NJ), and the infrastructure there isn't the best. It's far better than we could provide ourselves, but it can't touch what we've got now.

We relocated to a datacenter that falls somewhere between Tier 3 and Tier 4 [pdf link]. The new colocation features world-class infrastructure, from multifactor security to N+2 generators. It's hot.

I am now able to say that I'm not at all worried about the physical plant of the primary site. If there's a problem, we're probably going to cause it ourselves. This is both a relief and a curse ;-)

All I'm doing right now is going around the network making sure that things are running alright after the change. Host keys and things of that nature are all fine, since those were extensively tested prior to the switchover. The things I'm concerned about now are processes which weren't fully tested because they couldn't be, due to the architecture change.

This week is pretty much over Wednesday, thankfully. After that, I'm looking forward to a nice relaxing break where maybe I'll finally get to finish polishing my Simple Talk: Exchange

Interesting bug in fresh CentOS install (or why I'm glad I didn't pay for RHEL support on all my servers) from The Life of a Sysadmin

StAardvarktheCarpeted ran into a really interesting bug the other day, and wrote about it. Apparently on his CentOS 5.2 machines, users who were authenticated against an LDAP server couldn't pipe commands.

Right. 'ls' would work, 'grep' would work, but 'ls | grep' wouldn't work. The problem came down to a bug in the distributed nss_ldap software, and as StAardvark alludes to, the bugzilla discussion is well worth reading.

It's sort of interesting to note that the original bug was filed in May of this year, but a fixed package wasn't available until the end of July, even though the upstream software was repaired 5 days after the bug was submitted.

Even CentOS (the free version of RHEL) fixed the bug in June, while RedHat's support-paying customers didn't get a fix unless they called support for help. The instructions that support was giving out weren't published until a couple of days before the updated package was released.

I've heard that RedHat support wasn't worth buying, but jeez. To actually punish users by making them wait longer for a fix than the free version is pretty bad. I'll stick with CentOS at this point.

Saturday, November 22, 2008

Imagine a beowu...wait, I mean RAID array of these...

So I'm sitting here browsing through Reddit and I find a list of interesting USB devices.

And on that list is this:

60 USB ports with flash drives in them.

My eyes light up! My God! 60 drives....I know 32GB flash drives are getting cheaper...32x60...almost 2TB of raw fast storage...

Of course, bringing me down somewhat was the fact that it costs $8,000...not to mention that it's a duplicator, not a hub. Alas, these dreams...

I still think that if you had critical information that only three or four people were allowed to have access to, it would be neat to setup software raid and encrypt the partition across one device for each person. When you're done accessing the information, everyone gets their drive back, and no one person can look at the data.
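Just for fun, that idea could in principle be built from stock Linux tools. Here's a rough sketch, assuming three participants and invented device names; this needs root, and I haven't actually tried it:

```shell
# Stripe three USB sticks into one block device (device names invented).
# With RAID0 striping, any one missing stick makes the set unreadable.
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Encrypt the striped set so partial stripes on a lone stick are useless
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 secretvol
mkfs.ext3 /dev/mapper/secretvol
mount /dev/mapper/secretvol /mnt/secret

# When everyone is done: unmount, close, stop, and hand the sticks back
umount /mnt/secret
cryptsetup luksClose secretvol
mdadm --stop /dev/md0
```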

Friday, November 21, 2008

Configuring the Netgear SSL VPN Concentrators (SSL312/FVS336G) with Active Directory / LDAP

If you're a chronic Standalone Sysadmin reader, you might remember that a while back, I started implementing an Active Directory infrastructure with which to centralize authentication for my Linux hosts. Well, getting the Linux machines authenticated has been completed, and now I'm on to further integration.

I've talked about my VPN issues before, and that I picked up some SSL VPN concentrators. An added feature of the ones I got is the ability to authenticate against Active Directory and/or LDAP. I figured it was time for me to put it to use.

Now, I'm no old hand at Active Directory. I've got the kind of knowledge that comes from reading several books but never touching it; in other words, the kind that leads to pain and gnashing of teeth. When I started creating my AD users, I just had employees, so I created users in the "Users" folder. Straightforward enough. We've got several classifications of users, and many users are in multiple groups, which in Windows is easy enough. I used a pretty much 1-to-1 mapping from Unix groups to Windows security groups (not Windows distribution groups, which are only for email; thank you, Sams' Active Directory Unleashed). The group assignment was simple.

Then I thought about it, and remembered that we really have a couple of different bodies of users. For example, some of our clients have FTP accounts that they connect to in order to drop off or pick up files. There wasn't any sort of hierarchy under "Users", so I created two new security groups: the first, employees, which contains all of my employees, and clients, which contains all of our clients. I restricted the accounts in AD so that the clients could only login to the FTP servers, and I setup Likewise-Open so that only accounts in the "clients" group could FTP in, and only accounts in the "employees" group could connect to the rest of the machines. Theoretically, all of the other machines were inside the network, and behind the firewall so there's no chance a client would be logging in anyway, but there's no sense not being thorough.

All was well and good until I went to set up these VPN boxes. The only fields I could fill out were "authentication server", which was the local domain controller, and domain. Well, both answers were straightforward enough, except that if I set it up that way, all of the "client" users would be able to log in and get a VPN connection to the inside network. I tested it, and was right. Not a good thing.

I read a few help files and some documents on the devices, and found a suggestion for limiting group access. It suggested pointing to the OU (which is sort of like a folder in LDAP terms) that contained the appropriate users via LDAP authentication, rather than a direct "Active Directory" connection. Erm. Okay?

In an aside, I knew that Active Directory was essentially a gussied up LDAP server, but I didn't (and don't) know all that much about LDAP. I have a really big LDAP book that I've skimmed part of, but to say I have a mastery of it would be laughable. I know that "DN" is distinguished name, and "OU" is "organizational unit", and there's some sort of hierarchy for when you are building DNs. Or something like that.

So I read, and researched, and played, and installed the ldap tools package, and researched some more. And made liberal use of the "ldapsearch" command, and found this post which taught me how to query the Active Directory server from the command line. And it was good.
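For the curious, a query along those lines might look like the following. The server name and bind account here are invented for illustration, and AD generally refuses anonymous searches, so you bind as a real account:

```shell
# Ask a domain controller for a user's distinguished name.
# Hostname and bind account are hypothetical; -W prompts for the password.
ldapsearch -x -H ldap://dc1.mydomain.tld \
    -D "msimmons@mydomain.tld" -W \
    -b "DC=mydomain,DC=tld" \
    "(sAMAccountName=msimmons)" dn
```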

I read some more, and played some more, and came to the sad realization that I couldn't make my VPN boxes authenticate against the AD LDAP unless I modified it to create an OU to hold the accounts that I wanted to allow access to.

When you're faced with a problem that you know little to nothing about, and you want to test an idea that you suspect might work, but might also break the entire infrastructure you've spent the last few months of your life building, it's a good idea to get a second opinion.

That's why I gave my good friend Ryan a call (he's on hiatus from his blog while he assembles a datacenter from a box of erector sets), who knows far more about AD than I do, and explained the situation. I said that the manual suggested pointing to an OU, and that my research suggested that I might want to create another OU for the accounts to live in, but I was concerned that there was some sort of "Windows Magic" that would be broken if I just started to move accounts to this new "OU" all willy-nilly.

Ryan suggested making two OUs, one for "internal" accounts, and one for "external" accounts. Then, and when he said this, I smacked myself in the forehead, he suggested making "test" accounts in the Users folder, verifying that they worked, and then moving them, and seeing if they still worked. Ryan is a brilliant guy, and I owe him a few more beers now :-)

So I followed his suggestions, created the OUs, created the test user, it worked fine, tested transitioning my account, and it worked fine, and then I tested moving the client FTP accounts. They worked fine. I had created the OUs, moved accounts, and nothing broke. Glorious.

Time to get the VPN machines to authenticate. I created a new domain, using LDAP authentication, and it asked for the server address and the base DN. The server address was just the IP, and I had gotten good enough to know that the base DN was going to be "OU=Internal,DC=mydomain,DC=TLD". I saved it, opened another window and tried to log in with my domain credentials. And failed.

I thought about it, and remembered from doing the command line LDAP queries that my Distinguished Name (DN) actually started with "CN=Matt Simmons" rather than msimmons@mydomain.tld. On a hunch, I tried logging in with a username of "Matt Simmons" (without the quotes) and my domain password. Light shone from the heavens, choirs of angels sang, and I got the VPN portal.

That, my friends, was my experience Thursday. I've learned a lot, and I feel a lot more confident about LDAP and Active Directory. And I'm able to continue to centralize user administration. It feels pretty good.

I'm really interested in what other people are doing with Windows servers and Active Directory. Are there tips that you've picked up on the job and want to share? I'm really open to suggestions on what I've been working on too. I know so little that almost everything I hear is new information. It's an exciting phase for me.

Thursday, November 20, 2008

Default vsftpd on CentOS is dumb

This is pretty ironic, since the vs stands for "very secure".

From the top of /etc/vsftpd/vsftpd.conf:

# Allow anonymous FTP? (Beware - allowed by default if you comment this out).

This is the default configuration. Now, one of a couple things is going on here. Either the comment is lying, or the configuration flag is lying, or I'm terribly confused about what these words mean.

I figured that I'd check to see which was the case:

msimmons@newcastle:~$ ftp ftp1.test
Connected to ftp1.test.
220 (vsFTPd 2.0.5)
Name (ftp1.test:msimmons): anonymous
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.

OKAY! This isn't good. In fact, it's a Bad Thing(tm).

Let's fix that. Ignoring the utterly stupid comment, I switch the flag to "NO" and restart the daemon. I try again, and I fail. Hooray. Let's see what else I can find.

I log in as a pretend user, and I authenticate fine. I 'cd' to .. and do a directory listing, and what do I find, but all of the various client accounts. Our clients are confidential, which means that them seeing each other would be a Bad Thing(tm). I dig into the config again, and find this gem:

# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot().
# (default follows)

Great, so apparently, I just need to find and flip the flag on chroot_local_user. Of course, it doesn't exist in the file. I create it, set it to "YES", restart the daemon, and things are working the way they should.
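For the record, the two settings that closed both holes, as they ended up in my /etc/vsftpd/vsftpd.conf:

```
# Forbid anonymous logins (the shipped default allows them)
anonymous_enable=NO

# Jail each local user into their home directory after login
chroot_local_user=YES
```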

The question in my mind is why an FTP daemon that bills itself as Very Secure comes with such an asinine configuration. There are occasions where chrooting the ftp users isn't called for, but there are relatively few occasions that require anonymous FTP access. I can't understand why they wouldn't have shipped a secure config and then made people unsecure it, as opposed to the way it is now. Really hard to believe.

I suppose it is possible that the distro maintainers are responsible, but it's still stupid.

Wednesday, November 19, 2008

Unix Rosetta Stone

I just found the Unix Rosetta Stone which seems simplified, but still probably handy if you've got a really heterogeneous network, or if an AIX machine should suddenly spring up in the middle of the server room.

Judging from the number of Delicious bookmarks it has, it's pretty well known, but I figured that I couldn't be the only person in the dark, and someone might get some info from it.

Wacky SSH Authorized Keys Tricks

You may have caught my blog post last week about setting up host to host ssh keys.

What you might not have caught was in the comments, where Ben Cotton mentioned a trick I hadn't heard of, namely specifying the allowed remote commands on the authorized_keys line. He said there were even more features available, just waiting in the manpage. I replied that if he wrote it up, I'd link to it.

Well, Ben put his money where his mouth is. He goes into nice detail and provides some good links and suggestions. This is really fascinating stuff, and I'm looking forward to using it in my own organization.

Therek over at Unix Sysadmin jumped into the fray, too. He's got three neat tricks for your ssh needs that you should really check out. I had no idea SSH key auth could be bent in these directions!

I've said it before, but I'll keep saying it. I love having visitors to my blog who enjoy what I write, and it really brings it home to interact with everyone like this. I couldn't ask for a better bunch of readers, though to be honest, I'm worried about Ben's longevity. I can't imagine what his cholesterol level must be ;-)

Ben, Therek, thank you both very much! I know my readers will really enjoy these articles. And as for everyone else, the same offer goes for you. If you've got something to share, let me know, I'll be happy to link to your blog entry or host it here if you've got the urge to write.

Tuesday, November 18, 2008

Great tool for network diagramming

I'm getting ready to implement a new Nagios monitoring system at our soon-to-be-production site, and I'm using Nagios v3 this time. Because I sort of figured out the configuration on my own last time, and it grew in a very organic (read: unplanned) way, the config is a mess. That is going to be different this time, thanks to Nagios 3 Enterprise Monitoring. It's not an intro guide to Nagios, that's for sure. The first chapter deals with what's new, and the second deals with streamlining the configuration for large installations. It's been very educational in teaching me how hostgroups and servicegroups can work together to really make life easier for configuring monitoring.

After reading this book pretty much cover to cover, I decided that I needed to logically map out the various relationships of my services, to figure out the inheritance policies (Nagios supports multiple inheritance in configuration objects).

I started looking for a good free diagramming tool, first on Windows then on Linux. Windows was hopeless. I found lots that looked promising, but ended up being shareware. I don't have MS Office Pro on my personal laptop, so I didn't have Visio handy, and I wasn't going to buy a piece of software when I was sure that something good and free existed.

Giving up, I booted into Linux to see if anything I didn't know about was in synaptic. Of course not. The best diagramming solution in Linux is Dia, and I'm sorry to say it, but it's ugly. Really ugly. I'll use it if that's the only thing available and I'm just looking for something quick, but I won't like it.

I kept looking, and finally out of desperation I did a search for online applications, and I hit the jackpot. I found Gliffy. It's a flash diagramming application with built in stencils for all sorts of things, and the ability to add your own clipart. It'll even export to Visio.

I was impressed. It's free for personal use up to 5 public diagrams. You can pay $5/month for unlimited drawings and removing the ads, and there are corporate versions that have built in collaboration. It's easy to use, and it helped me a lot. Here's a drawing of some of my nagios groups:

If you're in the market for a cross-platform diagramming solution, you could do a lot worse than Gliffy.

Monday, November 17, 2008

Building and designing systems: Is the cart pulling the horse?

There's a really interesting post over on Code Monkeyism about test driven development of code, and how it's related to the design of the space shuttle engines.

The short of it is that, as opposed to typical complex engine designs, where each individual part is tested independently, then in subassemblies, and then again when the unit is complete, the space shuttle engines were pretty much designed, assembled, and then tested. The incremental method has the advantage of weeding out all the really bad decisions at the small scale, so that when you finally put the pieces together, the result generally works rather than flying apart at high speed.

While Code Monkeyism is primarily centered on software development, the points that Stephan makes are readily applicable to us as infrastructure engineers, particularly in a growth phase where we're engineering new solutions and trying to implement them.

I'm as guilty of putting the cart in front of the horse as anyone. My debacle with the cluster was a prime example. When you're given a job to do, the equipment to do it with, and no time to learn, these kinds of things happen. Particularly when you're working with shoddy tools anyway.

I shouldn't have attempted to have the very first cluster I created be a production system. More due diligence in researching solutions was called for, and I probably would have learned beforehand that RHCS wasn't ready for prime time. I have learned from the experience, though, so all is not lost. Using the knowledge and experience I've gained, the next time will be more solid.

Is this something that everyone has to learn on the job, or was there a class or memo that I didn't get?

Friday, November 14, 2008

Host to host security with SSH Keys

I have a lot of Linux hosts: somewhere in the vicinity of 70 servers, all but 3 of which run some variant of Linux. Lots of them have to communicate seamlessly through cronjobs and monitoring daemons. To pull this off, I've implemented SSH key authentication between the applicable accounts. The method is pretty easy.

Check the ~/.ssh directory for the user you want to ssh as. There's probably a "known_hosts" file, which keeps track of the machines that user has contacted previously, and there are probably id_dsa and files. These are the private and public keys of the user, respectively. You might instead see similar files with "rsa" instead of "dsa"; those are keys created with a different encryption algorithm. See more information here.

We have the keys now, so what we want to do is make the remote machine aware of them, so that our account on the source machine, which has the private key, can connect without authenticating with a password. To do this, we install the public key (the file) in the ~/.ssh/authorized_keys of the remote account we want to connect to, on the remote host. So, we have

Machine A:
User: msimmons

Machine B:
User: msimmons

machineA$ cat ~/.ssh/
[text output]

machineB$ vi ~/.ssh/authorized_keys
[insert text output from machineA]

Ensure that the authorized_keys file (and the ~/.ssh directory itself) is not group- or world-writable, or the ssh daemon will probably refuse the connection. It should also be noted that your sshd config (probably /etc/ssh/sshd_config) needs to be set up to allow key-based authentication. The manpage should help you there.
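The whole sequence can be dry-run locally. This sketch uses a scratch directory and an empty passphrase purely for demonstration; on real machines the files live in ~/.ssh on two different hosts:

```shell
# Demonstrate the key-setup steps in a scratch directory.
# Paths are illustrative; in real life, step 1 happens on machine A
# and steps 2-3 happen on machine B.
demo=$(mktemp -d)

# 1. Generate a key pair (empty passphrase here for the demo only)
ssh-keygen -q -t rsa -b 2048 -f "$demo/id_rsa" -N ''

# 2. Append the public key to the remote account's authorized_keys
cat "$demo/" >> "$demo/authorized_keys"

# 3. Tighten permissions so sshd will accept the file
chmod 600 "$demo/authorized_keys"
```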

At this point, you should be able to connect from one account to the other without a password. This allows you to use rsync to transfer things automatically via cron. It would look a bit like this:

machineA$ rsync -e ssh -av /home/msimmons/myDirectory/ msimmons@machineB:/home/msimmons/myDirectory/

Read the manpage for (many) more rsync options.

There is a weakness to this method, though. Anyone who obtains a copy of the private key (the file on machineA called id_dsa) can pretend to be you, and authenticate as you to machineB (or any other machine that has your public key listed in its authorized_keys). This is potentially a very bad thing, particularly if you keep your private key on your laptop and the laptop gets stolen. You wouldn't want a thief to get their hands on your private key and compromise the rest of your network. So how do you keep passwordless access without letting anyone who copies your private key use it? The answer is to put a passphrase on your private key.

Through proper use of the ssh-agent and ssh-add commands, you can set up passwordless communication from one machine to another. I could explain the common usage of these, but it would just be duplicating this fine effort from Brian Hatch: SSH and ssh-agent. He talks about setting up ssh-agent and ssh-add, but if you're like me, you've already got existing SSH keys lying around without passphrases. The answer to that is simply to run ssh-keygen -f [keyfile] -p and give the key a passphrase.
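The agent dance itself is short. This is a minimal sketch; the ssh-add line is commented out because it prompts interactively, and the key path is illustrative:

```shell
# Start an agent; eval imports SSH_AUTH_SOCK and SSH_AGENT_PID into
# this shell so that ssh and ssh-add can find the agent.
eval "$(ssh-agent -s)" > /dev/null

# These two variables are what the agent-aware tools look for:
echo "socket: $SSH_AUTH_SOCK"
echo "pid:    $SSH_AGENT_PID"

# Load a passphrase-protected key (prompts once; path illustrative):
#   ssh-add ~/.ssh/id_dsa

# Kill the demo agent when finished
ssh-agent -k > /dev/null
```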

Now that you've got a working secure key and a way of not having to type your passphrase every time, let's figure out how to get your servers to take advantage of the same technique. At the very least, you're going to have to type the user's passphrase once, either the first time you want to connect or (more likely) when the machine boots up. That's not to say that you'll need a password to boot the server, just that before your cron jobs run, you'll need to start the ssh-agent.

Once you've started the ssh agent on the machine and added the key (per the instructions above), how do we keep that information static? Well, remember those variables that ssh-agent set up to tell 'ssh' the socket and PID to talk to the agent with? It turns out that you can put those (and any other variables you need to be static and universal) at the top of the crontab:

msimmons@newcastle:~$ crontab -l
SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXX/agent.12345
SSH_AGENT_PID=12345
48 10 * * * ssh root@testserver.mydomain uptime > ~/uptime.txt 2>&1

This will allow any of the scripts being called by the cron daemon to access the variables SSH_AUTH_SOCK and SSH_AGENT_PID, which in turn allows your scripts to ssh without using the passphrase. All that is required is updating the crontab when you reboot the machine and/or restart the agent.

On my desktop, since I ssh a lot, I add the same variables to my .profile in my home directory so that I only need to type in the passphrase once. If you find yourself connecting to other machines frequently from the server, you might want to do the same thing.

I'm sure I messed up the explanation in some parts, so if you have any questions, please don't be afraid to ask in the comments. I hope this helps someone set up their key-based authentication in a more secure manner.

See the followup to this article!

Datacenter that could belong to S.P.E.C.T.R.E.

This datacenter is dripping with "evil lair" vibes.

The World's Most Super-Designed Data Center

Tips for an initial buildout

St Aardvark the Carpeted (best. name. EVER.) has been working on building out a new data site for one of the companies he works for. He's got some great tips on things to remember and take into account before you do one of these yourself.

I know the installers that built my most recent rack at the colocation really appreciated the diagram that I made to show what was going where in the rack. I also prepared spreadsheets listing all the cables and where they went. The colocation also needed serial numbers for all the equipment I was bringing in, which is information that's good for me to have on hand anyway.

Has anyone else got any tips for a one-time build out that would help?

Tuesday, November 11, 2008

Sysadmin Extorts company for better severance package

I just read this over at TaoSecurity. Apparently a recently laid-off sysadmin was arrested because he threatened to bring down the IT infrastructure if his severance package wasn't improved.

This isn't the first disgruntled sysadmin story we've seen this year. Please, I beg of you, spare the servers in your rampage. ;-)

Where to put your system monitoring

I'm getting ready to implement my new Nagios monitoring system, and I've been researching best practices.

My current setup is that I have 3 "data sites", which I consider to be physical locations where servers are kept: the primary site, the backup site, and the soon-to-be-primary site. When the new site becomes primary, the current primary will become the backup, and the backup site will go away. Here's how they're set up:

They are geographically diverse, and as you can see, there is limited bandwidth between them.

Nagios is currently set up at the Backup site, and has remained unchanged for the most part since the backup site was the primary (and only) data site. This is not ideal, for a bunch of reasons.

Because of the way Nagios queries things, it is at the mercy of the networking devices between it and the target. If the router in-between goes down, then Nagios sees everything beyond that router as down. You can alleviate the most annoying side effect (dozens or hundreds of alerts) by assigning things beyond the router to be "children" of the router, in which case Nagios will only let you know that the parent is unavailable.
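In configuration terms, that parent/child relationship is just a directive on the host object. A hypothetical pair of definitions (host names and addresses invented) might look like:

```
define host{
    use         generic-host
    host_name   site-router
    address
    }

define host{
    use         generic-host
    host_name   site-fileserver
    address
    parents     site-router    ; alerts collapse to "site-router is down"
    }
```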

Aside from not having status checking on entire segments of our network in the event of an outage, what if the segment with no network access hosts your mail server? I've had this happen before, and it's disturbing to suddenly receive 2 hours worth of 'down' notifications at 3am. Not a good thing.

To circumvent this type of behavior, I'm going to be running one Nagios instance at each location:

In the event that one of my sites loses network access, I've still got another instance that can send out alerts.
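One way to arrange that (a sketch; the names, address, and thresholds are invented for illustration) is to have each site's Nagios watch its own local hosts, plus a ping check on the peer site's monitor, so each instance can tell you when the other disappears:

```
# At Site A: keep an eye on Site B's Nagios box
define host{
    use        generic-host
    host_name  nagios-site-b
    address    198.51.100.10
}

define service{
    use                  generic-service
    host_name            nagios-site-b
    service_description  PING
    check_command        check_ping!200.0,20%!500.0,60%
}
```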

If you monitor, how do you guys arrange your monitoring? If you don't, any plans to start?

Monday, November 10, 2008

Encrypted Filesystems out of the box on CentOS

Like many people that have multiple locations, I sometimes have to get in my car and sneaker-net a hard drive to another facility. Sometimes I ship them via FedEx. In any event, whenever I take a hard drive out of my business, I run the risk of becoming another statistic. These days, it seems that a month doesn't pass where some high profile data has been breached. It happens frequently enough that there's a blog devoted to it.

Anyway, I've been looking for ways to encrypt the drives I transport. It looks like the "best" way is to use TrueCrypt for encrypting the entire device. It's cross platform (Windows, MacOS, and Linux), has a great interface, and is pretty easy to script.

My problem is that it is a comparative pain in the butt to get running on my platform of choice (CentOS/RHEL5). If you look, the only supported Linux versions are Ubuntu and SLES. Yes, I can compile it from source for testing, but I don't want to be manually recompiling software on production servers. I suppose I could compile it once and package an RPM if I had the time and knowledge (and the time to acquire the knowledge). Instead, I decided that it wasn't the solution for me unless it was the only solution available. So I kept searching.

Today I chanced upon what I think is a great solution. Using dm-crypt along with the built-in loop devices, it's possible to encrypt a device without using any non-native software.

In the (hopefully) unlikely event that the link I pointed to goes away, here is the (much abridged) process:

If you're using a file rather than a device (to keep an encrypted volume sitting on an otherwise unencrypted filesystem), create the file first. This 'dd' invocation creates a sparse 8 GB file without actually writing any data:

dd of=/path/to/secretfs bs=1G count=0 seek=8

Setup the loop to point to your file/device:
losetup /dev/loop0 /path/to/secretfs

Create the encrypted volume with cryptsetup:
cryptsetup -y create secretfs /dev/loop0

Create the filesystem on the device:
mkfs.ext3 /dev/mapper/secretfs

Mount the encrypted filesystem:
mount /dev/mapper/secretfs /mnt/cryptofs/secretfs

And now you have access.

To remove the filesystem, perform the last few steps in reverse:
umount /mnt/cryptofs/secretfs
cryptsetup remove secretfs
losetup -d /dev/loop0

Whenever you want to remount the device, just follow all the steps above that don't use dd or create filesystems.
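Spelled out, a remount looks like this (same paths as above; you'll need to run these as root):

```
# Re-attach the loop device to the container file
losetup /dev/loop0 /path/to/secretfs

# Re-open the encrypted mapping (you'll be prompted for the passphrase)
cryptsetup create secretfs /dev/loop0

# Mount the filesystem that already lives inside
mount /dev/mapper/secretfs /mnt/cryptofs/secretfs
```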

There you go, an easy way to have encrypted volumes on your CentOS/RHEL machines.

Saturday, November 8, 2008

Best. Bug. Ever.

The other day we talked about mobile devices for administrators. Today I read about a particularly amazing bug on the Google Android platform that you might be interested in seeing.

According to that article, certain firmwares (1.0 TC4-RC29 and below) spawn a hidden terminal window that accepts all keystrokes as commands. *ALL* keystrokes.

The person who discovered the bug was in the middle of explaining to his girlfriend why he had to reboot his phone when his phone rebooted again. Because he typed the word reboot. Good thing he wasn't explaining the various uses of the 'rm' command.

Now THAT is a bug. By the way, the best workaround (aside from updating the firmware) is to edit init.rc and take out the lines at the end that spawn the shell.

Friday, November 7, 2008

Justify the existence of IT (from Slashdot)

If you've ever wondered how to value your time, or justify your hours, there's a lot of input going on at Slashdot.

I've been lucky to escape this particular issue, but I know there are a lot of people who have to constantly fight for what little resources they're given. Maybe this discussion can help someone out there.

The Creative Admin

Curious. Intelligent. Technical. Detail oriented. Stubborn.

Are these words that describe you, or do they describe the traits required for the position you hold? Is there a difference, after a while?

What if, in addition to those, you had other traits?

Imaginative. Creative. Inventive.

Are these traits (and others like them) that you think would be useful in your job? Would being inspired by your creative side yield positive or negative results? As Tom Limoncelli asked the other day, do you really want your electricians getting creative?

Does your position inspire, require, or even permit you to be creative? It might not, but I know that it can. Art isn't just something that lives in a museum, or hangs on a wall. Art is sometimes intrinsic to science (great example), but art can also happen when science is transcended.

Your work can be your creative outlet. There are flickr groups full of examples. Don't get burned out and lose your will to innovate. It might seem like administration is work-by-rote sometimes, but don't lose sight of the bigger picture, and stay creative.

Thursday, November 6, 2008

WPA TKIP Cracked

Well, hell.

I caught on Slashdot today that WPA using TKIP has been compromised. At the moment, only communication from the router to the host is vulnerable, but I can't imagine that it will stay that way for long.

I'm really considering moving my wireless APs to the external network, as opposed to the internal access they have now. That would require anyone on wireless to use a VPN, which has superior encryption anyway, I believe.

Any thoughts?

Wednesday, November 5, 2008

Stupid Unix Tricks (from Slashdot)

I figure everyone that reads this blog knows about Slashdot, but in case you missed it, here's an entire thread of people contributing their Stupid but Useful Unix Tricks. I figure there's enough of Unix out there that we could all learn some more.

The balance of security and usability

You read it everywhere, from all the security analysts. Security is a process, not a goal. As the implementers and administrators of the control mechanisms, we need to be especially cognizant of that concept.

If you're anything like me, you tend to work on things in waves, or spurts. I'll go for a while, concentrating on one thing for as long as it takes to achieve my goal, then move to the next (probably unrelated) task. When it comes to improving the security of a particular segment of the infrastructure, though, if we tarry too long in one spot, we run the risk of becoming too fervent in our decisions and winding up draconian.

Rather than becoming like Mordac, we need to view ourselves as enablers of technology. There is a balance to be struck, and that's the hard part. The line is sometimes fuzzy between information security and infrastructure usability. Where you draw it will depend on the importance of the data you are protecting and the organization you're a part of.

Where do you draw that line in your organization? Do you get to decide, or are you at the mercy of policy makers who ignore an entire side of the equation?

Tuesday, November 4, 2008

Scalability in the face of the incoming onslaught

As you may know, today is election day in the United States. If you're a US citizen and registered to vote, go do it.

Because this is a big election, lots of political sites are going to feel the squeeze. To counter this, many of them are beefing up their facilities ahead of time. High Scalability is taking a look at techniques to improve response.

If you're not familiar with High Scalability (the site), you should check it out. They frequently link to very interesting studies in reliability and to techniques that very large sites (Google, Facebook, etc.) use to manage and balance load.

Sysadmin Mobile Devices

Manage any network of sufficient complexity, and eventually you'll want to be alerted when something breaks. I've mentioned this in general, but not all devices are created equal. What should you look for?

In my devices, I need a full qwerty keyboard. I really do, even if I'm only replying to email. I've seen people texting with a number pad, but my brain is hardwired to qwerty now. Of course, if you were hardwired to Dvorak like some people are, you might feel differently.

By far, the most important service my phone provides me is email. Since we don't have a Blackberry Enterprise Server, I have a rule on my corporate mail that forwards email to my blackberry. It's actually a combination of rules, crafted to get it to work the way I want.

Since I subscribe to all manner of lists and newsletters, those things get sent out around the clock, and I don't want to be woken up at 3am because someone on the Likewise Open list can't authenticate their AIX machine. For this reason, I have my mail rules set up to forward everything (excluding some high-traffic lists) to the blackberry, and then from 10pm till 8am, only emails originating from our externally-facing domain are forwarded. Since all of my internal cron job notifications are sent from the imaginary domain we use for internal resolution, they don't get forwarded overnight; Nagios, on the other hand, is specifically set up to send from an external-domain account, so I get its alerts at all hours. This ensures that my bosses can get a hold of me, and that I'm aware of any critical weirdness happening at any hour of the day or night.
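For what it's worth, the decision logic boils down to something like this sketch. The real thing lives in my mail client's rules rather than a script, and the domains and list addresses here are invented stand-ins:

```shell
# Hypothetical sketch of the forwarding decision; returns 0 to forward.
# $1 = sender address, $2 = current hour (0-23)
should_forward() {
    sender="$1"
    hour="$2"

    # High-traffic lists never get forwarded to the phone
    case "$sender" in
        *@lists.highvolume.example) return 1 ;;
    esac

    # Overnight (10pm-8am): only the externally-facing domain gets through
    if [ "$hour" -ge 22 ] || [ "$hour" -lt 8 ]; then
        case "$sender" in
            *@external.example) return 0 ;;
            *)                  return 1 ;;
        esac
    fi

    # Daytime: forward everything else
    return 0
}
```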

Also important to me is an SSH client. I don't make full use of mine yet, for reasons I'll explain, but I can administer my firewalls from outside with my phone. I have heard, and someone please correct me if I'm wrong, that if your corporation has a Blackberry Enterprise Server, you can use that connection to reach internal hosts. I don't know that I'm going to be running my own BBES anytime soon, but that's a strong argument for it. There appear to be lots of remote desktop solutions available too.

All in all, my blackberry provides me with sufficient access to resources. I wish there was a VPN solution for it that I was convinced would work with my Netscreen solution, but I suppose you can't have everything.

Of course, I'm not suggesting that the blackberry is the bee's knees, as it were. I'm sure there are better solutions out there. I'd like to think the iPhone would be amazing, but I don't know how typing commands on its keyboard would go. I doubt the spelling auto-correct would like some of the unix commands I'd be typing.

What do you use for a mobile device? Can you do any remote administration through it, or is it just for communication, and you fall back to your laptop in emergencies?