Friday, May 29, 2009

Outstanding new sysadmin resource

I always like posting cool sites and resources that I find, and man, today's is no exception.

I'm willing to bet a bunch of you have already heard of it and are probably participating. It just got out of beta the other day, and it's live for new members. It's called Server Fault, created by the same people who did Stack Overflow.

The general idea is that a sysadmin asks the group a question. People answer the question in the thread, and the question (and answers) get voted up and down. Think of it like Reddit with a signal-to-noise ratio of infinity.

I've been active on there, checking it out, and it's frustrating in the beginning (you can't do anything but ask and answer questions, and in your answers you can't even include links). As your questions and answers get voted up, you receive "reputation", and as your reputation improves, you get more abilities on the site. Like an RPG or something, I guess. Check the FAQ for more details.

The absolute best part is that you can learn more and more and more, all the time. I can't tell you how many questions I've seen where I thought "I've always sort of wondered that, too", and I just never took the time to research it. *click*

It's outstanding, and as a technical resource, probably unparalleled in the sysadmin world. Let me know what you think of it (and post a link to your account, if you'd like. Mine is here).

Thursday, May 28, 2009

Software Deployment in Windows, courtesy of Admin Arsenal

One of the blogs that I read frequently is Admin Arsenal. To be honest, they're really the only commercial/corporate blog that I follow, because it's about many aspects of Windows administration, and doesn't just focus on their product.

Shawn Anderson, one of the guys who works there (and a frequent reader here on Standalone Sysadmin), took me up on my offer of hosting guest blog spots, and asked if I would host something written by the Admin Arsenal staff. I agreed, under the condition that the entry wasn't a commercial disguised as a blog entry. Of course their product is mentioned in this entry, but I don't feel that it is over the top or out of place.

The topic we discussed was remote software installation on Windows, something that has always seemed like black magic to me, someone who has no Windows background, and I figured it would be something that many of you would be interested in as well.

In the interest of full disclosure, I should say that I am not getting paid or reimbursed in any way for this blog entry. If you have opinions about my allowing companies (even companies with blogs that I enjoy) to submit guest blogs, say so in the comments. In the end, this is ultimately my blog, but I'm not so stubborn as to not listen to wise counsel.

Let me just reiterate here that anyone who has a topic of interest and wants to do a guest blog is welcome to drop me a line. The chances are great that I'll be very happy to host your work, and that many people would love to read it.

Without further ado, here's the guest entry from Admin Arsenal!





Photo by "gotplaid?"


Ren McCormack says that Ecclesiastes assures us that "there is a time for every purpose under heaven. A time to laugh (upgrading to Windows 7), a time to weep (working with Vista), a time to mourn (saying goodbye to XP) and there is a time to dance." If you haven't seen Footloose then you have a homework assignment. Go rent it. Now.


OK, '80s movie nostalgia aside, let's talk about the "dance": deployment. Almost every system admin knows the pain of having to deploy software to dozens or hundreds or even thousands of systems. Purchasing deployment tools can get very pricey, and learning how to use the new tools can be overwhelming, especially if you are new to the world of Software Deployment. Here are a few tips to help you in your Software Deployment needs.

Group Policy

Deploying software via Group Policy is relatively easy and has some serious benefits. If you have software that needs to "follow" a user then Group Policy is the way to go. As particular users move from computer to computer you can be certain that the individual software needed is automatically installed when the user logs on. A downside to this approach is that any application you wish to install via Group Policy really needs to be in the form of a Windows Installer package (MSI, MSP, MSU, etc). You can still deploy non-Windows Installer applications but you need to create a ZAP file and you lose most of the functionality (such as having the software follow a user). It's also difficult to get that quick installation performed and verified. Generally speaking, you're going to wait a little while for your deployment to complete.
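For reference, the ZAP approach mentioned above is just an INI-style text file that points Group Policy at a setup command. A minimal sketch (the share path and product name here are made up):

```ini
[Application]
; Required: the name shown to users in Add/Remove Programs
FriendlyName = "Example App 1.0"
; Required: the command Group Policy runs; quote the UNC path to the installer
SetupCommand = "\\fileserver\deploy\exampleapp\setup.exe" /S
```

Drop that in the same share as the installer and publish it like any other package; just remember you only get publish (not assign), and none of the follow-the-user behavior.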

SMS / SCCM

If you are a licensed user of SMS / SCCM then you get the excellent SMS Installer application. SMS Installer is basically version 5 of the old Wise Installer. With SMS Installer you can create custom packages or combine multiple applications into one deployment. You can take a "before" snapshot of your computer, install an application, customize that application, and then take an "after" snapshot. The changes that comprise the application are detected, and the necessary files, registry modifications, INI changes, etc., are "packaged" up into a nice EXE. Using this method you ultimately have excellent control over how applications are installed. A key strength of SMS Installer shows when you need to deploy software that does not offer "Silent" or "Quiet" installations.

A downside to using SMS is the cost and complexity. Site servers. Distribution Point servers. Advertisement creation... it's a whole production.

Admin Arsenal

Admin Arsenal provides a quick and easy way to deploy software. Once you provide the installation media and the appropriate command line arguments, the deployment is ready to begin. The strength is the ease and speed of deployment. No extra servers are needed. No need to repackage existing installations. A downside to Admin Arsenal is that if the application you want to deploy does not have the ability to run in silent or quiet mode (a limitation occasionally found in freeware or shareware), then you need to take a few extra steps to deploy it.

Most applications nowadays allow for silent or quiet installations. If your deployment file ends in .MSI, .MSU, or .MSP, then you know the silent option is available. Most files that end in .EXE allow for a silent installation.
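For reference, the usual silent switches look like this. The package names are examples, and the exact switch depends on which toolkit built the installer, so check `setup.exe /?` when in doubt:

```bat
:: Windows Installer files take msiexec's quiet switches
msiexec /i app.msi /qn /norestart
msiexec /p hotfix.msp /qn /norestart

:: .MSU updates go through wusa
wusa update.msu /quiet /norestart

:: NSIS-built EXEs usually take /S; Inno Setup installers use /VERYSILENT
setup.exe /S
setup.exe /VERYSILENT
```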

Refer to Adam's excellent blog entry called the 5 Commandments of Software Deployment.

Disclaimer: I currently work for Admin Arsenal, so my objectivity can and should be taken into consideration. There are many solutions commercially available for deploying software. Take a dip in the pool. Find what works for you. If the software has a trial period, put it to the test. There are solutions for just about every need and budget. Feel free to shoot questions to me about your needs or current deployment headaches.

Wednesday, May 27, 2009

Followup to slow SAN speed

I mentioned this morning that I was having slow SAN performance. I said that I'd post an update if I figured out what was wrong, and I did.


EMC AX4-5s with a single processor have a known issue, apparently: since there's only one storage controller on the unit, it intentionally disables write cache. Oops. I didn't even have to troubleshoot with the engineer. It was pretty much "Do you only have one processor in this?" "Yes." "Oh, well, there's your problem right there."

So yeah, if you're thinking about an EMC AX4-5, make sure to pony up the extra cash...err..cache..err..whatever.

SAN performance issues with the new install

So, you would think that 12 spindles would be...you know...fast.

Even if you took 12 spindles, made them into a RAID 10 array, they'd still be fast.

That's what I'd think, anyway. It turns out, my new EMC AX4 is having a bit of a problem. None of the machines hooked to it can reach > 30MB/s (which is 240Mb/s in bandwidth-talk). I haven't been able to determine the bottleneck yet, either.

I've ruled out the switch. The equipment is nearly identical to that in the existing primary site (using SATA in the backup rather than SAS, but still, 30MB/s?). It isn't the cables. Port speeds read correctly (4Gb from the storage array, 2Gb from the servers).

The main difference is that the storage unit only has one processor, but I can't bring myself to believe that one processor can't push/pull over 30MB/s. I originally had it arranged as RAID 6, and I thought that maybe the two checksum computations were too much for it, but now with RAID 10, I'm seeing the same speed, so I don't think the processor is the bottleneck.
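When I'm chasing numbers like this, a quick dd write test at least puts a concrete floor under the measurement. A rough sketch, not a proper benchmark; the path is an example (point TESTFILE at the SAN-backed mount), and conv=fsync makes dd flush before reporting a rate. On a real mount, oflag=direct would bypass the page cache entirely:

```shell
#!/bin/sh
# Rough sequential-write test. TESTFILE is an example path --
# point it at a file on the filesystem you are measuring.
TESTFILE="${TESTFILE:-/tmp/ddtest.bin}"

# Write 64MB of zeros; conv=fsync forces the data to disk before
# dd prints its throughput figure, so the rate isn't just RAM.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync

# Clean up the test file.
rm -f "$TESTFILE"
```

Run it a few times and compare against what the array claims it can do.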

I'm just plain confused. This morning I'm going to be calling EMC. I've got a case open with their tech support, so hopefully I'll be able to get to the bottom of it. If it's something general, I'll make sure to write about it. If it's the lack of a 2nd storage processor, I'll make sure to complain about it. Either way, fun.

Tuesday, May 26, 2009

Monitor Dell Warranty

A post over at Everything Sysadmin pointed me to an excellent Nagios plugin: Monitor Dell Warranty Expiration. Great idea!

That was a well needed break

As of last Wednesday, I had gone 31 days with only one day off in the preceding month. I was burned out. Fortunately, this coincided with my friends coming from Ohio to visit my wife and me, so this gave me an excellent opportunity to deman^H^H^H^H^Hrequest time off, and they gave it to me.

For the past 5 days or so, I've been off and running around the NY/NJ area, doing all sorts of touristy things with our friends, and just exploring NYC like I haven't had the chance to before. We spent an entire day at the American Museum of Natural History, and I finally got to go to the Hayden Planetarium to see Cosmic Collisions. It was absolutely worth it, and I'm so glad I picked up a membership on my first visit.

I finally got to go visit Liberty Island and see the statue. It was great, very impressive. After July 4th, they're going to be opening up the crown, but until then, you can only go to the top of the pedestal. It was still a great view.

Ate lots of great food, had a good time with my friends, and today I'm back at work. We're switching over to a new backup site this week. Hopefully things will go smoother than they have the week prior to my vacation, but we'll see, and I'll write about it.

So did I miss anything good?

Friday, May 15, 2009

Ouch.

More on the security front, flight simulator site Avsim had its entire datastore wiped out by a cracker.

That reminds me, I've got to change my tapes.

Security is a process and not plug&play

I got a SANS pamphlet in the mail today, which makes me feel guilty. Not really guilty, as in "I should go but I'm not" (even though I should, and I'm not), but because in terms of IT security, I've sort of been in the "Oh, I'm sure that'll be fine while I'm doing all of this other stuff" mode. It's not a good practice to be in, but I don't see any way to give IT security the attention it deserves when all (and I mean all) of my free time is spent building new infrastructure and stopping the existing infrastructure from falling apart. And if you don't believe me,

msimmons@newcastle:~$ ps aux | grep Eterm | wc -l
21

That's not counting the VMs that are installing right now, or the VM diagram I'm using to keep track of which physical machine will be getting what virtual machine.

I cringe whenever I think about this phrase, but I don't have enough time to worry about security. The automatic response to that (even from/to myself) is "do you have enough time to clean up a break-in?". I'm not monitoring logs like I want to, and I don't even have enough time to set up a log monitoring system to do it for me. I'm hoping that in a few weeks things will relax and I can start putting emphasis where it should be, but it isn't right now. I really need more staff to give proper types of attention to security, various Oracle, Postgres, and MySQL databases, site buildouts, asset management, user support, and backups, but I don't have it, so I find myself juggling all of those various tasks, and my stress level is directly related to how many balls are in the air at one time.

Looking through the SANS booklet, I see all kinds of classes that I'd love to take (the Network PenTest / Ethical Hacking class, for one) but I can't even foresee enough free time to take the class, let alone utilize it.

Have any of you ever been to a SANS conference and received training? Was it worth it? How did you get to use it back at your job? Cheer me up and regale me with stories of success from conference training ;-)

Thursday, May 14, 2009

Happy 1st Blogiversary!

I'm very, very happy to announce that today is Standalone Sysadmin's first Blogiversary! That's right, last year at this time, I posted my Introduction and Welcome. Little did I know that less than a year later, this blog would have over 500 subscribers, have hit the Slashdot front page, and gained a loyal following of the best readers a blogger could ever ask for.

Thank you all so much. It wouldn't be possible, or even worth doing, if people out there didn't visit, read, and write comments, emails, and twitter back. You all rock.

So since it's a celebration of sorts today, let us not go without presents!

A while back (ok, like 6 months ago), I created a survey that I called the 2008 IT Admin Job (dis)Satisfaction Survey. Over the course of two weeks, 334 of you took that survey to let other people know what it was like to be you. It was an amazing amount of information. Even as the results were still coming in, I could tell that there was a lot of pent-up frustration. I would have liked to compile the results before now, but the amount of information was so massive, and I had no experience with anything like this, that I didn't know how to approach it.

I believe I've finally got a handle on a roughly usable format. I've created an overview of the survey (.doc format) with comments from me, and because I firmly believe in the open sharing of information, I am providing a raw CSV file of the responses (with identifying information stripped) so that you can go do crazy things with the data and come up with new interesting combinations on your own. Knock yourselves out, but if you find something interesting, make sure to drop me a line, and if you publish your own findings, just point a link back to this site, if you would. (Many thanks to NoNeck for hosting these files!)

In a slightly different twist: I don't know if you've heard, but the Amazon Kindle store has been promising to make blogs available via subscription for a while. I signed up for information a while back, and just got it today.

The short story is that you can now subscribe to Standalone Sysadmin on your Kindle. The price is currently set at $2/month, not because I particularly think that anyone will pay that (or even that it's worth that), but because I haven't figured out how to lower the price yet :-)

In the meantime, if you have a Kindle and you absolutely MUST read my updates, feel free to subscribe; I get something like 30% of the cost. I won't be insulted if you pass, and I'll update whenever I figure out how to lower the price.

So that's that. Year 1 in the bag, with hopefully many more to come. Thanks again everyone, I can't tell you how much I appreciate each and every one of you who comes here. Thank you and take care.

--Matt

Saturday, May 9, 2009

Obtaining the WWPN of fibre channel HBAs

Last week, I installed all of the equipment in our beta site. Due to issues with the power at my office, I wasn't able to touch the blades for the month or so before they went in the rack, and the last time they were powered on, I didn't have the storage array to make sure that they could talk. Now I'm in the position of needing to match up which connected HBAs go to which machines. 

Researching, I found this entry on a blog called "sysdigg". Sysdigg looks like an interesting blog until you go to the current site, where it's mostly spam and ads. I'm not sure what happened, but at least back in 2007 it looked informative. 

Anyway, the key lies in having a modern kernel and sysfs. There's a class called fc_host with the appropriate info: 

> cat /sys/class/fc_host/host*/port_name

This is more documentation for myself, since I always forget, but maybe it will help someone else too. 
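With several blades to sort out, a small loop over that same sysfs class speeds up the matching. This is just my own convenience wrapper, nothing official; on a box with no fibre channel it tells you so and moves on:

```shell
#!/bin/sh
# Print each FC HBA's WWPN next to its host number, so array-side
# logins can be matched back to machines.
for p in /sys/class/fc_host/host*/port_name; do
    # If the glob didn't match, $p is the literal pattern; bail out.
    [ -e "$p" ] || { echo "no fc_host entries found"; break; }
    host=$(basename "$(dirname "$p")")
    printf '%s: %s\n' "$host" "$(cat "$p")"
done
```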

Friday, May 8, 2009

Quick blurb: Nagios is forked

Jack Hughes over at The Tech Teapot posted today that Nagios has been forked. Apparently the lead developer (and sole person with commit access) has been very busy and not committing updates. This caused some self-described protagonists to launch "Icinga".

It's going to be very interesting to see where this goes. I personally hope that Nagios itself picks back up. My Nagios 3 installation is great, and I love it. I've had nothing but good things to say about it. If it doesn't pick back up, I have to wonder how many people will just move to OpenNMS.

Thursday, May 7, 2009

NVidia's Tesla C1060 - The FM principle?

I've been hearing a lot about NVidia's new "super computer on your desk", but all I can really find is marketing talk. I was skeptical of the claims until I went to the site and looked at the specs. They really do look impressive. 240 cores, each running at almost 2GHz, with 4GB of dedicated memory. That's a lot of hardware, especially when you see some "super computers" on sale that have four of them in one machine. 

There's just one thing that I can't figure out...how does it work? According to what little documentation I can find, the system processor offloads work onto the GPUs, but I'm a little fuzzy on how that works. Since there are drivers, maybe they offload intensive work onto the card? I'm not sure, honestly. 

Does anyone have any experience with this? We do some large math here that might be handy to offload to a card if it works well enough. A price tag of $1300 is a bit much for me to experiment with at the moment, though. 

Wednesday, May 6, 2009

Learning

I've written a little bit about this before, but I wanted to expound on it. Since I'm at the new beta site today, I thought I'd run this instead of the normal blog entry.



Whether or not you endorse the tenets of natural selection, you must admit one thing: human beings are constantly evolving, and not just on a species level. I mean you. Me. We are evolving. We’re changing – different today than yesterday, and the day before. We have new knowledge and experiences. Inherently, we are not the same. Hopefully we have become better, more able to face the challenges of today and tomorrow, and by expanding ourselves, we facilitate further alterations to our beings.

How and why does this happen? Part of it is accidental. New experiences build neural connections in our brains, and we grow. We eventually stop hitting the same pothole on the way to work, because we learn to avoid it. This way we can discover exciting new potholes.

Often the process is intentional. Reading books and manuals, attending classes, building test beds, and implementing new technology forces our brains into overdrive, maps new pathways, and increases the speed by which we will learn in the future.

As IT administrators, we deal with a rapidly changing world that demands our constant improvement. Blink your eyes and things increase by an order of magnitude. Technologies that were up-and-coming fade into obscurity, and too often we’re responsible for managing every step in that lifecycle. How do we keep up with it?

It’s easy to give up - to not try. It is almost tempting, actually, to stick with what you know rather than to discover, investigate, and implement newer (and possibly better) technologies, but I urge you to reconsider that option. The less flexible you are, the harder it will be when some inevitable change occurs and leaves you standing in the dust.

In this column, we’re going to examine the types of learning resources that people use in order to improve themselves and their minds. These resources are at your disposal as well. Many of them are inside you as you read this, waiting to be unleashed on unsuspecting information throughout the world. I ask only that you ignore the complacency which gnaws at your soul, holding you back while others move ahead.

Traditional Learning

Throughout thousands of years of human history, advancing in knowledge came from scholarship under an already learned master. Modern society has extended this concept into mass production. Where a master once had a few pupils in apprenticeship, teachers today face an onslaught of students vying for time and attention. Nevertheless, class work can be invaluable in acquiring knowledge, depending on the class (and the professor).

If you are young enough that you are still in school full time, my advice is that you choose your classes wisely. Research the class by interviewing past students of the curriculum and the teacher, and make sure that you understand what the ex-pupil’s goals were for the class. Remember that a review could have been made through the vanilla-tinted lenses of “good enough” by someone who merely wanted to complete the class, as opposed to someone who was truly seeking knowledge. Interview people who share your goal of self-improvement.

If you are seeking continuing education classes, you may have an overwhelming number of choices, depending on the subject matter. Not being on any specific campus, finding people to interview may be more difficult, but it is by no means impossible. With the explosion of blogs, chances are excellent that someone on the internet has taken the class. Utilize your favorite search engine to find an ex-student and ask them what they thought. If you aren’t able to find past students, email any address you can find at the institution and attempt to get in touch with your would-be professor. They should have no issue discussing the curriculum with you.

Specific web sites are available for reviewing online classes, such as http://www.trainingreviews.com/. There may be a chance that your class has already been reviewed. If your goal is to become certified, you might check the certification homepage on about.com: http://certification.about.com/. Due diligence can save you money and time.

I speak from experience when I recommend that you research the curriculum of a class. A few years ago, I attended a database course which served as an introduction to Oracle 10g. If I had examined the syllabus further than I did, I would have realized that the instructor assumed a pre-existing knowledge of Oracle, which put me somewhat at a disadvantage, having none. On the other hand, I had a very positive experience while enrolled in the Cisco University for several semesters. It was very well recommended by several of my associates, and jumpstarted my experience by introducing me to several pieces of equipment I had theretofore not touched. Do your research and don’t waste your (or your company’s) money.

Exceptional Learning

As you probably realize, attending class is not the only way to acquire knowledge, even if it is the most traditional. Training comes in many shapes and sizes, much of it deliverable through the postal service or email. The training that costs you thousands of dollars can be reduced to hundreds (or less) by purchasing only the books which normally accompany the training class. This structured-but-open-ended method has been used by many people to pass certification exams, but I have qualms about it. My opinion is that the main benefit of the class is the experience contained within the instructor, and by robbing yourself of the student / teacher relationship, something intangible is lost.

I do not want to make it sound as if structured class learning is the only way, or even the best way. It is “a” way. Just as people learn differently, there are many different ways to learn. We’ve looked at instructor led and structured self-study, but there are more.

If you’re unfamiliar with the term “autodidact”, you’re not alone. An autodidact is an individual who takes the initiative to teach themselves, rather than go through the formal process of education and studying under a professor. Autodidacticism, as it is known, has a long history and includes such luminaries as Socrates, Benjamin Franklin, and Thomas Edison. Even Samuel Clemens once famously wrote as Mark Twain, “I have never let my schooling interfere with my education”. Indeed.

To some extent, I think many of us have tendencies such as these. We all learn things by doing and exploring on our own, but through my observations, I have obtained the belief that IT professionals have stronger tendencies than most in this regard. There are always exceptions, but we do generally seek out and explore new things. We tend to be xenophiles by nature, opening ourselves to new experiences and new ideas. When you combine this with the urge to plumb the depths of a subject, you get an autodidact.

If you have ever learned about a new subject, then absorbed that subject top to bottom in order to “own” it, to make it part of you, then you have the makings of an autodidact. If you haven’t, it is not too late to begin now. A great place to start is a subject that you’ve always been curious about but never gotten around to researching. Begin on the internet. Go to the library. Use magazines, books, research papers, and encyclopedias to make that subject your own. Truly grasp that subject, and revel in it.

Absorbing reams of information will absolutely grow new neural pathways, but in order to get neural superhighways, you’ve got to go the extra step. Start learning experientially, which means that instead of merely reading about the subject, you experience it. Find a museum. Go into the field. Reach out and contact people who are on the front lines of discovery. You get vacation days; use them. I’ve studied ancient Egyptology since I was a child, but what I learned when I actually visited and toured privately with an Egyptologist put my reading to shame. Experience is the ultimate teacher.

Share your knowledge

I will finish on a subject that is close to my heart. You have acquired all of this knowledge, this experience, and made these subjects a part of you. Now, pass it on to someone else. We are, each of us, stronger together than we ever could be separately. This fact is not lost on the many, many user groups which exist throughout the world. Individuals have banded together to share their experience and knowledge, to help each other learn, and everyone benefits from this altruism.

Several years ago, I helped establish a Linux Users Group in my home town. Initially, there was a lot of interest; it waned when the people in charge grew preoccupied with bureaucratic aspects rather than information sharing, and the group suffered and eventually went defunct. I have since stuck with attempting to organize people via electronic means such as my blog (http://standalone-sysadmin.blogspot.com). There are still wonderful opportunities for in-person user groups, so don’t let my experience dissuade you from joining or starting your own.

A great alternative, if you have trouble locating or starting a group in your area, is to join an online community of like-minded people. A good place to start is The Sysadmin Network, a group of systems administrators who all want to improve their skills and increase their knowledge. Join a formal group, such as LOPSA or SAGE, that encourages you to grow professionally as well as intellectually. Only by pushing the boundaries and aligning yourself with others who strive for the same goals can you reach your maximum potential.

Tuesday, May 5, 2009

Wahoo! 500+ Subscribers!

I've known for a while that I was getting close, but this morning, Google Reader confirmed it:



Standalone Sysadmin has over 500 subscribers! Wahoo!

For every one of you who reads this, thank you.

I want to especially thank a few other blogs that have sent a ton of visitors my way:

(in order of visitors sent, according to Google Analytics):

Bob Plankers at The Lone Sysadmin
Michael Janke at Last In First Out
Jeff Hengesbach at his blog
Chris Siebenmann at CSpace
Ian Carder at iDogg
Nick Anderson at cmdln.org
Phillip Sellers at Tech Talk
Ryan Nedeff at his blog

I also want to thank the various blog aggregators who have picked me up. I'm probably missing a couple here that I don't know about:
PlanetSysadmin
Sysadmin Blogs /planet
Technorati
LOPSA-NJ

And anyone else who linked to my blog,

Thank you very much!


As always, please drop me a line at standalone.sysadmin@gmail.com with suggestions or feedback of any kind.

Saturday, May 2, 2009

Resizing storage LUNs in Linux on the fly

I'm in the office working today and I figured out the best way (for me, anyway) to resize SAN LUNs without rebooting the server. I figured that I should document it.

To paraphrase Linus: only wimps use internal wikis; _real_ men just document their important stuff on blogs, and let the rest of the world aggregate it ;-)


Alright, the background first. Suppose I've got a SAN storage array, and I've created a LUN (essentially a disk segment) that I've presented to my server. Because I'm intelligent and plan ahead, I'm using LVM to manage it.

Let's say I make a 50GB LUN for "data". I make my physical volume on the device (in this case, /dev/sdb1), create a volume group for it, vgData, and create a logical volume inside of that, lvData. I can now make an ext3 filesystem on /dev/vgData/lvData and mount it at /mnt/data. I run "df -h" and it shows just under 50GB free. Awesome.

Now, I want to expand my 50GB drive. I need another 10GB, so I log into the console on the SAN storage box, and I resize the "data" virtual disk by 10GB. It's now sitting at 60GB. Step 1 complete.

Now, if the world were sane, you could do "fdisk -l /dev/sdb" and see the free space. But it isn't, and you can't. It will still happily report the old size. There are all sorts of google results for changing that, but I've found that none of them will actually do it until you perform the following steps.

Unmount /mnt/data:

# umount /mnt/data

Make the volume group inaccessible:

# vgchange -a n vgData

Unless you perform that last step, the lvm2-monitor service keeps the /dev/sdb1 device open, which means everything that we're going to perform next won't matter. You *HAVE* to stop all access to that disk (at least with the current CentOS kernel).

Now that the filesystem is unmounted and the device isn't in use, issue this command:

# echo "1" > /sys/class/scsi_device/$host:$channel:$id:$lun/device/rescan

where $host:$channel:$id:$lun is whichever SCSI path leads to your device. Mine was 2:0:0:0, since it was the first (zeroth?) disk on the 2nd controller. If you do an ls in /sys/class/scsi_device, you'll see what is available on your system, and to my knowledge, rescanning the wrong controller won't hurt it, so if you screw up, it's not tragic.
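Since rescanning the wrong controller is supposedly harmless, a lazy alternative is to poke every SCSI device rather than work out the $host:$channel:$id:$lun tuple by hand. My own shortcut, with the same caveats (filesystem unmounted, volume group deactivated first):

```shell
#!/bin/sh
# Rescan every SCSI device sysfs knows about instead of picking
# out one H:C:I:L path manually.
for dev in /sys/class/scsi_device/*/device/rescan; do
    # Skip if the glob didn't match anything, or we can't write
    # (not root, or no SCSI devices present).
    [ -w "$dev" ] || continue
    echo 1 > "$dev" 2>/dev/null || continue
    echo "rescanned $dev"
done
```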

Now, if you have done things right, you should be able to run fdisk -l /dev/sdb and see the new size reflected. Hooray!

I fdisk'd in and added a 2nd partition (/dev/sdb2), ran pvcreate on it, extended the volume group onto it, used lvextend, and then made it available with vgchange. [Edit] As anonymous mentioned in the comments, pvresize (growing the existing partition in place instead of adding a new one) should also work at this point, though I haven't tested it yet. There's no reason it shouldn't.[/Edit] I finally mounted /dev/vgData/lvData and used resize2fs to grow it online. Now a "df -h" returns the "right" number, without ever having to reboot the machine.
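Condensed into one place, the whole dance looks like this. Same example names as above; your SCSI path will differ, and this obviously needs root:

```shell
umount /mnt/data
vgchange -a n vgData                 # release /dev/sdb1 so the rescan sticks
echo "1" > /sys/class/scsi_device/2:0:0:0/device/rescan
fdisk /dev/sdb                       # create /dev/sdb2 in the new space
pvcreate /dev/sdb2
vgextend vgData /dev/sdb2            # grow the VG onto the new PV
lvextend -L +10G /dev/vgData/lvData
vgchange -a y vgData
mount /dev/vgData/lvData /mnt/data
resize2fs /dev/vgData/lvData         # grow ext3 while mounted
```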

Maybe in the future it will be possible to do it while the filesystem is "live", but for now, I'm using this technique since it's better than rebooting.

And if I'm wrong and you CAN do it with a live FS, please let me know. I'm very interested in a better way.

Friday, May 1, 2009

Random thoughts on Slashdot

I wanted to make a quick reply to someone on slashdot who suggested adding a 5th octet to IP addresses rather than migrating to IPv6. I meant to write a really quick reply, but it got drawn out. I got done with it and thought that some of you might have thoughts on it:

Awesome idea. We'll give Google 1/8, the government can have 2/8, IBM will get 3/8, etc etc etc

Same problem. IPv6 is not a "bad" idea; it's just sort of like...imagine in the 1950s if the phone company had decided "we could go with area codes to subdivide numbers to prevent running out, or we could use letters AND numbers".

Can you imagine the upheaval?

In a lot of ways, that would have been even easier to deal with, because everyone's phone was owned by AT&T. New phones could have been issued without too much problem.

No, imagine it instead in the mid 1980s. Ma Bell doesn't own the phones any more, in fact there are tons of cheap phones available, cell phones are starting to come out, and there are still rotary AND push button phones.

That's more like what the IPv6 switch is like. Do you give the new people 2 numbers, so that grandma can still call them? How long is it before you stop accepting legacy phones that only have 10 dialing options? How the hell do you get DTMF to work with 36 numbers? Do we need area codes? It would be weird without them, but we don't really need them.

The equivalents of these questions are still being asked. Just a couple of months ago, there was a huge to-do about NAT and IPv6. "IPv6 is a world without NAT". The hell it is. My internal routers don't get publicly routable IP addresses, even if I have to NAT back to IPv4.

When the wrinkles get ironed out, we're going to wonder how we ever did without it. During the transition, it's going to be hell for everyone (with the possible exception of the clueless end user, who might have to buy a new router at most).