For some time I’ve been wanting to upgrade the firmware (built-in software that controls most modern electronic devices) in my wireless router. I use a Linksys WRT54G, which is a typical wireless router that acts as a wireless access point, 4-port hub and internet router, all in one small package.
To some degree I performed this upgrade simply because it's possible, though there's no doubt that the new firmware offers a much cleaner design than the factory default software; almost every interface screen is improved over the original. In addition, there are bug fixes and new, advanced features such as the ability to increase the transmitter's power, more informative status pages, and the ability to manage bandwidth usage by application (so that bit-torrents don't consume the entire network, ahem).
Additionally, while on vacation I haven’t checked my work email (not even once) for the past week (a true rarity for me!) and this mini-project (3-4 hours last evening) provided a bit of a technical outlet for me. Also, I suppose this report is related to the concept of experiential learning we’ve discussed at work; I hope it’s useful for someone contemplating upgrading their router.
I began by refreshing my memory about the topic of wireless router firmware upgrades and found an excellent article that provides a good history and overview of the subject: The Open Source WRT54G Story.
The Linksys Routers Tricks, Tips and Firmware page provides some very useful background information.
A variety of firmware upgrade options are available, and this 3rd Party Firmware Comparison provides details about 4 of the most popular choices.
After a while I began zeroing in on a selection, and this first-hand account of someone’s experiences upgrading their own router helped solidify my choice: DD-WRT.
I quickly found a great (clear and accurate) set of WRT54G upgrade directions and shortly thereafter was back on the internet via a router that had just had the electronic equivalent of a brain transplant. After I finished my upgrade I found another page with upgrade directions that looks pretty informative and includes a photo gallery of screen-shots taken during the upgrade process.
Just in case there were any problems, I had a great router firmware upgrade troubleshooting page open for reference; thankfully it wasn’t needed.
Overall, while this wireless router upgrade isn't necessary, it isn't terribly hard to perform (though carefully following the instructions is important), and it provides both a better interface and some extra features, including the ability to increase the transmit power (useful if you have a computer that is a bit too far from your wireless access point).
One caveat if you’ve read this far: only a few modern routers support this type of upgrade, so carefully read the above pages to ensure you have (or purchase) a model/version that will work with the available firmware upgrades. If you have a newer model, the DD-WRT “micro” distribution is said to work, and if so is still likely better than the default software provided with the unit.
I’ve been experimenting with a variety of virtual machines running under VMware’s Server product, hosted on my home linux computer. Along the way I’ve run into a number of fairly standard issues, which I resolved fairly quickly. Then there were a few issues that were tricky enough to resolve that I decided to note them for future reference.
VMware Console Startup
One of the first problems I encountered was that my VMware installation didn’t start correctly after each reboot: it prompted me to rerun the configuration script, vmware-config.pl, every time I started the server console. Obviously not a good long-term plan, and after a bit of research I came up with the solution, described at Jeremy Coates’ site.
I’ve put a copy of the steps in the extended entry area, just in case the primary site goes offline. In addition to the steps noted above, I found it necessary to add the following line to my rc.local file:
Starting a Stored Image
One of my experiments is to compare the speed of actual machines versus their virtual counterparts. I do that by creating a Norton Save and Restore image on the original computer, and then restoring that image to a virtual machine under VMware Server.
One of the restored images booted part way and then halted with a STOP error, similar to this:
*** STOP: 0x0000007B (0xF9DA0528, 0xC0000034, 0x00000000, 0x00000000)
The fix turned out to be fairly straightforward. One simply needs to set “scsi0.present” to “FALSE” in the appropriate VMware config file:
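A minimal sketch of the change — the file name and path below are hypothetical (each virtual machine keeps its own .vmx file in its own directory):

```
# In the virtual machine's configuration file, e.g. ~/vmware/WinXP/WinXP.vmx:
scsi0.present = "FALSE"
```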
The original tip came from the VMTN site; search for SCSI on that page.
Solaris Networking Config
Over the years I’ve had ample opportunity to hone my HP-UX sys admin skills, but have only ever seen or touched a Sun computer running Solaris maybe once or twice in my whole life. Sun has a very solid reputation, and with the advent of their free x86 port of Solaris, and a virtual machine environment where it’s easy to set up new machines, I couldn’t resist satisfying my curiosity. I downloaded and installed Solaris 10 with no problems, but the auto config of my network didn’t work. I quickly found exactly the right recipe to get on the air at this cuddletech site.
I’ve copied key segments of the aforementioned code/instructions in the extended entry area in case the source sites become unavailable.
VMware Console Startup
Here’s the code needed to fix the VMware console startup problem.
Add the following just under the start) line in /etc/init.d/vmware:

    start)
        # Start insert
        if [ ! -e "/dev/vmmon" ]; then
            mknod /dev/vmmon c 10 165
            chmod 600 /dev/vmmon
        fi
        for a in `seq 0 9`; do
            if [ ! -e "/dev/vmnet$a" ]; then
                mknod /dev/vmnet$a c 119 $a
                chmod 600 /dev/vmnet$a
            fi
        done
        # End insert
        if [ -e "$vmware_etc_dir"/not_configured ]; then
In rc.local, add the following:
Solaris Networking Config
Please refer to the cuddletech site for more details.
1. Add the name of the system to /etc/nodename. It will be the only thing in this file. This is the common name of the system regardless of how many hostnames the system really has.
root@anysystem devices$ cat /etc/nodename
2. Add the hostname and IP address to /etc/hosts.
3. Add any appropriate subnet masks to /etc/netmasks. These netmasks are based on the various networks your system might be part of. For 192.168.100.0/24 you’d add the line: “192.168.100.0 255.255.255.0”
4. Add the hostname that you added to /etc/hosts to /etc/hostname.(interface). That is, if you wanted to add a new IP to the “nge0” (nVidia Gigabit Ethernet) interface, you’d put “192.168.100.42 cuddlistic1” in /etc/hosts and “cuddlistic1” in /etc/hostname.nge0. The hostname is the only thing that goes in that file.
5. Add DNS information to /etc/resolv.conf
6. Add the default router address to /etc/defaultrouter. If your gateway was 192.168.1.254 you’d run “echo 192.168.1.254 > /etc/defaultrouter”. Again, the IP of the router is the only thing that goes in that file.
7. Now either reboot or, preferably, svcadm restart svc:/network/physical.
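For reference, the steps above can be sketched as one short shell script. It uses the example values from the walkthrough (hostname cuddlistic1, IP 192.168.100.42, interface nge0, gateway 192.168.1.254 — adjust all of these for your own network), and by default it stages the files in a scratch directory rather than touching the real /etc:

```shell
#!/bin/sh
# Sketch of the Solaris network config steps, staged under $ROOT/etc.
# Set ROOT=/ (and run as root on an actual Solaris box) to apply for real.
ROOT=${ROOT:-$(mktemp -d)}
mkdir -p "$ROOT/etc"

HOST=cuddlistic1
IP=192.168.100.42
IFACE=nge0
GATEWAY=192.168.1.254

echo "$HOST"                       >  "$ROOT/etc/nodename"        # step 1
echo "$IP $HOST"                   >> "$ROOT/etc/hosts"           # step 2
echo "192.168.100.0 255.255.255.0" >> "$ROOT/etc/netmasks"        # step 3
echo "$HOST"                       >  "$ROOT/etc/hostname.$IFACE" # step 4
echo "nameserver $GATEWAY"         >  "$ROOT/etc/resolv.conf"     # step 5
echo "$GATEWAY"                    >  "$ROOT/etc/defaultrouter"   # step 6
# step 7 (on the real system): svcadm restart svc:/network/physical, or reboot
```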
I spend a fair amount of time and money working to do all I can to preserve my large and rapidly growing image collection. How large and how fast? 4 years ago, when I first established an online image archive, I had 20 gigabytes worth of images. I currently have almost 140GB. Over the weekend I burned 6 DVDs, backing up the new images from the past 3 months. That was about 20GB worth, all told.
Starting back in 2002 I’ve maintained at least 2 distinct backups of my images. Back then the masters were stored on my linux box, and another set on my windows machine. Currently, the Mac hosts the master copy, and the windows and linux boxes are updated regularly using rsync. Having backups on different platforms and different h/w boxes protects against both platform specific problems, such as an evil virus destroying all of my pictures as well as general h/w issues that could corrupt the images.
In addition, in 2002 I began keeping one copy of all my images on an external hard drive, and continue that practice today. I also have a couple of hard drives with slightly older copies of all my images; that’s in case of silent corruption. (Speaking of, one of my next projects is to devise a scheme to validate, probably via the use of checksums, that my images are unchanged over the years.)
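The checksum-validation scheme mentioned above could be sketched in a few lines of shell, assuming GNU coreutils’ sha256sum is available (the archive below is a toy stand-in for the real image collection):

```shell
# Build a demo archive, write a checksum manifest for it, then verify it.
ARCHIVE=$(mktemp -d)
echo "pretend this is a photo"  > "$ARCHIVE/img001.jpg"
echo "pretend this is another"  > "$ARCHIVE/img002.jpg"

# Record a checksum for every image file in the archive.
( cd "$ARCHIVE" && find . -type f -name '*.jpg' -print0 \
    | xargs -0 sha256sum > manifest.sha256 )

# Verify later: any silently corrupted or missing file is reported,
# and the check exits nonzero.
( cd "$ARCHIVE" && sha256sum --check --quiet manifest.sha256 )
```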
So with 4 or 5 copies I don’t have anything to worry about, right? Well, not from a hard drive crash, and not from a virus, or a weird h/w failure. But, that leaves fire, theft or natural disaster. Any of those could make all of my efforts moot. And so it was with great interest that I read Jeremy Zawodny’s recent writeup about using Amazon’s S3 storage service to backup his systems.
I need to backup 200GB now, and of course that number will grow. My current ISP, pair.com, has excellent service and a very high reliability rating. They’re not the cheapest, but they give good value for the money. On the other hand, the disk limit for my plan is 1.5GB. That’s more than enough to serve my web sites, but nowhere near enough to backup my images.
The Amazon S3 price for 200GB is about $30 a month, and that doesn’t cover transfer charges; a bit more than I’m prepared to spend at this point in time. And then there’s DreamHost. Like a dream, I re-discovered Shelley Powers* and she was writing about web hosting, among other things. (* She was previously blogging at Burningbird, and I missed it when she moved.)
Anyway, she mentioned DreamHost, so I checked them out. They are cheap!! They are not terribly reliable (though far, far from the worst). I can store 200GB for 10 bucks a month!! I won’t serve any critical websites from DreamHost, but I’m happy to have an option for storing all of my images offsite.
Finally! Over the years I’ve tried 3 different KVM (Keyboard/Video/Mouse) switches and have finally found one that works! And by works, I mean it simply works. Not some of the time, not sort of, it just works. I need to share the keyboard, display and mouse between at least 2, and ideally up to 4 computers and have long been searching for a switch that simply works.
Now I have a solution that works. No more using a switch with one keyboard and 3 mice, or using a keyboard but not being able to use the extended keys (like mute!). Or having it work sometimes, but not others (at which point I would unplug from the KVM and plug directly into the computer I was using at that moment.) Towards the end I was using a manual, mechanical switch; I would literally unplug the USB keyboard/mouse from one computer and plug into the other, not using my KVM at all. (The video was handled by virtue of the fact I have 2 video inputs, and am using the higher quality DVI input with the Mac.) Obviously, not an ideal way to share the keyboard and mouse with multiple computers.
I thought, well, if literally switching between systems worked, there had to be a KVM that would work the same way. After some searching I came across a brand I’d never heard of before, but one that got high marks on Amazon. I’ve been using my 4-port ConnectPro KVM for the past 10 days or so, and it has worked flawlessly with my Mac, linux computer and laptop running XP.
I’m very, very happy with this unit and highly recommend it for those who are tired of KVM switches that only work part of the time or with diminished functionality. It should be noted that switching between computers requires pressing a button on the unit as opposed to using a keyboard shortcut. Slightly inconvenient, but given its otherwise flawless performance I feel it’s a small price to pay.
There’s a saying among computer folks that rang true again last week: there are two kinds of people, those who do backups, and those who’ve never lost data (usually due to a hard disk crash).
I’d add a third type, those who’ve lost data and perform occasional backups. Other than my photos, which are backed up 3 or 4 different ways, I don’t always perform regular backups on all my systems. (I’m sure everyone is just shocked by that revelation!) On the other hand, I have developed the habit lately, more often than not, of doing automatic daily backups of my primary system.
And thankfully that was the case for my current primary system, a Power Mac G5. I run Prosoft Engineering’s Data Backup program automatically each night, with the results being written to an external firewire drive. When the boot drive started failing it was a simple matter (well, it took a few hours as the system was very unresponsive; the drive didn’t die, but it is terminal) to boot from the firewire drive and then carry on. Data Backup isn’t the fanciest program out there, and it doesn’t have a lot of options, but it did do the job for me!
Unsolicited, unpaid testimonial, in case anyone’s wondering.