Monday, October 5, 2009

Info For Those Considering AT&T's Microcell

I wanted to send this out in case others are considering purchasing AT&T's Microcell. Before this becomes an "AT&T stinks" thread, let me preface this by saying that my particular issue is with my apartment: it's a proven dead zone for AT&T, Sprint, and Verizon. Also note that the Microcell only works with AT&T 3G phones.

Yesterday, I purchased the Microcell from the AT&T store in Cary. After taxes, it was about $162. The sales rep also informed me that if I signed up for the $20/month "unlimited minutes" Microcell plan at the time of purchase, there was a $100 mail-in rebate for the Microcell.

After some issues trying to set me up with the rebate deal (a Microcell with 5 months of unlimited minutes?), we found out my account wasn't eligible since it was created in the Washington, DC Metro area. The deal is only good for plans created in the trial markets, and the same goes for the ability to purchase the $20/month unlimited-minutes plan at all. The salesperson did say that they've been told the Microcell was going to be rolled out further in January, and I'd be able to get the $20/month unlimited-minutes plan then (sadly, minus the $100 rebate). However, only the plans have these geographical constraints, not the Microcell itself, so it would still work in my apartment.

Once I got home, my setup was pretty easy. The only caveat is that the Microcell needs to be able to get a GPS signal, which meant I couldn't place it where I really wanted to in my apartment. Once it was powered on and connected, it took about 90 minutes for the Microcell to register with AT&T, get its GPS fix, and be ready to go. When it was online and ready, I received an SMS message saying something to the effect of "Thanks! Your Microcell is ready."

I now have full signal in my apartment (as opposed to none). A few test calls and text messages worked fine as well. I haven't tried web access over the Microcell, since I use wi-fi with my iPhone. Up to 10 phones can connect to your Microcell, but the numbers have to be added on your online AT&T account management page before others can use it. Right now, it's just my wife's phone and mine.

Wednesday, July 8, 2009

For DBAs, a new blog to follow

An old co-worker of mine has started The Bungling DBA blog. Even though we now live on separate coasts, he and I are forever bound together as Lunch Twins. I'm not sure if this blog is also a bit of friendly ribbing at my expense or not, but I applaud him for starting a tech blog and look forward to his updates.

Friday, June 26, 2009

Server "uptime" bragging

I recently saw a blog post where someone showed their server having an uptime of over 400 days and asked readers to reply with some of their larger uptimes. Quite a few people obliged, and the numbers were in the hundreds of days. This made me think, "Is this really a 'good thing' anymore?"

Some questions that come to my mind when I see servers with long uptimes are:

1.) Are patches being applied? A lot of security and performance updates are released within a year. Some may not be critical, but are you being responsible and diligent in keeping your server up to date and secure?

2.) Does the server need to be up for so long because it is a single point of failure for a critical service? Hardware gets cheaper and cheaper, and many services can be load-balanced or clustered, even more so with the popularity of virtual machines. If this service experiences a failure, will your customers or users notice? How long will it take to restore its functionality?

3.) Do you know if the server will restart correctly in the event something causes a reboot? This could be unexpected, like a hardware or power failure, or expected, like applying kernel updates. Over a long period of time, a lot of small changes can accumulate that could cause startup scripts to break, but they would go undetected until you have to restart. Or, your hardware just might not want to come back up for whatever wacky reason.

I guess what I'm saying is, regular maintenance reboots aren't a "bad thing." Yeah, it used to look cool to have a server up for 600 days, but I don't think it's really worth it now.

Monday, June 1, 2009

Free download of SnagIt (through 6/5/2009)

Some of you may like to take screen shots for documentation, blog posts, or other troubleshooting info exchanges. One product I've used that is more useful than Ctrl+PrintScrn or Alt+PrintScrn is SnagIt. Until 6/5/2009 5 PM EST, they are providing a free download and registration key for SnagIt 7.2.5.

http://www.techsmith.com/Covermount/covermount.asp?ID=8

Note this is for PC only, and is not compatible with Windows Vista.

Wednesday, May 20, 2009

Rumor: iPhone and SlingPlayer over 3G without Jailbreak

I've heard a couple of rumors that users have been able to use the SlingPlayer application on their iPhone over 3G (as opposed to Wi-Fi) without having to jailbreak the phone. Apparently, when they connected their iPhone using a Cisco VPN solution, they were able to use the SlingPlayer application. It got me thinking: if this rumor is true, would the trick work with another type of VPN server that is compatible with the iPhone? Perhaps running a PPTP server at home, such as Poptop?

I haven't been able to confirm or deny this, since I'm not willing to pay $30 for the iPhone app, and I'm not sure if it would work with my original SlingBox, but I wanted to share for those who would be interested.

Saturday, March 28, 2009

Monkey - House: A Big F-U to GoDaddy

Heads up to those that, like me, trust their domain registrations with GoDaddy.

Monkey - House: A Big F-U to GoDaddy

It's ridiculous what they can do when your domain lapses. I guess I got distracted by their advertising.

Friday, March 27, 2009

Getting the VMware Boot/POST screen

I was trying to re-kickstart an install of a Linux server in my VMware cluster, but I couldn't get the VMware Boot/POST screen so I could choose the PXE network install option. I was also having a rough time finding the answer with my Google search queries, so I thought I would write it down here; if anything, I'll be able to find it again.

I had to modify my .vmx file for my virtual machine and add the line

bios.bootDelay = "10000"

The numeric value is the number of milliseconds the POST/Boot screen is shown. So in my example, this would be 10 seconds, which deceptively goes by pretty quickly.
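A related .vmx option I've seen mentioned, though I haven't tested it myself, is bios.forceSetupOnce, which should boot the VM straight into the BIOS setup screen instead of making you race the POST countdown:

```
bios.bootDelay = "10000"       # hold the POST/Boot screen for 10000 ms (10 s)
bios.forceSetupOnce = "TRUE"   # assumption: enters BIOS setup on the next
                               # boot only, then reverts to FALSE on its own
```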

Thursday, February 26, 2009

DNS and Asset Information

Saw this post at TaoSecurity today about using DNS as a tool for Asset Management.

http://taosecurity.blogspot.com/2009/02/asset-management-assistance-via-custom.html

It toys with the idea of creating custom DNS records that identify asset owners. It's an interesting thought that was partially used at my last job. Our senior sysadmin had an unwritten policy that any server added to our internal DNS would also need a TXT record containing information such as the hardware serial number. Each string in a TXT record is limited to 255 characters, but that still leaves room for other info as well. If you weren't sure who the contact person for a server was, or where it was located, you could "dig servername txt".

Here's an example of a DNS TXT record entry.

http://www.zytrax.com/books/dns/ch8/txt.html
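For reference, such a record might look like this in a BIND zone file; the hostname and asset values here are made up:

```
; hypothetical zone-file entries pairing a host record with asset info
db01    IN  A    192.168.10.25
db01    IN  TXT  "serial=ABC1234 owner=jsmith location=rack-12"
```

Querying with "dig db01.example.com txt +short" would then return the quoted asset string.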

Tuesday, February 24, 2009

Man page reading tip

I usually keep my ssh windows pretty small, which makes reading man pages in them a pain. The way I read man pages now is by using Google to find copies posted online. That way, it's as easy as reading any other web page.

For Linux, I use the search string (minus the quotes) "man linux command", and it usually pulls up the appropriate man page on http://linux.die.net.

For Solaris, I'll use the search string (minus the quotes) "man sunos command", since the syntax or switches of the Solaris command may be slightly different from the Linux one.

Another bonus is that commands and configuration files in the "SEE ALSO" section are usually hyperlinked to their corresponding web entries. There are probably browser plugins or toolbars that will accomplish the same thing, but this is universal and lightweight.

Friday, February 20, 2009

NFS with VMware

I came across this blog post, and it piqued my interest.

http://blogs.netapp.com/virtualization/2009/02/mythbusters-nfs.html

It briefly suggests that NFS is a viable alternative to SAN storage for VMware. I don't have the resources or clout to try this, but I'm curious how well NFS would work. This information could potentially be useful for people who don't use NetApp, too.

Thursday, January 29, 2009

Modifications to stock CUPS server

I've been tasked with setting up a Unix print server, since the current one runs unmanaged on a PC beneath someone's desk. Since we use RHEL4 for our servers, this is obviously going to use CUPS. Setting up CUPS isn't too painful, and the web interface is pretty easy to use. My concern, though, is that when you click on the Administration link and log in, it uses plain-text HTTP to pass the credentials.

Here are the few things I've changed to make me feel a little bit less uneasy.

First, I created a self-signed SSL certificate and copied the key and crt to /etc/cups/ssl.
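In case it helps anyone, here's roughly how that key and certificate pair can be generated in one shot with openssl (one-year validity, no passphrase; print.example.com is just a placeholder hostname):

```shell
# generate a self-signed certificate and key; -nodes skips the passphrase
# so cupsd can read the key unattended, and the CN is a placeholder
openssl req -new -x509 -nodes -days 365 \
    -subj "/CN=print.example.com" \
    -keyout server.key -out server.crt
```

Both files can then be copied into /etc/cups/ssl/.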

Then, I enabled the following in cupsd.conf:

ServerCertificate /etc/cups/ssl/server.crt
ServerKey /etc/cups/ssl/server.key
SSLPort 443


I still have the stock port 631 listening as well.

Finally, I modified the index HTML page for the CUPS service, located at /usr/share/cups/doc/index.html. I edited the two administrator hyperlinks so that they pointed to "https://print.example.com/admin". I know this won't stop people from using "http://print.example.com:631/admin", but at least if they are just clicky-clicky, it will have them log into the administrative interface over HTTPS. I'm not sure how to force users to use HTTPS when accessing the administrative page in CUPS, but at least this is a start.
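One possible next step, which I haven't tried myself: cupsd.conf supports an Encryption directive inside a Location block, so something like this might require TLS for just the admin pages:

```
# untested sketch: require encrypted connections for /admin
<Location /admin>
  Encryption Required
  AuthType Basic
  AuthClass System
  Order Deny,Allow
  Allow From All
</Location>
```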

Now, I don't feel like my usernames and passwords are floating around in the clear when it comes to CUPS.

Friday, January 16, 2009

Thought on malware spreading through known vulnerabilities

The BBC has an article today about an Internet worm (aka Conficker, Downadup, or Kido) spreading to millions of PCs. Interestingly enough, this vulnerability was addressed by Microsoft in MS08-067 on October 23, 2008. The BBC article then obviously states "users should have up-to-date anti-virus software and install Microsoft's MS08-067 patch." What I found interesting was the estimated top locations of infections.

China 38,277
Brazil 34,814
Russia 24,526
India 16,497
Ukraine 14,767
Italy 13,115
Argentina 11,675
Korea 11,117
Romania 8,861
United States 3,958
United Kingdom 1,789

I wonder how this ranking compares to the total number of pirated/unsupported instances of the operating system running in each country, as in "not recognized as a 'Genuine' license by Microsoft and therefore unable to apply patches from Windows Update." I'm wondering if the spread of malware like this, which targets personal PCs and office workstations, would be significantly reduced if Microsoft either opened up their Windows Update service to non-verified owners or changed their pricing to be more affordable for worldwide users.

Monday, January 12, 2009

Deleting a Solaris RAID created with Volume Manager

It seems most of my "howto" posts recently have been Solaris related. The main reason is that my Solaris admin knowledge is weak, requiring me to look up Solaris-specific tasks. And when I learn how to do something, I like to share the answer with others, just in case they didn't know either.

We have a Solaris 10 host in the lab that we do software tests on, and I have somehow been tapped to admin its OS. The production hosts we create for clients use RAID-1 with two disks, so the original install steps specified using Sun Volume Manager with the different "meta" commands (metadb, metainit, metattach). Without much warning, the install steps were changed to use the simpler "raidctl" command. My problem now is that I have to reinstall the lab test host and set up the RAID with raidctl, but I had no idea how to delete the previous RAID configuration. Thankfully, I came across this. It appears all I had to do was run "metaclear -a".
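For anyone else backing out a Volume Manager mirror, the rough sequence I pieced together looks like this. The metadb slice is a placeholder from my notes, not a universal value, and these commands are destructive, so only run them on a host you're about to reinstall:

```
# inspect the current metadevices and state database replicas first
metastat
metadb -i

# clear all metadevices and hot spare pools (destructive)
metaclear -a

# optionally delete the state database replicas too;
# c0t0d0s7 is a placeholder slice, check the metadb -i output first
metadb -d -f c0t0d0s7
```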

Tuesday, January 6, 2009

My take on "Which Unix to learn"

I came across this post on TaoSecurity today, with Richard Bejtlich's suggestions for an "Introduction to Unix." Like text editors, the discussion about which distribution of Unix to learn can be considered a "religious" argument. If you're a regular reader of his blog, it's no surprise he suggests FreeBSD. He does clarify and say if you're running a server, he suggests FreeBSD; and if you're running a desktop, he now prefers Ubuntu. And if you're still adamant about running Linux as a server, he suggests Debian.

My opinion differs, depending mainly on "why do you want to learn Unix?" If the answer is related to increasing your work/resume skill set, I would have to disagree about using FreeBSD or any of the other BSD derivatives (Net, Open, etc.). In my experience as an admin or in other support roles, I have yet to encounter a *BSD server. I'd encourage someone to use a distribution they would encounter in a corporate environment. The Unix server OSes I've had to support have been Red Hat (now Fedora), Red Hat Enterprise Linux (also known as RHEL), and Sun Solaris. To avoid paying licensing fees, you could substitute CentOS for RHEL.

If I were asked the question, I would suggest a "major" Linux distribution or Solaris/OpenSolaris instead. I would think their device names, software packages, and file system organization would help with familiarity when translating that knowledge to a corporate environment. Although this may be an unfounded opinion, I also think there are more support options and more supported software for them than for *BSD. I do agree with Bejtlich that if you want to run a Unix distribution on the desktop, stick with Ubuntu, since it seems to "just work" when installed and there are fewer configuration headaches. Or you can just use a Mac if you want a Unix desktop (troll... and yes, a Mac could be considered a BSD variant).

Monday, January 5, 2009

Managing "To Do" items for work

I'm trying to decide the best way to manage my tasks, projects, and "mini-projects". I define a mini-project as something larger than a task, but for which I'm my own manager and don't need to submit a project plan. Maybe I should've asked for Tom Limoncelli's "Time Management for System Administrators" as a Christmas gift.

The options I know of are:

"Remember the Milk" - Pros: Seems very extensible, geeky, multiple ways to manage, etc. Cons: I doubt my company would like me storing information with a third party.

Using "Tasks" in Outlook - Pros: This would be stored and backed up at work. Cons: I like to keep my interaction with Outlook to a minimum; I just don't like its interface, anyway.

Creating tickets in a case management system - Pros: I've done this at previous jobs, creating cases assigned to myself for tasks and mini-projects. Cons: My employer takes its case-tracking statistics seriously, so filler tickets probably wouldn't fly.

Personal Wiki - Pros: Uses a web browser to edit. Some wikis have version control and search capabilities. Cons: Some wikis require running on a server and using heavyweight services (is that overkill?). Is it really the best tool for the job?

Right now, I'm using TiddlyWiki. I feel the interface is a little clunky, but it does a lot of what I want it to do. For instance, it's lightweight (no server or database required), portable (it's just files), only requires a web browser and access to its files, and has some searching capability.

I'd be interested to know what other people are using to manage their tasks and other assignments that are not necessarily part of the "everyday routine."