Meme take 1

So as I’m sure you’ll imagine I’m not big on blog memes or chain letters, but I thought of one that might be fun, so I thought I’d give it a shot especially since my blogging has dwindled a bit lately.

So the meme is Today’s Hell, the idea being that you describe a situation that could be considered a repeating Groundhog Day-style hell-dimension. It doesn’t have to be the worst thing ever, just something funny or annoying that you want to vent about. If you’re one of the 4 people who read my blog, make one up yourself and pingback/trackback me!

Today’s Hell: I’m stuck at a busy traffic light behind two morons on their cell phones in their McCain-stickered Escalades who are too scared/stupid/not paying attention enough to turn left at the same time. Every track on my CDs and every station on the radio is playing “What a Fool Believes” by Michael McDonald or the Doobie Brothers or whatever the hell. A strong waft of patchouli suddenly floods the car and suddenly I remember that I have HR-mandated sensitivity training once I get to work.

Your turn.

Dropbox

If you use multiple computers, you have to check out Dropbox. It hooks into your file manager (Linux, Windows, Mac) and gives you a special folder that’s synced to all of your machines as well as being accessible via the web (both an authenticated view and a “Public” folder). If you’re even mildly interested, they have a great demo video on the link above.

If you have your own web hosting, I saw a great tip on the Planet GNOME feed for setting up a URL that’s easier to remember than the one Dropbox gives you.

If you’re running Linux, it’s also possible to use Dropbox with no file manager or GUI of any kind. (Also seen on Planet GNOME.)

Email is a spectacular failure

As the kids say “epic fail.” I’ll start with that so that hopefully my highly radioactive core of curmudgeon doesn’t shine through quite as brightly.

Now I’ll tell you a bunch of stuff you already know. Email works like this:

  1. A user fires up an email client and composes an email and clicks “Send.”
  2. The email client connects to its configured SMTP server and hands off the message.
  3. The email server looks up the MX (that’s Mail Exchanger) DNS record(s) for the recipient(s), connects to the mail server(s) and hands off the message.
  4. The recipient(s) fire up their email clients and connect to their configured POP/IMAP/etc servers and retrieve the message.
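
The hand-off in steps 2 and 3 is just a short plain-text conversation. Here’s a sketch of the commands the sending side issues (the hostnames and addresses are made up, and the server’s replies are omitted):

```shell
# The SMTP commands behind steps 2 and 3, from the sender's side.
# Hostnames and addresses are hypothetical; server replies are omitted.
printf '%s\n' \
  'HELO mail.example.com' \
  'MAIL FROM:<me@example.com>' \
  'RCPT TO:<you@example.org>' \
  'DATA' \
  'Subject: Hello' \
  '' \
  'Message body goes here.' \
  '.' \
  'QUIT'
```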

It makes sense. It’s simple. It scales (just ask spammers!). I can only speculate why people don’t like it. Maybe email clients are terrible. Who am I kidding, email clients are terrible. Even the best of them are awful for any number of reasons. But I’ll wager it’s more than that.

More stuff you already know: People blog, corporations blog, everybody blogs. Even if it’s not a blog, one still needs a way to notify one’s constituency that “I put something new on my website!!” No one wants to go visit a site over and over only to find nothing has changed. So feeds and feed readers were born. And to me, at least, this was a somewhat confusing turn of events.

Email, you see, is capable of carrying HTML and many, dare I say “most,” email clients can render HTML email. If it’s not painfully obvious where I’m going here: it would seem quite logical to send an email to interested readers every time one’s site is updated. Heck, it could even contain the full HTML of the update if you wanted. That way, blogs could work just like mailing lists of yore. Readers could comment on your blog entries via email. Things don’t work that way, though. They work like this:

  1. I write my snarky whiny blog entry and click Save in my blog software.
  2. The blog software converts my blog entry to at least one, but probably two, different formats (RSS/Atom).
  3. Optionally, the blog software might have a second set of feeds for comments, doing this same double-conversion on them as well.
  4. Should someone be interested, they use a feed reader to subscribe to one of these feeds.
  5. Their feed reader makes repeated requests to my blog software asking if anything is new.
  6. “Asking if anything is new” gives this solution too much credit. What it really does is ask about caching information (and that’s only if the blog software has implemented caching), and if the cache time-to-live has expired, it downloads the entire feed from the site, not just what’s new.
  7. Then it goes through the whole feed comparing it to what it’s already seen (a process that’s more difficult than you might think) and if something is, in fact, new, it alerts the user that something is new.

Wow, that’s a lot.
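
Step 6’s “caching information” boils down to a time-to-live check. A minimal sketch of what the reader actually does on every poll (all of the numbers here are made up):

```shell
# Sketch of the cache check from step 6 above; every value is made up.
now=$(date +%s)
last_fetch=$(( now - 3600 ))  # we last polled an hour ago
ttl=1800                      # the feed asked to be cached for 30 minutes
if [ $(( now - last_fetch )) -ge "$ttl" ]; then
    echo "TTL expired: download the whole feed and diff it"
else
    echo "still fresh: do nothing"
fi
```

And that’s the good case; a reader that ignores caching just downloads everything every time.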

The worst of these feed readers are really nothing more than glorified bookmarks. The names of the sites and maybe a favicon are presented in a list. The feeds that have unread entries have a bold title and a number of unread items next to the title. These are the worst because they go through all of the above nonsense for next to no value.

For readers like this, feeds could simply be an integer. If the last number you got from my site was 110 and now it’s 112, you know to show your user articles 111 and 112. In fact, this alone would be a huge improvement on the current system, because every feed has a fixed window. Since my feed doesn’t know when the last time you asked for it was, it can’t possibly know how many things have changed since you last asked. If I’ve updated 10 things, but my feed only contains 5, you’ve missed 5 updates.
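
Here’s a sketch of that counter-only scheme, using the 110-to-112 example (the “fetch” is just an echo, and the counter values are the made-up ones from above):

```shell
# Counter-based "feed": the reader remembers the last article number it
# saw; the site publishes nothing but the current number.
last_seen=110   # what the reader remembered from last time
current=112     # what the site reports now
if [ "$current" -gt "$last_seen" ]; then
    for n in $(seq $(( last_seen + 1 )) "$current"); do
        echo "fetch article $n"
    done
fi
```

No feed parsing, no diffing, no missed updates.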

Only slightly less awful than the bookmark-style feed reader is the one modeled after, you guessed it, an email client. This feed reader combines feeds into a meta-feed that’s presented in some variety of chronological order — a lot like email. Google Reader’s “All Items” is an example of this type of feed reader. All that work just to simulate email.

There’s a third kind that barely qualifies. It doesn’t track read vs unread, instead presenting the latest N headlines from the feed. It’s up to you to remember what you’ve read and haven’t read. Last time I checked (which was a long while ago) netvibes.com’s reader was like this. I’m sure it’s improved since then.

As you may have read, when I was at OSCON 08 a couple of guys were suggesting that large sites abandon this model in favor of one more closely resembling email. I think their numbers were something like: Site X asks flickr 7 million times for information about 54 thousand users when only 6 thousand total updates (not a subset of the 54k) were made. Obviously that many requests for so few updates is dumb.

I’m not trying to say that feeds are bad. I use them every day and love it. They’re especially great for embedding information from one site into another like my delicious bookmarks on my blog. That’s a use case other than the one I’m whining about, however.

The downside of having my blog software email a mailing list with every update is that the reader loses some anonymity. The upside is that an amazing amount of resources are saved. The feed bandwidth problem goes away. And you don’t have the 7 million for 6 thousand problem.

Of course no one will ever go for it. One, we’re way too far down this road. Two, everyone hates email. Web-based bulletin board systems are a hugely popular and widespread testament to the fact that everyone hates email. At least syndication feeds have the whole “embed some part of my site in your site” thing going for them. Web BBSes have nothing at all going for them. A threaded email client (especially one with a kill list) wins over the web BBS 100% of the time, especially around security.

My confusion is renewed in seeing the rise of Twitter, which takes email-like instant messaging and gives it all of the scaling issues of syndication feeds and worse.

The only win I can give Twitter or the web BBS is discoverability, but both have the option of being closed to non-subscribers.

Part of the problem with email is that clients make it too difficult to sort an average user’s mail. I know I’ve talked to web BBS fans who cringe at the thought of a mailing list because they see it as a fire hose in their inbox. Maybe it’s not a difficulty thing. Maybe it’s just a task no one wants to do. If clients were smart about mailing-list headers….
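
Those headers already exist, for what it’s worth; list mail carries a List-Id header. A client could file list traffic automatically with nothing fancier than this (the message and the folder-naming scheme are made up):

```shell
# Sketch: pull the List-Id header out of a message and turn it into a
# folder name. The message and the naming scheme are made up.
msg='From: someone@example.com
List-Id: Lebowski fans <lebowski-fans.example.com>
Subject: hi'
list=$(printf '%s\n' "$msg" | sed -n 's/^List-Id:.*<\(.*\)>.*/\1/p')
echo "file into folder: lists/${list%%.*}"
```

No fire hose in the inbox, no effort from the user.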

Anyway. Blah blah blah. This post has been banging around my head for more than a year which is why it might seem tardy. I’m not saying anything new here, but I wanted to say it too, damn it.

Chrome

So Google’s launching a new browser called Chrome. It’s based on WebKit, which is Apple’s re-do of KHTML, the HTML rendering engine created for Konqueror, the KDE browser.

If the post is to be believed, it promises to be extremely cool, especially for people with multiple CPUs. The sand-boxing and new JavaScript engine also sound like big improvements. Too bad it’s Windows-only for now, and hopefully it won’t be one of the many “pay no attention to the evil behind the curtain” Google offerings. The fact that it’s open source doesn’t mean that they’ll accept anything from the community, either. We’ll see, I guess.

I also wonder how well a new browser will fare. Opera’s had arguably the best browser out there for years, but no one really cares (including me), even now that it’s free. In one way I’m surprised at Safari’s success, but then it’s less surprising considering the “do what you’re told” mindset of the cult of Mac. IIRC, there’s a Windows version of Safari, but I don’t know of anyone who uses it nor how well it’s doing.

The competition will hopefully spur innovation and help the browser that I’ve come to like even if I don’t end up using Chrome.

Be yourself, even when you’re root

Ever run a command only to realize you’re not root but need to be? Of course you have. What if that command was long and painful to create? There’s no reason, Dude, to not have access to your bash history even after becoming root via ‘su’.

function su () {
    local SUUSER=root
    local ORIGU=$USER
    local ORIGG=$(groups | awk '{print $1}')
    if [[ $# -gt 0 ]] ; then
        # Pass 'su -' (or any other dash option) straight through.
        if [[ "${1:0:1}" == '-' ]] ; then
            /bin/su "$@"
            return $?
        else
            SUUSER=$1
        fi
    fi
    # Append recent history to the history file.
    history -a
    /bin/su ${SUUSER} -c "env USER=${SUUSER} HOME=${HOME} ${SHELL}; \
          [ -f ${HOME}/.ICEauthority ] \
          && chown $ORIGU:$ORIGG ${HOME}/.ICEauthority ${HOME}/.viminfo"
    # Clear the history list by deleting all the entries.
    history -c
    # Read the contents of the history file and use them as the current history.
    history -r
}

This function honors the ‘su -‘ syntax in case you need it. If you have special permissions on your ~/.ICEauthority or ~/.viminfo, you’ll need to make adjustments obviously. Remember that the backslashes need to be the last characters on the line. They’re only there to make it more readable, so feel free to ditch them in favor of a longer line.

Now when you ‘su’ or even ‘su someotheruser’ you’ll get to keep your own history.

A couple of things that are sort of about Linux

First and foremost, I have no idea what scrollkeeper is or does, but whatever it’s doing: it’s doing it wrong.

I installed a couple of boxes yesterday that had modest hardware and they each spent several minutes running scrollkeeper-update at the end of the install and then they both got to run it again when I installed updates. And in case you’re unfamiliar, this is a process that consumes all of the CPU it can get. How can anyone find this acceptable?

Actually, I lied. I do know what it’s for and that’s the saddest part. It’s for indexing help documents. I guess since most people don’t like man pages, someone felt the need to write this system-crushing utility to index some help documents I’ll never ever read.

Wait, I’m wrong, that’s not the saddest part. The saddest part is that every distribution’s packaging of GNOME (I guess, based solely on the things that threaten to be removed if I try to uninstall scrollkeeper) makes binary packages depend on this awful piece of software. If you want to use gnome-terminal (or whatever), you’re stuck with scrollkeeper. What’s wrong with having a gnome-doc package or something?

I could guess about why it’s slow… I see those horrible letters “XML” in some of the dependencies, so I could easily point at that, but of course it would only be speculation. While I’m speculating, I’ll go ahead and offer that were I to use these help documents sans the helpful indexing of scrollkeeper, I’d actually spend less time waiting on my computer than I do with the help of scrollkeeper.

To establish some “cred” before my next amazing feat… er, complaint, I’ll inform you that I’ve been using Linux since 1994. I’ve been a professional admin for 8 years or so. To top it off, I think I have a fairly good sense of humor. In fact, on more than one occasion I’ve had friends and even total strangers describe me as “hilarious.”

With that said, I feel fully qualified to say that User Friendly is about as funny as Sinbad which is to say: not. I’ve never laughed. I’ve never smiled. In fact the most positive response I’ve ever had was mild annoyance. I can only imagine that this comic has a following because people feel that they need to laugh to be part of some community that only exists in their own minds.

Linux magazines, hear my prayer. Stop syndicating this crap.

Various civic… uh…

Wouldn’t your life be better if you had audio CDs of the full soundtrack to The Big Lebowski in your car? Of course it would!

These instructions assume Linux, mplayer and k3b and that you have the Universal edition of the disc. I don’t know the track count for the original Paramount disc.

Insert the disc.

for n in $(seq 1 22); do
    mplayer -vc null -ao pcm:fast:file=track${n}.wav -vo null dvd://1 -chapter ${n}-${n}
done

However! For some weird reason, track12.wav will be in French if you do the above!

My solution was:

mplayer -vc null -ao pcm:fast:file=track12.wav -alang en -vo null dvd://1 -chapter 12-12

Since I already had the other ones ripped I just ripped the one track with the language set. You can probably just add -alang en to the first loop and you’ll be fine.

The .wav files will be at 48kHz, which is above the Red Book standard of 44.1kHz, so you’ll have to resample them. There’s probably a way to do that while ripping, but I’m too lazy and let k3b do it for me. I’d also suggest you let k3b (or whatever) normalize the tracks because, as ripped, they’re very quiet. I used “normalize-audio” before I saw that k3b could do it for me.
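
If you’d rather resample from the command line, sox can reportedly do it; I let k3b handle it, so treat this as an untested sketch that just prints the commands it would run (drop the echo to actually run them):

```shell
# Sketch: print one sox command per ripped track to resample it down to
# Red Book's 44.1kHz. Untested by me; remove the echo to really run it.
for n in $(seq 1 22); do
    echo sox "track${n}.wav" -r 44100 "cd-track${n}.wav"
done
```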

You really should do this. You’ll thank me.