Seems like more and more I’m finding applications that have little or no error handling strategy, which is a real shame. The job the application is performing is important to me: I want to use it to save myself the time and headache of doing something repetitive or mind-numbing. Unfortunately, while the application does its job well, it fails on less-than-perfect input. Now, I’ve been using computers since I could barely say “computer,” so I’m well-versed in telling my computer what it wants to know, in the format that it wants to know it. And I’ve become accustomed to looking at tracebacks and using other tools (strace, ltrace, gdb, etc.) to find what is breaking, and then correcting my input. However, that doesn’t work for your average user, even if your average user is a developer. The end result: the application ends up with a bad rap pretty quickly. This is especially true if you have a command line application and a bunch of users who aren’t command line junkies.
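To make the idea concrete, here’s a minimal sketch (in Python, with a hypothetical `greet.py` script and `parse_count` helper of my own invention) of the difference an error handling strategy makes: instead of letting bad input bubble up as a raw traceback, the program validates it and tells the user, in one plain line, what it expected.

```python
import sys


def parse_count(arg: str) -> int:
    """Parse a command line argument into a positive integer,
    raising ValueError with a user-facing message on bad input."""
    try:
        count = int(arg)
    except ValueError:
        # Re-raise with a message meant for a person, not a developer.
        raise ValueError(f"expected a whole number, got {arg!r}") from None
    if count < 1:
        raise ValueError(f"expected a positive number, got {count}")
    return count


def main(argv: list[str]) -> int:
    if len(argv) != 2:
        print("usage: greet.py TIMES", file=sys.stderr)
        return 2
    try:
        times = parse_count(argv[1])
    except ValueError as exc:
        # The user sees one clear error line, not a traceback.
        print(f"greet.py: error: {exc}", file=sys.stderr)
        return 2
    for _ in range(times):
        print("hello")
    return 0
```

Wired up with `sys.exit(main(sys.argv))`, a user who types `greet.py banana` gets `greet.py: error: expected a whole number, got 'banana'` and a nonzero exit code, rather than a `ValueError` traceback they have no reason to know how to read.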