Friday, September 14, 2012

The messy side of metrics

As I have said before, I'm a big fan of diving into the weeds to make sense of data. Sometimes this is a lot of hard work with very little to show for it. Every once in a while it has spectacular results.

I once worked at a company that had an unsearchable knowledge base. I won't go into the technical details, but it was difficult to collate the data in it. Consequently there had been very little reporting of customer problems.

I decided to investigate the knowledge base. I was able to output a massive text file of all support calls logged over the previous 18 months. I went through it, cutting and pasting issues into other documents, which was about the crudest way imaginable to collate electronic data. It seemed to take forever, but in reality it took about 24 hours of heads-down concentration.
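(Had the dump followed any consistent format — which, by my account above, it did not — the tallying could in principle have been scripted. As an illustration only, here is a minimal sketch that assumes a hypothetical convention where each logged issue carries an "ERROR:" tag; the function name, sample calls, and format are all made up for the example.)

```python
from collections import Counter
import re

def top_errors(lines, n=5):
    """Tally the most frequent error messages in a dump of support-call text.

    Assumes the (hypothetical) convention that error messages appear
    after an 'ERROR:' tag somewhere on the line.
    """
    errors = [m.group(1).strip()
              for line in lines
              if (m := re.search(r"ERROR:\s*(.+)", line))]
    return Counter(errors).most_common(n)

# A tiny made-up sample standing in for the 18-month call dump.
sample = [
    "Call 1042: user reports ERROR: Disk quota exceeded",
    "Call 1043: printer jam, resolved on site",
    "Call 1044: second reboot, then ERROR: Disk quota exceeded",
    "Call 1045: ERROR: License key invalid",
]

print(top_errors(sample))
# Surfaces 'Disk quota exceeded' as the most frequent message.
```

Of course, a script like this only works when the data is regular enough to parse, which was exactly the problem with that knowledge base.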

I discovered a number of things, but the most dramatic was that a single error message was responsible for a high percentage of support calls. It was relatively easy to write documentation to help users who got the error so that they didn't need to call customer support. (I had to write a spec for a compiler to attach the doc to the error message, but that is another story.) Afterwards the number of customer support calls fell dramatically.

It's sort of embarrassing to admit to the lack of technical savvy in my approach, but I think that's what makes the story worth telling. There isn't always a nifty technical solution for data analysis. Saying "It can't be done" often just means "There's no easy way to do it." Also, when you handle data manually, you notice things that wouldn't be apparent in an automatically generated report.
