I once conducted a project to analyse the user feedback that my then-employer received from the feedback button on help pages. We had all been using the feedback to determine which topics needed attention and to understand doc usage. But I quickly saw that there were some pretty serious problems with the data.
The first problem I noticed was that much of the feedback for developer docs was coming from end users who landed in the wrong place. For example, someone needed to find out how to change their password (an end user topic), but when they googled "change password" they landed in my docs that told developers how to use the Cryptography APIs.
Our metrics didn't tell me what their Google search was, but I could make a pretty good guess from their comments, and then extrapolate for the feedback that didn't include comments. To make sure this was done fairly, I created a list of comments for each deliverable and categorized each one as "the commenter is clearly in the wrong doc", "the comment appears to be pertinent", or "unclear". Before finalizing my report, I sat down with authors, architects, and team leads to go over the list, presented it to a group, and circulated it for comment.
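If you're doing a similar triage, the tallying step is easy to script. Here's a minimal sketch in Python, assuming a hypothetical CSV export (`feedback_comments.csv`) with a `deliverable` column and a hand-assigned `category` column; none of these names come from our actual tooling:

```python
import csv
from collections import Counter, defaultdict

# Hypothetical export: one row per comment, with "deliverable" and a
# manually assigned "category" ("wrong-doc", "pertinent", or "unclear").
tallies = defaultdict(Counter)

with open("feedback_comments.csv", newline="") as f:
    for row in csv.DictReader(f):
        tallies[row["deliverable"]][row["category"]] += 1

for deliverable, counts in sorted(tallies.items()):
    total = sum(counts.values())
    wrong = counts["wrong-doc"]
    print(f"{deliverable}: {wrong}/{total} ({wrong / total:.0%}) clearly in the wrong doc")
```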
For some deliverables, I estimated that over 75% of the feedback came from the misdirected. Worse, it was obvious from their comments that they didn't realize they were in the wrong place. Many of the comments (in all caps, with profanity and exclamation marks) raged about how incomprehensible the doc set was. And who can blame them!
This was obviously a terrible user experience. In addition, we were both overestimating and mischaracterizing the problems with the developer docs. Meanwhile, the end user writers were not seeing feedback that might have helped them (such as the enormous number of users looking for info on how to change their password). Also, some of the feedback on the end user docs said that the docs were too wordy and difficult to understand, which may have been an impression left over from when those users landed in the wrong doc set.
Poking around some more, I realized that the problem was also skewing our web page metrics. We were using page hits to decide what to translate (and to make other important decisions), but those stats were massively skewed by the misdirected users. Some docs that had looked really, really popular were just docs that contained end user terminology.
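One way to correct for this, a back-of-the-envelope adjustment rather than anything we formally used, is to discount raw page hits by the misdirection rate estimated from the comment triage:

```python
# Hypothetical: discount raw page hits by the estimated share of
# misdirected visitors, assuming the rate seen in comments applies
# to all visitors to that page.
def adjusted_hits(raw_hits, wrong_doc_rate):
    return raw_hits * (1 - wrong_doc_rate)

print(adjusted_hits(10_000, 0.75))  # 2500.0 -- far less "popular" than it looked
```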
A second thing I noticed was that some readers were hitting the feedback button repeatedly: some as many as six times. I could tell this by sorting the feedback by time and comparing the messages they left. I estimated that about 25% of all comments were repeats, and that typically a "repeater" would post three or four comments at once. To do this they had to leave the page and come back (the feedback form disappeared once a comment was submitted), but they were taking the trouble to do that.
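Spotting repeats like this can also be scripted. A minimal sketch, assuming each feedback record is a dict with hypothetical page, timestamp, and text fields (again, not our actual schema):

```python
from datetime import timedelta
from difflib import SequenceMatcher

def find_repeats(entries, window=timedelta(minutes=10), similarity=0.8):
    """Flag entries that look like repeat submissions: same page,
    close together in time, with near-identical comment text."""
    repeats = []
    ordered = sorted(entries, key=lambda e: e["timestamp"])
    for prev, curr in zip(ordered, ordered[1:]):
        if (curr["page"] == prev["page"]
                and curr["timestamp"] - prev["timestamp"] <= window
                and SequenceMatcher(None, prev["text"], curr["text"]).ratio() >= similarity):
            repeats.append(curr)
    return repeats
```

This only compares consecutive entries, which is enough to catch someone posting three or four comments in a row.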
We called our feedback metrics "topic-level feedback", but I also figured out that that was a misnomer. Our publishing process grouped everything under an H2 onto the same HTML page, so most pages contained multiple topics. If the reader clicked the title of a topic before clicking Feedback, the feedback was logged for that topic. Otherwise, the feedback was logged for the top-level topic, or for whatever they had clicked in the left navbar. Frequently, the topics on a page had different authors. We should have been gathering all the data for an HTML page together, but nobody had thought through this topic/page confusion, so our reports were misassigning feedback to specific topics and authors.
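Regrouping the data correctly is straightforward once you have a mapping from topics to the pages they're published on. A sketch, with hypothetical inputs:

```python
from collections import Counter

# Hypothetical inputs: feedback counts keyed by topic ID, plus a mapping
# from each topic to the HTML page it is published on.
def aggregate_by_page(feedback_by_topic, topic_to_page):
    page_totals = Counter()
    for topic, count in feedback_by_topic.items():
        page = topic_to_page.get(topic, topic)  # top-level topics map to themselves
        page_totals[page] += count
    return page_totals
```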
Another problem with our feedback data was that we got very little of it, even counting the Yes/No answers (to the question "Was this useful?") that came without a comment. Writers and editors often used the feedback mechanism (anonymously), and the low volume of data meant that they skewed the results.
Because of all these issues, our reports of feedback metrics were highly misleading. For example, the topics with the most negative feedback were the ones that end users were most likely to be misdirected to by Google.
Once we understood the problem, we took steps to deal with the misdirection and the other issues. But the feedback mechanism had been in place for some time by then, confusing all and sundry.
When dealing with data, you really have to roll up your sleeves, dive in, and get familiar with the details to see what's really going on.