Friday, September 28, 2012

The True Costs of DITA Adoption

For anyone who disagrees with this post: please leave a comment or send me an email. If I am incorrect about something, I will modify the post so that I am not spreading misinformation. And I would love to have a dialog on the topic.

There are many articles that advise doc managers about how to calculate return on investment for potential DITA adoption. Most of these articles seem to be written by consultants who make money by helping companies set up DITA systems: they have a vested interest in making DITA look beneficial. Also, they tend to help out during the initial migration and might not be around when some of the costs kick in: they simply might not be aware. Finally, they might deal mostly with large companies where large expenditures do not seem excessive. For whatever reasons, the literature seems to be underestimating the true cost of moving to DITA.

There can be a lot of costs related to DITA adoption. Here are some that might affect you. (Different implementations will vary somewhat.)

Most of us know about the cost of a CMS, which can set you back over $250,000 (or might be a lot less). You can do without the CMS, but DITA is designed for use with a CMS and you need one to get the full benefits. But the CMS is just the start.

In the early stages of your migration to DITA, you will likely need to hire DITA specialists (the consultants I mention above) to help you plan and set up your system.

You'll likely need more in-house tools developers, and you may need developers with different skills than you currently have. This is not just to set up the new publishing system and so on, but also to troubleshoot publishing problems, adapt to new browser versions, and address all the bugs and glitches. In my experience there are all sorts of problems that crop up with the relational database (CMS), and also with the ditamaps, formatting of outputs, and many other things. Part of the problem is that the DITA Open Toolkit is notoriously difficult (and could require extra expense for things like a rendering engine). Some of the tools designed to work with DITA are arguably not quite up to speed yet. Your tools developers will also spend a lot more time helping writers.

(If you don't move to a CMS, but use something like FrameMaker and WebWorks ePublisher, you may find that you have a lot more headaches in producing docs without much in the way of DITA benefits.)

You need extremely skilled information architects to create a reuse strategy, carry out information-typing exercises, and design and edit your ditamaps. This isn't a skill set that people currently in your organization can easily acquire. Even most information architects have trouble adequately mapping topics. For a discussion of the sorts of challenges they'll face, see this series of posts; the moral is: if you don't have skilled architects working on your system, you may end up with Frankenbooks that are not particularly useful for your readers. I raise some additional topic reuse problems in an earlier post.

You'll need to spend significant time developing new processes, policies, and internal instruction manuals.

Your team will have to undergo intensive training. In coming years, new writers you hire will also need training. I have found that writers can move to Docbook XML with very little training, but DITA requires a great deal of training, not just for the CMS, but also to learn how to use ditamaps, reltables, and so on.
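To give a sense of what that training has to cover, here is a minimal sketch of a ditamap with a reltable (the filenames and titles are hypothetical, and a real map would be larger):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map>
  <title>Widget User Guide</title>
  <!-- Hierarchy: nested topicrefs define the table of contents -->
  <topicref href="installing.dita">
    <topicref href="install_windows.dita"/>
    <topicref href="install_linux.dita"/>
  </topicref>
  <topicref href="troubleshooting.dita"/>
  <!-- Linking: a reltable generates related links between topics at build time -->
  <reltable>
    <relrow>
      <relcell><topicref href="installing.dita"/></relcell>
      <relcell><topicref href="troubleshooting.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```

Cross-references a writer once typed inline now live in structures like these, maintained separately from the topics themselves; that shift is a large part of what the training has to instill.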

The migration of content will likely be quite time-consuming, with manual work required to correct tagging that doesn't convert automatically, to build the maps, and to perform a complete in-depth edit.

Your writers will need to spend more time on non-writing activities. This can greatly reduce their productivity. Working with a relational database, especially an opaque one like a CMS, is much more time-consuming than checking files out of a version control system. Creating reltables is a lot more work than adding links. Coordinating topics is a lot more work than designing and writing a standalone deliverable. Plus, there is a lot more bureaucracy associated with DITA workflows.

With most DITA implementations, topics exist in a workflow that only starts with the writer. You'll probably need more editors and more software.

You'll also probably need more supervisors. The DITA literature emphasizes the importance of assimilating writers to the new regime and then monitoring their attitudes. Pre-DITA, writers were project managers for their own content; with DITA they have to learn to hand that responsibility off to others.

There are some organizations, such as ones that have to cope with hundreds of versions of a hardware product, that have a clear ROI for DITA. But many (most?) organizations could find that DITA doesn't so much save money as redistribute money. Where before you spent the lion's share of your doc budget on salaries for writers, now writer salaries will be a much smaller proportion of your budget. In many cases, companies could find themselves facing higher costs than pre-adoption: they will never see a return on their investment. And given the complexities of using DITA, ongoing hassles, and escalating costs, some companies are going to find themselves having to ditch DITA and go through an expensive migration to another system.

Wednesday, September 26, 2012

Introducing "Information Designers Waterloo"

Mark Pepper and I created a LinkedIn group today: Information Designers Waterloo.

It's a local group for technical writers, information architects, editors, instructional designers, doc managers, doc tools developers, and all else who are interested in discussing tech writing issues, networking, job info, and... whatever this turns into.

I'm hoping we can have some lively discussions and some local meet-ups.

So if you're in the Waterloo area and if you're in the biz, please join up!

Monday, September 24, 2012

Think ecosystem

(This is part of my series on what I learned at the Fluxible conference over the weekend.)

Michal Levin of Google talked about the growing interactivity of devices, and what that means for app development. She said...

In our new era of connectivity, we'll increasingly be designing apps not for a phone or a PC, but for an ecosystem of device types (phone, tablet, PC, TV and... who knows). Here are three new design models:
  • Consistent design - For apps like Trulia, which have versions for the PC, tablet and phone. All should have a similar IA, navigation model, and look-and-feel.
  • Complementary design - For apps where devices control or influence each other. For example, with KC Dashboard you use a phone to throw darts at a dart board on a tablet. Some other examples interact with a TV: Real Racing 2, Slingbox, Heineken Star Player.
  • Continuous design - When you start a process on one device and then move to others. For example, All Recipes starts on a web site (probably viewed on a PC) where users choose recipes for a meal. The web site downloads a shopping list to a phone, and downloads recipe instructions to a tablet.
The ecosystem approach is just getting started, but we should all be aware of it and plan for it. For example:
  • If you are creating an app for a single device type, you should consider ways to make it scalable and flexible for the future.
  • You may need to use cloud computing for the data sync required by multi-device coordination.
  • Some companies should consider changing their corporate structure so that development for phones, tablets and PCs is not done in separate divisions.
  • The ecosystem concept will expand as computer chips become ubiquitous in home appliances; as their use in cars improves; etc.
In his closing keynote, Dan Gardenfors of TAT picked up on Michal's idea of future extensions to our online world. He envisioned a world of public computing where screens are everywhere (on buildings, bus shelters, bus windows, etc). One use of these public screens could be to display whatever we are watching on our phones. Other uses will be advertising and public information. He suggested we start thinking in terms of a mobile computing platform rather than a platform for mobile devices.

The future of IA... or what to do with all that content

(This is part of my series on what I learned at the Fluxible conference over the weekend.)

Karl Fast, a prof at Kent State, talked about the huge load of information that is available online and how we can cope with it. He doesn't have a solution, but suggested themes that information architects might pursue towards finding a solution. He said...

Our current ways of processing all this information are:
  • Intentional structure, such as a library or folders
  • Algorithms plus computation (the Google approach)
  • Loosely coordinated group actions (the Wikipedia approach)

Information is cheap but understanding is expensive - and we have not figured it out. IA is about understanding the world as it may be, and a fundamentally new way of processing information is needed but might not happen for a long time. In working towards that new understanding, he has come up with some themes that might prove fruitful. They seek to understand how people work with content. They all fit into the idea of cognition as extension. For example, when we do a jigsaw puzzle we look at the puzzle, using just our brain, but we also move the pieces with our hands. We combine pragmatic action and epistemic action.

These are the themes he thinks might be fruitful in moving to a better way to process content:

Deep interaction - He argues that our mental model is wrong: we think cognition is all in the brain, but it's really part brain, part external: gestures, inflection, content artefacts, people, devices, etc. (When we do usability testing we usually ignore a lot of really important stuff like gestures.) Scientists have discovered that we don't just use our brains to think: thinking is a mind-body integration. For example, gestures are so important that we don't think as clearly if we don't gesture. Another example: this study of Tetris users concluded that thinking is not all in the brain.

Coordination/orchestration - We coordinate a variety of online devices, books, a whole bunch of things. It's not as simple as processing one thing. We have multiple PCs, tablets, smartphones, etc., and also have non-electronic content to coordinate. To figure out how to process the vast new amount of info we face, we should think of PHINGS: Physical, haptic, interactive and information-rich, networked and next-generation, stuff on surfaces. These refer to:

  • Physical: not just the brain, but also the body and physical things
  • Haptic: non-verbal communication
  • Interactive and information-rich
  • Networked and next-generation
  • Stuff on surfaces: on screens or paper

Mess - We tend to treat mess as a bad thing to be avoided, but mess is necessary and should be worked into our theory. "Mess is the soul of creativity." Messy desks, messy desktops, messy bookshelves. Mess is a fundamental part of reality and we need a way of describing it. The idea of mess is completely tied up in the idea of deep interaction and PHINGS. Mess reflects our reality.

While talking about mess, Fast told an anecdote about Steve Jobs. Jobs liked to popularize a myth that he lived a stripped-down, simple life, and played up a time when he had no living room furniture... but the photo Fast showed of Jobs's home office told a different story: it was a mess.

All this is a bit difficult to comprehend, especially as it applies to a way to process online content. I was fascinated but not exactly led to enlightenment. I am not sure I have captured his meaning perfectly, but I'm going to keep Karl Fast on my watch list. You can find his published papers here: Mendeley Profiles.

Update: Karl Fast recently left Kent State to become an information architect at Normative Design.

Apps suck

(I was at a user experience conference called Fluxible over the weekend. This post describes one of the talks I attended, and is part of a series.)

James Wu, lead designer for tablets at Kobo, gave a talk called "Rethinking tablet UX". His central insight is that users are interested in content, not apps. He said...

Technology sucks for most people. Unlike most of us at Fluxible, most people don't want to know, understand or learn the techie features of their devices. They don't like having their main tablet navigation be small impersonal icons that represent all their apps. They hate it that their content is stored within apps. They're interested in content: in their movies, music, pictures, books.

This is what people want to do with their tablet:
  • Find content
  • Organize content
  • Consume content
At Kobo, James has been involved in the development of a new tablet look called Tapestries, which is a way to let users focus on content. He calls this "organic curation". Users can create a tapestry, multiple tapestries, and sub-tapestries. The full sample he showed us was a page a woman might create to plan her wedding. It had lots of big pictures.
- - -
I wish he had also shown us something more meaty, like a tapestry I might use to store research on something.

In fact, for my personal use, I'm not convinced that Tapestries is the best application of this idea. But the idea is a winner. I hate apps. I hate having four screens of the stupid things on my phone. I hate having to remember which eReader has which books. I think content-centric UI design is/should be the next wave.

Sunday, September 23, 2012

Case study: LinkedIn web site fails?

It's astounding how an important, established web site like LinkedIn could have such utterly crap design on its Groups page. Using "More" as a category is always an unfortunate design decision (although sometimes necessary), but on this page, LinkedIn uses the "More" category three times (as marked in red).

I belong to a lot of LinkedIn groups, and I only ever click one setting on the whole page (other than group links). That option is in the second "More" listing and it's "Settings"...
... and the only thing I ever do there is stop LinkedIn from its default behavior of sending me endless emails about discussions in the group.

The kicker: the third "More" (in this view) doesn't even do anything.

Sunday, September 16, 2012

Musing about topic reuse

I first started writing in a dot command language called Script. Later I used TROFF and then LaTeX. But eventually the WYSIWYG editor was born (I had mixed feelings about it at first), desktop publishing applications and laser printers appeared, and Macs hit the market - and typefaces hit the world with a bang.

Tech writers, wanting to show off what they could do, started using typefaces like there was no tomorrow. Some manuals were so busy that it seemed like every word was italic, bold, colored, or in a different font altogether. They were hard to read.

(We still overdo typefaces to a certain extent. I would like to use bold only to designate words that I want to "pop" off the page, and not for UI controls and so on... but that appears to be a battle I have lost.)

Nowadays I wonder if topic reuse is a bit like those heady days of typefaces. When we say, "my doc set has topic reuse of 30%," that doesn't mean "my doc set needs to have topic reuse of 30%." There is a sense of "Topic Reuse Good, No Topic Reuse Bad." There is a need to justify spending a lot of money for tools like CMSs that facilitate topic reuse.

I have also noticed a tendency among some writers to pad out their deliverables with other people's topics when it isn't really helping the reader. When working in large writing departments, I have found my topics in odd places where a link would be more useful. In one instance I took a deliverable that was 50 pages in PDF form and deleted the reused topics, creating a much more focused, useful doc that fit on one HTML page. The reused topics in that example were actually harmful because they were pulled out of context. I know, topic-based writing isn't supposed to have context, but in many cases it does, especially with complex subjects.

The problem is that writers are given latitude - and are even pressured - to reuse topics when there is no clearly defined reuse strategy. In fact, I have never seen a well-articulated content reuse strategy. You see descriptions of the mechanics of reuse, like this one or this one, but they don't provide guidance on why to reuse topics and which topics to reuse.
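To be clear about terms: the mechanics those articles describe are features like conref, where one topic pulls in an element from another by ID. A minimal hypothetical example (filenames and IDs are invented):

```xml
<!-- shared.dita: a "warehouse" topic that holds reusable elements -->
<topic id="shared">
  <title>Shared content</title>
  <body>
    <note id="power_warning" type="caution">
      Disconnect power before servicing the unit.
    </note>
  </body>
</topic>

<!-- Elsewhere, any topic can pull that note in by reference: -->
<note conref="shared.dita#shared/power_warning"/>
```

The mechanics are simple enough; the hard part is deciding which content deserves this treatment, and that is precisely what the strategy documents never tell you.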

Sometimes topic reuse makes no sense, like a doc set I once saw that repeated a large set of introductory topics at the beginning of every deliverable, which to my mind just clogged up the user experience. Even worse, a decision was made to include the topics in the HTML and not the PDF - the reasoning being that the topics weren't really needed and would add to printing costs - which confused readers about whether the two outputs were two different sets of content.

Sometimes topic reuse strategies are ill-considered, such as trying to use doc topics in training material. Docs and training require such different styles of writing that reusing one for the other can result in really bad output; it only seems to make sense when cost-saving requires highly compromised training materials. (Which, in that case, is fine, as tech writing is necessarily all about doing the best we can with available resources.)

Sometimes topic reuse becomes a sort of mania, an end in itself. I once saw a DITA topic that single-sourced a table that described fields in a screenshot. There were three versions of the screenshot, each a completely different UI screen, each marked with attributes for different products. In the table, there were two or three rows that were not conditionalized. There were about a dozen other rows that each appeared three times, marked with attributes for the three different products. Updating the table was a nightmare, as you might imagine.
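For readers who haven't worked with conditional text: DITA marks such rows with filtering attributes, and a ditaval file decides what each build keeps. A hypothetical sketch of the pattern described above (product names and values are invented):

```xml
<!-- In the table: near-identical rows repeated once per product -->
<row product="alpha"><entry>Default port</entry><entry>8080</entry></row>
<row product="beta"><entry>Default port</entry><entry>9090</entry></row>
<row product="gamma"><entry>Default port</entry><entry>7070</entry></row>

<!-- In the .ditaval file for the "alpha" build: -->
<val>
  <prop att="product" val="beta" action="exclude"/>
  <prop att="product" val="gamma" action="exclude"/>
</val>
```

Multiply that by a dozen rows and three products, and the maintenance nightmare is easy to picture.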

This is not to say that topic reuse is not useful. Anyone who has had to modify the same text in two places knows how important topic reuse is. But I have never documented hardware, so I have never worked on a doc set that required a lot of topic reuse. Consequently, approaches like this one, in a department where 48 writers produce 14,000 publications, were not even remotely applicable. (I have worked in departments with that many writers, but never with even one percent of that many publications. It is my contention that that is not the norm.)

My preferred approach would be to reuse topics when previously you would have cut and pasted the content, and otherwise a writer would be required to make a case for reusing a topic. My reasoning is that we must never force the reader to read anything more than they need to read. (Unfortunately, minimalism is often thought of in terms of our convenience and costs rather than in terms of reader usability, as it should be.)

We are in danger of reusing too much because reuse is easy and because there are non-reader incentives to reuse, as described above. The problem with my approach is that it could keep the stats on reuse low, which wouldn't help with proving ROI for the CMS or other tools that were bought with reuse as a justification. But it would help avoid a tendency to go hog-wild and reuse when it's detrimental to readers.

Saturday, September 15, 2012

We need a frank, open discussion about the problems with DITA adoption

My second post on this blog was Case study: DITA topic architecture, in which I described some problems I inherited (twice) with DITA topic architecture.

Thanks to Mark Baker, author of Every Page is Page One, the post was widely read. (He referenced it on his blog and also tweeted it.) The post got hundreds of page hits and generated several comments and a few emails. It also spawned a somewhat defensive thread on an OASIS forum.

I have a lot to say about DITA. I have been holding back because I was concerned that my new blog would be written off as a DITA-bashing forum. I have a lot of other, less controversial (or differently controversial) things to say, and I didn't want to turn off a whole section of the tech writing community even before anyone knew who I was. But it seems that despite my best intentions I have been branded an antiditastablishmentarian. :-) So here I go...

I think it's time that we have a frank and open discussion of the pros and cons of DITA. For years now all discussion of DITA has been dominated by its proponents; we have heard plenty of arguments for why to adopt it. We need an open discussion not to bash DITA, but to uncover issues so that we can address them.

Here are just a few of the issues I want to address:
  • Has DITA changed tech writing output? Is there a discernible style to docs created in DITA? If so, is this what we want - and how can we change it?
  • How has DITA changed the work environment for writers? Do writers have less control over their content in DITA shops? What is the effect of that on quality?
  • How has XML/CMS adoption affected the creative process for writers?
  • What is the culture of DITA, and how widespread is it? Has the emphasis on monitoring writers' attitudes towards DITA changed the culture of tech writing?
  • How much is DITA really costing companies, when you include the need for enhanced tools teams and information architects, CMSs, and more time spent by writers on non-writing activities?
  • DITA proponents make claims about the cost of non-DITA solutions, such as that writers spend 30-50% of their time formatting. How true are these claims?
  • Has the rise of DITA increased the influence of consultants on tech writing? How has the agenda of consultants (to attract business) changed our profession?

These issues are of immediate, practical interest to me. I lead a doc team that uses DITA. I have authored in XML for 12 years. I have been a judge in the international STC competition (which judges the highest-scoring winners of the local competitions) for over a decade, giving me a chance to see the trends in our profession.

To the DITA proponents, I want to say that there is more that unites us than divides us, and to let you know that my goal is always to eventually reach common ground. My other, much longer-running blog is largely about politics so I have experience with this approach. I hope some of you will stick around to duke it out so that we can reach some consensus.

My bottom line is: I think there are some things to be concerned about with the widespread adoption of DITA, and we can't fix them if we don't acknowledge them. Let's dive in and see where the discussion takes us.

Friday, September 14, 2012

The messy side of metrics

As I have said before, I'm a big fan of diving into the weeds and figuring out data. Sometimes this is a lot of hard work with very little to show for it. Every once in a while it has spectacular results.

I once worked at a company that had an unsearchable knowledge base. I won't go into the technical details, but it was difficult to collate the data in it. Consequently there had been very little reporting of customer problems.

I decided to investigate the knowledge base. I was able to output a massive text file of all support calls logged over the previous 18 months. I went through it, cutting and pasting issues into other documents, which was about the crudest way imaginable to collate electronic data. It seemed to take forever, but in reality it took about 24 hours of heads-down concentration.

I discovered a number of things, but the most dramatic was that a single error message was responsible for a high percentage of support calls. It was relatively easy to write documentation to help users who got the error so that they didn't need to call customer support. (I had to write a spec for a compiler to attach the doc to the error message, but that is another story.) Afterwards the number of customer support calls fell dramatically.

It's sort of embarrassing to admit to the lack of technical savvy in my approach, but I think that's what makes the story worth telling. There isn't always a nifty technical solution for data analysis. Saying "It can't be done" often just means, "There's no easy way to do it." Also, when you handle data manually you notice things that wouldn't be apparent in an automatically generated report.

Thursday, September 13, 2012

Case Study: Metrics (feedback button data)

I once conducted a project to analyse the user feedback that my then-employer received from the feedback button on help pages. We had all been using the feedback to determine which topics needed attention and to understand doc usage. But I quickly saw that there were some pretty serious problems with the data.

If there was a bus to the future, would you get on it?

I would.

(This question was posed to me by my brother the other day... the sort of stuff siblings talk about over email while at work.)

Cautionary Tales: Metrics (performance measurement)

(I'm a big fan of IT people. Also, nobody likes measuring things more than me. But interpretation is everything. Or rather, a little knowledge is a dangerous thing.)

I once worked in a company where an IT team of three people provided technical support to about 1,000 employees. One of the IT guys was great and the other two were really bad. By bad, I mean that when an employee asked them to fix a problem on their PC, these two guys typically couldn't fix the problem and often also created new problems.

Then one day the good IT guy got fired. We learned later that the company had instituted metrics to measure performance of the IT department, and had determined that this fellow was too slow: metrics showed that it took him two or three times longer, on average, to close a case.

What everyone in the company knew (except, apparently, IT management), was that the good IT guy took all the complex and difficult problems, while the other two guys dealt with the sort of mundane issues they could handle. Difficult problems take longer.

I might have just passed this off as incredible stupidity, but the next company I went to also had three people in the IT department. They were extremely overworked (causing delays that affected productivity all over the company), but one day one of them got laid off. We learned later that the company had instituted metrics to measure performance of the IT department, and the metrics showed that there wasn't enough work for three. The metrics system only covered issues that were logged through a web site, but employees mostly just called IT without logging an issue.

Friday, September 7, 2012

Not all docs are public

Here's a quote from an interview in this week's CIDM Information Management. It's an intelligent and literate quote but I'm going to take exception to it anyway: "I think that Google changed our world as technical communicators. All of a sudden you could type in a series of random words and get a very close match to what you were looking for. That makes traditional informational organization obsolete: the back-of-the-book index and the table of contents go out the window. People want their little sound bite that corresponds with their one question that they have right then. We can no longer provide big fat manuals." (link: Tracy Baker Shares 20 years of Tech Comm Experience)

I have heard the sentiment several times lately that Google has made traditional information organization obsolete. The problem is: of the ten companies that have employed me as a technical writer, only two produced documentation that was publicly available on the internet. At my current job, my docs are not public.

When my docs have been available through Google, readers tended to use Google at the start of their doc search. Google is undeniably the best route, and can lead to a slew of information sources.

Monday, September 3, 2012

Skipping the steps

I was updating some end user docs today that were task-heavy, and after a while could barely bring myself to read through the highly repetitive, obvious steps.

It dawned on me that none of the procedures were doing much to help the user. The user needs to know:

  • That they can do things.
  • Where to do them.

After that, they can figure it out. (I mean, this is really easy stuff.)

It's a rule of thumb of tech writing that you don't document wizards. Some wizards may require help buttons on the screens, but no wizard should require user manual content.

Why not, I thought, use the same rule of thumb for simple tasks? So I happily ditched a lot of carefully written numbered steps and replaced them with much briefer sections per task - sometimes just a sentence.

I did this as a provisional approach until I get some feedback. But quite by accident tonight I stumbled on a post in the blog Gryphon Mountain Journal that presents research that comes to the same conclusion (Project Pinnacle, Episode 4: Rethinking What the Users Need).

I love excising useless bulk!

Sunday, September 2, 2012

Challenging misconceptions about readers

A problem many employees have is making incorrect assumptions about their customers.

You can start to understand misconceptions in your writing department in the following ways:

  • Research the writers.
  • Work backwards from defects in the docs.
  • Understand the types of misconceptions that typically occur.

The most common and pernicious assumption that writers make about customers is: "The customer is just like me." Often this assumption is a stand-in due to a void of knowledge. Sometimes it is more ingrained than that.

Another very common misconception is that customers are more focused on the software product than they actually are. In most cases, customers use a range of software products during the day, each with its own terminology, metaphors and processes. They might only use your product once a week or month, in which case they might not remember details between uses. Few of them will be expert users of the documentation, so you shouldn't expect them to remember a caveat you posted in the introduction or a definition you stuck in a glossary.

Another common misconception occurs at companies where developers provide customer support when problems are too difficult for support staff. The side-effect of this activity is that the developers start to think of the problems they solve as the norm and consequently over-emphasize the edge cases. They pass this bias on to writers, resulting in content that confuses most readers.

User research has two parts: the research and the dissemination of findings. Understanding misconceptions is especially important to the latter part. For example, personas should be designed to provide writers with user characteristics, but should also attempt to correct false assumptions. The pernicious thing about misconceptions is that they are internalized; frequently people don't even know that they're making the assumption. We don't always have to point out that their assumptions are in error; often it is best to just provide better information.

Saturday, September 1, 2012

Personas should be prescriptive not descriptive

There are lots of things that R&D departments should be doing to collect information about customers, including:

  • Direct methods: large-scale surveys, usability testing, round-table discussions, focus groups, interviews, in-situ observation, ethnographies.
  • Indirect methods: surveying sales and support staff, mining the support knowledge base, collecting web usage metrics, collecting data on searches.
There are lots of ways that R&D departments should be helping their staff learn more about customers, including:
  • Presentations, lunch 'n learns, written reports disseminating research findings.
  • Metric dashboards that employees can use to track customer responses.
  • Programs that let employees listen in on customer support calls.
  • Poster campaigns.
  • This list could go on and on.

Personas are just one way, among many, to disseminate customer information to employees. What many people don't get is that personas are fundamentally different from every other way to educate employees. Other methods present information and require interpretation. With personas, that work is already done for the employee: there is no (or at least little) interpretation left to do.

A small set of personas is going to be used as the stand-in for the entire universe of customers when employees make decisions about product design, implementation and documentation. They are a powerful tool that can fundamentally change outcomes.

Consequently, personas have to be prescriptive rather than descriptive.

Whenever you talk about personas with information architects, the first thing they'll say is "Personas must be based in research! They're garbage otherwise." But that is really missing the point. Yes, we should not just make stuff up about customers. We must have a solid foundation of real world knowledge. But NO, personas should not be derived from research.

Here's an example where the universe of users differs from the targeted users: A company has APIs that app developers are using to develop games. Game developers make up most of the user base. But the APIs aren't fully functional for good game development, and the company doesn't have the resources to beef them up. The APIs are really only useful for porting games from other platforms. The documentation needs to be very different for porting games than for developing them from scratch. A persona derived from research would describe a from-scratch game developer; the prescriptive persona the company needs is a developer porting an existing game.

Here's an example of personas that have no relation to the real customer base (I believe this comes from Alan Cooper's The Inmates are Running the Asylum, but I can't find my copy to verify that): When airlines were first developing the TVs mounted on the backs of seats, they used two personas: a young boy and an old woman. They deliberately chose edge cases because they wanted to be sure that the UI and controls would be simple enough that every passenger could use them. The personas are far from the average customer, who is probably a 35-year-old white male and an expert user of complicated gadgets. Had those personas been based on customer research, airplane TVs would have many more features and be much more difficult to use.

Once we accept that personas should be prescriptive, then we can see some implications for how we should be developing them:

  • Personas should not be created by researchers. They should be created by, or at least under the direction of, product managers and senior management.
  • Research should be part of the persona development process, but any description of users should be checked against the question: And is this who we want to be developing for?
  • Research will be useful in filling in "color" details of the personas.
  • Personas should be considered temporary constructs that are useful for limited time periods. Customer targeting can change frequently, so personas should be tweaked for every product release cycle.

I can foresee the other comment I always hear from information architects about personas: "You can't confuse development personas with marketing personas!" I'm not.

In my experience most attempts at personas fail, and I think the reason they fail is that they are descriptive. They are created by low-level employees - tech writers, researchers, information architects - who think they can derive personas from customer data.

I was starting to write this post a few days ago when I wrote my musings about visionaries and the lack of vision in software development organizations. We need a lot more vision in our companies, and a lot more dissemination of vision to writers and developers. The failure of persona development in many companies is just part of a larger failure of vision.

Note: My thinking about personas is markedly different from my usual approach to customer research. In most areas I favor a grassroots approach in which writers perform research themselves. Direct contact always makes a stronger impression than merely reading a report.