Thursday, December 6, 2012

Building an onion

When I approach a doc set that needs a lot of work, I think of my job as building an onion. That is, creating successive layers of quality. As I build my layers I learn the doc set and the product, making the next stages possible.

The first layer is typically the result of a copy edit: remove typos and grammatical errors.

Next, work on consistency. Start to revise terminology and wording conventions (and build up a style guide). Ensure the writing is active voice, second person, present tense. Read the doc critically, thinking about the suitability of the content for the target reader.

At this stage research is required: talk to managers, product managers, developers, customer support, and customers. Look at how competitors handle similar areas. Read Wikipedia articles for background. Read the standards that your product is based on. Read analyst research for your market. And so on.

Revisit your terminology decisions, and start to formulate a clear picture of your target readers.

Create a list of problem areas in the doc set. Prioritize them.

Sunday, November 18, 2012

Continuous improvement

In my many years as a tech writer, I have seen a lot of truly awful docs. The reason was almost always the same: the doc was needed in a rush, and then nobody ever went back to make it better.

Even with well-planned, carefully written and edited docs, there should always be a process of improvement. All too often there isn't. Worse, groups that have CMSs often have highly prescribed workflows that make no allowance for tweaking the doc after it is released. The assumption is that once released, the doc is finished. Perhaps they also worry that the company will incur higher translation costs if writers are allowed to polish published topics. But the way I see that kind of workflow is this: the workflow accounts for everything but quality. If it's a highly prescribed workflow, you have pretty well ensured that you won't get quality.

Why should docs be revisited and continually improved? Many reasons.

Writers are continually learning new things about the product, technology, users, and how everything fits together. Frequently, we don't really get it when we write the first version of a doc, even though we think we do at the time.

Also, things change. When you wrote that you recommend Flash v9, that was correct, but now Flash v11 is out, and you need to change the wording to "Flash v9 and later" - or whatever is the case. When you explained what a Ribbon is in a UI, ribbons were new, but now we've all migrated to Office 2007 or later and we don't need those long explanations in every task (are you listening, Madcap?). When your product first came out, Product Management wanted to focus on a certain module, but now the focus has changed. And so on.

There are always errors. It's inevitable. We can't ever assume that the docs are absolutely correct and complete - we should periodically be checking them, and preferably do a thorough review every so often. Readers notice when the same error persists in release after release, and it doesn't give them much faith in your company.

Finally, it is difficult to be fully objective while writing something. Going back with a fresh eye will always result in seeing things that could be said better.

There is a sentiment that I see in a lot of doc managers, writers and blogs that quality isn't all that important. All too often, doc departments prioritize things that don't matter one iota to their readers. But content is king: what matters is helpful content, plus navigation good enough that readers can find it.

Wednesday, October 10, 2012

DITA and the future of tech writing

This post is part of a series of posts that question some of the claims made about the benefits of DITA adoption.

DITA is designed to work with a CMS to create a fully structured tech writing environment. In a full DITA implementation, the process of creating technical documentation is fundamentally different from what is done in a traditional writing department. There are so many variations of tech writing processes that it's impossible to describe either the non-DITA or DITA structure accurately, but (with some trepidation), I'll take a stab at it...

In a traditional setup, at least for documentation that requires a fair amount of specialized knowledge, the majority of members of the doc department are writers. Typically, each writer researches, designs and writes one or more deliverables (and is effectively the project manager for the deliverable). There may be an editor, or the group may rely on peer reviews. There is a manager/team lead, but often the management style is quite flat, with writers making a lot of the decisions on things like style guidelines, user research, and priorities. In a high-functioning team writers are active in the development teams they work with, adding to terminology decisions and usability, as well as editing resource strings. The doc department may have a small tools team, or a writer may do double duty on tools maintenance.

By contrast, a DITA implementation is supposed to be more like building a house: writers create the bricks, but other specialists design the house and build it. Writers create small, structured, reusable modules of content. Architects create templates for the modules, and possibly also oversee information mapping of the modules. Map editors create maps that use the modules to produce deliverables. Editors enforce consistency. A team of tools developers maintains the complicated software required by the process. Team leads or architects act as project managers.
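To make the bricks-and-house metaphor concrete, here is a minimal sketch of a ditamap (the file names are invented). Writers own the topic files; a map editor owns the map that assembles them into a deliverable:

  <map>
    <title>Administration Guide</title>
    <!-- Each topicref pulls in a reusable topic module -->
    <topicref href="installing.dita">
      <topicref href="installing-linux.dita"/>
      <topicref href="installing-windows.dita"/>
    </topicref>
    <topicref href="configuring.dita"/>
  </map>

The same topic files can be assembled, in a different order, into any number of other maps.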

Writers must accept that they must spend a higher percentage of their time on tools and bureaucracy than they did in the traditional doc setup. They must also accept that they have much less control over the final output. This fundamental change often results in writers being unhappy about working in DITA, and the DITA literature goes on about how writers must be assimilated, and how failures to achieve return on investment are usually caused by writers having bad attitudes. But stop a moment: when your employees balk at a change, shouldn't you respect their instincts? Unless part of your business model for a DITA transition is that you want to reduce quality for readers, you should at least listen to the people who are responsible for creating that quality.

Instead, DITA proponents say that writers must shape up or change careers. I have heard it put as baldly as that: DITA is sweeping the tech writing field, and writers can no longer see themselves as project managers for readers. They are now simply a cog in a wheel. If they don't like it they won't get hired: they'll have to find a new line of work. The real tragedy of this attitude is that the writers who balk at losing their responsibility are the high-quality, senior ones who are passionate about their readers and have a professional attitude about how they work. Crappy writers will be perfectly happy assimilating to less responsibility. (They might be less happy when they realize that the transition to structured writing means that it will be much easier to ship jobs off-shore.)

DITA is a beautiful solution... if you're trying to document the parts for an airplane. It would also be suitable if you're documenting 50 similar products, each with end user docs that overlap. My problem with DITA is that it has been sold as a general purpose doc solution. DITA advocates went too far in extolling the virtues of DITA, such as saying that any company that translates content should adopt it.

When people complain about DITA, DITA proponents like to say that it's just a tool: if you don't like the meal, don't blame the knife. But DITA is much more than a tool. It's a tool developed to be used in a particular way, and there's no sense adopting it unless you also adopt the system of structured writing it was created for. The literature about DITA has also created a culture - such as the emphasis on assimilating writers - that permeates many organizations that adopt DITA. And the way DITA is meant to be used, creating reusable modules of content, creates a tendency for doc deliverables to have a certain look and feel. (More on that in another post.)

In fact, DITA is having a profound effect on all aspects of technical writing: on the way we work, the productivity of doc departments, our job responsibilities, and the quality of our output. I know that some call what I'm doing "DITA bashing", but we are past due for a deep reflection on the pros, cons, and appropriate use cases for DITA.

Tuesday, October 2, 2012

DITA ROI: Are translation savings all they seem?

This post is part of a series of posts that question some of the claims made about the benefits of DITA adoption. This post focuses on savings in translation costs.

Articles about DITA ROI make some rather sweeping claims about the money you can save by adopting DITA. One prominent DITA proponent writes, "If you have localization in your workflow, you can probably justify the cost of DITA implementation." I would argue that that claim is false: that most companies that localize their content would never recoup the costs of a full DITA/CMS implementation, and that DITA makes sense mostly in fairly extreme cases such as hardware documentation where there are hundreds of similar versions to be documented.

There are two main claims for translation savings with DITA: topic reuse and post-translation DTP costs.

Topic reuse
First, DITA is supposed to save you money because you can reuse topics. "Write once, use frequently" means that a topic is only translated once. Big savings, right?

Maybe yes, maybe no. Translators use Translation Memory. TM is very sophisticated: each sentence is read into memory, and each sentence is flagged if it is an identical or fuzzy match to a sentence before it. If you repeat a sentence, TM will ensure that it is only translated once.

There is still a cost for processing a 100% match, but it is minimal. Typically, the cost for identical repetitions is 15% to 30% of the cost of new translation.

What this all means is that if currently 10% of your topics are duplicates of other topics, your translation costs are higher by 1.5-3% than if you reused topics.
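To make the arithmetic concrete, here is a back-of-the-envelope example (all numbers invented for illustration). Suppose you have 1,000 topics at $10 each to translate, and 100 of them (10%) are duplicates of other topics:

  Without reuse: 900 × $10 + 100 × $10 × 0.15–0.30 (the TM rate for 100% matches) = $9,150–$9,300
  With reuse:    900 × $10 (the duplicates are just references, so there is nothing extra to translate) = $9,000

The difference is $150–$300: roughly 1.5–3% of the total bill.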

Note: You can get some additional savings from DITA with a CMS by transforming your ditamaps into an interchange format called XLIFF before sending them to the translator. This is a pretty complicated procedure; have a look at this link to see if your organization can handle it. (And I remain somewhat confused about XLIFF: my friend who runs a large translation company says, "Since our CAT tool can handle XML directly, it’s not necessary to go through the migration process into .xliff format.")

Keep in mind that the savings from topic reuse only apply to topics that you are currently maintaining in duplicate places. If you decide to start reusing other topics in more places, that could arguably improve your quality, but it does not improve your ROI. (Plus, I argued in another post that the reuse following DITA adoption is often actually harmful to reader usability: link)

It is true that translation costs rise for reused text when it gets out of sync - when different locations are updated differently. It is always a good idea before sending things for translation to spend some time preparing the files; syncing duplicate content should be part of that check, when it occurs. But even when translators get different versions of dupes, they charge less for fuzzy matches, so the price is not the same as translating the section twice.

My point here is about ROI, not how to write. I am not arguing that cutting and pasting content is good practice. But for many writing teams there is not so much duplication that there's any problem keeping up with it, and if there is, then there are many other systems that provide excellent mechanisms for reusing topics, including Docbook XML, other forms of XML, and Madcap Flare. An extremely expensive full-blown DITA implementation with a CMS is not the only way to reuse topics - and for many organizations, it is not the best. (More on that in a later post.)

Post-translation DTP costs
DITA is supposed to save you money because in other systems, work has to be done after translation. One prominent DITA proponent claims, "Typically, 30–50 percent of total localization cost in a traditional workflow is for desktop publishing. That is, after the files are translated from English into the target language, there is work to be done to accommodate text expansion and pagination changes."

This is a valid point, except that it rests on an unstated assumption: that you are using bad practices. When you start to localize your DTP content you should remove manual formatting and rely on styles instead. In addition, you can't use formatting that will cause problems in languages that have longer words or are more verbose. This means: stop adding manual page breaks, stop using format overrides (FrameMaker 10 provides an easy way to find and remove overrides), stop putting section headers in the margin, stop setting manual cell heights in tables, and stop using forced line breaks (Shift-Enter).

These practices will hugely reduce the post-translation DTP costs (certainly to way less than the stated 30-50%, although there is still a per-page DTP fee). When we talk about the advantages of DITA, we assume people are using good practices; we shouldn't assume that the alternatives are created with bad practices.

Conclusion
Articles about DITA ROI often give you rules of thumb to use in your calculations. Their claims are almost always based on an unstated assumption that your current authoring environment is the most inefficient one possible, and even then their claims can be over the top. It is prudent to ignore this advice and instead go to your translation vendor to find out what your cost savings might be. I have become friendly with the managing director of a translation vendor I once worked with, and he assures me that translation cost is virtually the same when the source is DITA, Docbook, other forms of XML, Flare's XHTML, HTML, etc.

I have spoken with doc teams who are planning to move from Docbook XML to DITA simply because they are confused by these DITA ROI articles and think that the massive translation savings will apply to them. This is not a trivial issue. DITA proponents should be much more precise in the claims they make about DITA cost savings, and doc departments should be much better educated before jumping on the DITA bandwagon.

Note: I'm uneasy about quoting individuals. It isn't fair to single out any particular DITA proponents on how they justify DITA ROI, as many DITA proponents are saying similar things. In addition, I don't mean to impugn the motivations of anyone.

Update: I have a growing unease about quoting people and then knocking down what they say. I have now removed links to DITA proponents I quote. In later posts, I may even stop quoting.

Friday, September 28, 2012

The True Costs of DITA Adoption

For anyone who disagrees with this post: please leave a comment or send me an email. If I am incorrect about something, I will modify the post so that I am not spreading misinformation. And I would love to have a dialog on the topic.

There are many articles that advise doc managers about how to calculate return on investment for potential DITA adoption. Most of these articles seem to be written by consultants who make money by helping companies set up DITA systems: they have a vested interest in making DITA look beneficial. Also, they tend to help out during the initial migration and might not be around when some of the costs kick in: they simply might not be aware. Finally, they might deal mostly with large companies where large expenditures do not seem excessive. For whatever reasons, the literature seems to be underestimating the true cost of moving to DITA.

There can be a lot of costs related to DITA adoption. Here are some that might affect you. (Different implementations will vary somewhat.)

Most of us know about the cost of a CMS, which can set you back over $250,000 (or might be a lot less). You can do without the CMS, but DITA is designed for use with a CMS and you need one to get the full benefits. But the CMS is just the start.

In the early stages of your migration to DITA, you will likely need to hire DITA specialists (the consultants I mention above) to help you plan and set up your system.

You'll likely need more in-house tools developers, and you may need developers with different skills than you currently have. This is not just to set up the new publishing system and so on, but also to troubleshoot publishing problems, adapt to new browser versions, and address all the bugs and glitches. In my experience there are all sorts of problems that crop up with the relational database (CMS), and also with the ditamaps, formatting of outputs, and many other things. Part of the problem is that the DITA Open Toolkit is notoriously difficult (and could require extra expense for things like a rendering engine). Some of the tools designed to work with DITA are arguably not quite up to speed yet. Your tools developers will also spend a lot more time helping writers.

(If you don't move to a CMS, but use something like FrameMaker and WebWorks ePublisher, you may find that you have a lot more headaches in producing docs without much in the way of DITA benefits.)

You need extremely skilled information architects to create a reuse strategy, carry out information-typing exercises, and design and edit your ditamaps. This isn't a skill set that people currently in your organization can easily acquire. Even most information architects have trouble adequately mapping topics. For a discussion of the sorts of challenges they'll face, see this series of posts; the moral is: if you don't have skilled architects working on your system, you may end up with Frankenbooks that are not particularly useful for your readers. I raise some additional topic reuse problems in an earlier post: link.

You'll need to spend significant time developing new processes, policies, and internal instruction manuals.

Your team will have to undergo intensive training. In coming years, new writers you hire will also need training. I have found that writers can move to Docbook XML with very little training, but DITA requires a great deal of training, not just for the CMS, but also to learn how to use ditamaps, reltables, and so on.

The migration of content will likely be quite time consuming, with manual work required to correct tagging that doesn't convert automatically, to map the topics, and to do a complete in-depth edit.

Your writers will need to spend more time on non-writing activities. This can greatly reduce their productivity. Working with a relational database, especially an opaque one like a CMS, is much more time consuming than checking files out of a version control system. Creating reltables is a lot more work than adding links. Coordinating topics is a lot more work than designing and writing a standalone deliverable. Plus, there is a lot more bureaucracy associated with DITA workflows.
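For readers who haven't worked with reltables: in a standalone deliverable, a writer drops a cross-reference inline wherever it helps, something like <xref href="backing-up.dita"/>. In a DITA map, related links are instead managed centrally in a relationship table. A minimal sketch, with invented file names:

  <reltable>
    <relheader>
      <relcolspec type="task"/>
      <relcolspec type="concept"/>
    </relheader>
    <!-- Topics in the same row are cross-linked in the output -->
    <relrow>
      <relcell><topicref href="backing-up.dita"/></relcell>
      <relcell><topicref href="about-backups.dita"/></relcell>
    </relrow>
  </reltable>

Every new linking relationship means finding and editing the right row in the right map, instead of typing a link where you are.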

With most DITA implementations, topics exist in a workflow that only starts with the writer. You'll probably need more editors and more software.

You'll also probably need more supervisors. The DITA literature emphasizes the importance of assimilating writers to the new regime and then monitoring their attitudes. Pre-DITA, writers were project managers for their own content; with DITA they have to learn to hand that responsibility off to others.

There are some organizations, such as ones that have to cope with hundreds of versions of a hardware product, that have a clear ROI for DITA. But many (most?) organizations could find that DITA doesn't so much save money as redistribute money. Where before you spent the lion's share of your doc budget on salaries for writers, now writer salaries will be a much smaller proportion of your budget. In many cases, companies could find themselves facing higher costs than pre-adoption: they will never see return on their investment. And given the complexities of using DITA, ongoing hassles and escalating costs, some companies are going to find themselves having to ditch DITA and go through an expensive migration to another system.

Wednesday, September 26, 2012

Introducing "Information Designers Waterloo"

Mark Pepper and I created a LinkedIn group today: Information Designers Waterloo.

It's a local group for technical writers, information architects, editors, instructional designers, doc managers, doc tools developers, and anyone else who is interested in discussing tech writing issues, networking, job info, and... whatever this turns into.

I'm hoping we can have some lively discussions and some local meet-ups.

So if you're in the Waterloo area and if you're in the biz, please join up!

Monday, September 24, 2012

Think ecosystem

(This is part of my series on what I learned at the Fluxible conference over the weekend.)

Michal Levin of Google talked about the growing interactivity of devices, and what that means for app development. She said...

In our new era of connectivity, we'll increasingly be designing apps not for a phone or a PC, but for an ecosystem of device types (phone, tablet, PC, TV and... who knows). Here are three new design models:
  • Consistent design - For apps like Trulia, which have versions for the PC, tablet and phone. All should have a similar IA, navigation model, and look-and-feel.
  • Complementary design - For apps where devices control or influence each other. For example, with KC Dashboard you use a phone to throw darts at a dart board on a tablet. Some other examples interact with a TV: Real Racing 2, Slingbox, Heineken Star Player.
  • Continuous design - When you start a process on one device and then move to others. For example, All Recipes starts on a web site (probably viewed on a PC) where users choose recipes for a meal. The web site downloads a shopping list to a phone, and downloads recipe instructions to a tablet.
The ecosystem approach is just getting started, but we should all be aware of it and plan for it. For example:
  • If you are creating an app for a single device type, you should consider ways to make it scalable and flexible for the future.
  • You may need to use cloud computing for the data sync required by multi-device coordination.
  • Some companies should consider changing their corporate structure so that development for phones, tablets and PCs is not done in separate divisions.
  • The ecosystem concept will expand as computer chips become ubiquitous in home appliances; as their use in cars improves; etc.
In his closing keynote, Dan Gardenfors of TAT picked up on Michal's idea of future extensions to our online world. He envisioned a world of public computing where screens are everywhere (on buildings, bus shelters, bus windows, etc). One use of these public screens could be to display whatever we are watching on our phones. Other uses will be advertising and public information. He suggested we start thinking in terms of a mobile computing platform rather than a platform for mobile devices.

The future of IA... or what to do with all that content

(This is part of my series on what I learned at the Fluxible conference over the weekend.)

Karl Fast, a prof at Kent State, talked about the huge load of information that is available online and how we can cope with it. He doesn't have a solution, but suggested themes that information architects might pursue towards finding a solution. He said...

Our current ways for processing all this information are:

  • Intentional structure, such as a library or folders
  • Algorithms plus computation (the Google approach)
  • Loosely coordinated group actions (the Wikipedia approach)

Information is cheap but understanding is expensive - and we have not figured out how to bridge that gap. IA is about understanding the world as it may be, and a fundamentally new way of processing information is needed, but it might not happen for a long time. In working towards that new understanding, he has come up with some themes that might prove fruitful. They seek to understand how people work with content. They all fit into the idea of cognition as extension. For example, when we do a jigsaw puzzle we look at the puzzle, using just our brain, but we also move the pieces with our hands. We combine pragmatic action and epistemic action.

These are the themes he thinks might be fruitful in moving to a better way to process content:

Deep interaction - He argues that our mental model is wrong: we think cognition is all in the brain, but it's really part brain, part external: gestures, inflection, content artefacts, people, devices, etc. (When we do usability testing we usually ignore a lot of really important stuff like gestures.) Scientists have discovered that we don't just use our brains to think: thinking is a mind-body integration. For example, gestures are so important that we don't think as clearly if we don't gesture. Another example: this study of Tetris users concluded that thinking is not all in the brain.

Coordination/orchestration - We coordinate a variety of online devices, books, a whole bunch of things. It's not as simple as processing one thing. We have multiple PCs, tablets, smartphones, etc., and also have non-electronic content to coordinate. To figure out how to process the vast new amount of info we face, we should think of PHINGS: Physical, haptic, interactive and information-rich, networked and next-generation, stuff on surfaces. These refer to:

  • Physical: not just the brain, but also the body and physical things
  • Haptic: non-verbal communication
  • Interactive and information-rich
  • Networked and next-generation
  • Stuff on surfaces: on screens or paper

Mess - We tend to treat mess as a bad thing to be avoided, but mess is necessary and should be worked into our theory. "Mess is the soul of creativity." Messy desks, messy desktops, messy bookshelves. Mess is a fundamental part of reality and we need a way of describing it. The idea of mess is completely tied up in the idea of deep interaction and PHINGS. Mess reflects our reality.

While talking about mess, Fast told an anecdote about Steve Jobs. Jobs liked to popularize a myth that he lived a stripped-down, simple life, and played up a time when he had no living room furniture... but this is what his home office looked like:

[photo of Jobs' cluttered home office]
All this is a bit difficult to comprehend, especially as it applies to a way to process online content. I was fascinated but not exactly led to enlightenment. I am not sure I have captured his meaning perfectly, but I'm going to keep Karl Fast on my watch list. You can find his published papers here: Mendeley Profiles.

Update: Karl Fast recently left Kent State to become an information architect at Normative Design.

Apps suck

(I was at a user experience conference called Fluxible over the weekend. This post describes one of the talks I attended, and is part of a series.)

James Wu, lead designer for tablets at Kobo, gave a talk called "Rethinking tablet UX". His central insight is that users are interested in content, not apps. He said...

Technology sucks for most people. Unlike most of us at Fluxible, most people don't want to know, understand or learn the techie features of their devices. They don't like having their main tablet navigation be small impersonal icons that represent all their apps. They hate it that their content is stored within apps. They're interested in content: in their movies, music, pictures, books.

This is what people want to do with their tablet:
  • Find content
  • Organize content
  • Consume content
At Kobo, James has been involved in the development of a new tablet look called Tapestries, which is a way to let users focus on content. He calls this "organic curation". Users can create a tapestry, multiple tapestries, and sub-tapestries. The full sample he showed us was a page a woman might create to plan her wedding. It had lots of big pictures.
- - -
I wish he had also shown us something more meaty, like a tapestry I might use to store research on something.

In fact, for my personal use, I'm not convinced that Tapestries is the best application of this idea. But the idea is a winner. I hate apps. I hate having four screens of the stupid things on my phone. I hate having to remember which eReader has which books. I think content-centric UI design is/should be the next wave.

Sunday, September 23, 2012

Case study: LinkedIn web site fails?

It's astounding how an important, established web site like LinkedIn could have such utterly crap design on its Groups page. Using "More" as a category is always an unfortunate design decision (although sometimes necessary), but on this page, LinkedIn uses the "More" category three times (as marked in red).

[screenshot of the LinkedIn Groups page with three "More" menus marked in red]
I belong to a lot of LinkedIn groups, and I only ever click one setting on the whole page (other than group links). That option is in the second "More" listing and it's "Settings"...
... and the only thing I ever do there is stop LinkedIn from its default behavior of sending me endless emails about discussions in the group.

The kicker: the third "More" (in this view) doesn't even do anything.

Sunday, September 16, 2012

Musing about topic reuse

I first started writing in a dot command language called Script. Later I used TROFF and then LaTeX. But eventually the WYSIWYG editor was born (I had mixed feelings about it at first), desktop publishing applications and laser printers appeared, and Macs hit the market - and typefaces hit the world with a bang.

Tech writers, wanting to show off what they could do, started using typefaces like there was no tomorrow. Some manuals were so busy that it seemed like every word was italic, bold, colored, or in a different font altogether. They were hard to read.

(We still overdo typefaces to a certain extent. I would like to use bold only to designate words that I want to "pop" off the page, and not for UI controls and so on... but that appears to be a battle I have lost.)

Nowadays I wonder if topic reuse is a bit like those heady days of typefaces. When we say, "my doc set has topic reuse of 30%," that doesn't mean "my doc set needs to have topic reuse of 30%." There is a sense of "Topic Reuse Good, No Topic Reuse Bad." There is a need to justify spending a lot of money for tools like CMSs that facilitate topic reuse.

I have also noticed a tendency among some writers to pad out their deliverables with other people's topics when it isn't really helping the reader. When working in large writing departments, I have found my topics in odd places where a link would be more useful. In one instance I took a deliverable that was 50 pages in PDF form and deleted the reused topics, creating a much more focused, useful doc that fit on one HTML page. The reused topics in that example were actually harmful because they were pulled out of context. I know, topic-based writing isn't supposed to have context, but in many cases it does, especially with complex subjects.

The problem is that writers are given latitude - and are even pressured - to reuse topics when there is no clearly defined reuse strategy. In fact, I have never seen a well-articulated content reuse strategy. You see descriptions of the mechanics of reuse, like this one or this one, but they don't provide guidance on why to reuse topics and which topics to reuse.
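For what it's worth, the mechanics are the easy part. In DITA, reuse is typically either a topicref that pulls a shared topic into a map, or a conref that pulls a shared element into a topic. A quick sketch, with invented file and ID names:

  <!-- Reusing a whole topic in a map -->
  <topicref href="shared/safety-warning.dita"/>

  <!-- Reusing a single element from a shared topic -->
  <note conref="shared/common-text.dita#common-text/power-warning"/>

What the literature leaves out is the judgment layer: when does pulling that topic into this deliverable actually help the reader?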

Sometimes topic reuse makes no sense, like a doc set I once saw that repeated a large set of introductory topics at the beginning of every deliverable, which to my mind just clogged up the user experience. Even worse, a decision was made to include the topics in the HTML and not the PDF - the reasoning being that the topics weren't really needed and would add to printing costs - which confused readers about whether the two outputs were two different sets of content.

Sometimes topic reuse strategies are ill-considered, such as trying to use doc topics in training material. Docs and training require such different styles of writing that reusing between them can result in really bad output; it only seems to make sense when cost-saving requires highly compromised training materials. (Which, in that case, is fine, as tech writing is necessarily all about doing the best we can with available resources.)

Sometimes topic reuse becomes a sort of mania, an end in itself. I once saw a DITA topic that single-sourced a table that described fields in a screenshot. There were three versions of the screenshot, each a completely different UI screen, each marked with attributes for a different product. In the table, there were two or three rows that were not conditionalized. There were about a dozen other rows that each appeared three times, marked with attributes for the three different products. Updating the table was a nightmare, as you might imagine.
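For anyone who hasn't seen this pattern, here is a schematic sketch of the kind of markup involved (the product values are invented; each product's build filters the rows with its own ditaval file):

  <table>
    <tgroup cols="2">
      <tbody>
        <!-- Shared row: appears in all three products -->
        <row>
          <entry>Name</entry>
          <entry>The name of the record.</entry>
        </row>
        <!-- The same field, described three times over -->
        <row product="alpha">
          <entry>Mode</entry>
          <entry>Description as it applies to product Alpha.</entry>
        </row>
        <row product="beta">
          <entry>Mode</entry>
          <entry>Description as it applies to product Beta.</entry>
        </row>
        <row product="gamma">
          <entry>Mode</entry>
          <entry>Description as it applies to product Gamma.</entry>
        </row>
      </tbody>
    </tgroup>
  </table>

Multiply that by a dozen conditionalized fields and you can see how every update becomes a three-way merge done by hand.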

This is not to say that topic reuse is not useful. Anyone who has had to modify the same text in two places knows how important topic reuse is. But I have never documented hardware, so I have never worked on a doc set that required a lot of topic reuse. Consequently, approaches like this one, in a department where 48 writers produce 14,000 publications, were not even remotely applicable. (I have worked in departments with that many writers, but never with even one percent of that many publications. It is my contention that that is not the norm.)

My preferred approach would be to reuse topics when previously you would have cut and pasted the content, and otherwise a writer would be required to make a case for reusing a topic. My reasoning is that we must never force the reader to read anything more than they need to read. (Unfortunately, minimalism is often thought of in terms of our convenience and costs rather than in terms of reader usability, as it should be.)

We are in danger of reusing too much because reuse is easy and because there are non-reader incentives to reuse, as described above. The problem with my approach is that it could keep the stats on reuse low, which wouldn't help with proving ROI for the CMS or other tools that were bought with reuse as a justification. But it would help avoid a tendency to go hog-wild and reuse when it's detrimental to readers.

Saturday, September 15, 2012

We need a frank, open discussion about the problems with DITA adoption

My second post on this blog was Case study: DITA topic architecture, in which I described some problems I inherited (twice) with DITA topic architecture.

Thanks to Mark Baker, author of Every Page is Page One, the post was widely read. (He referenced it on his blog and also tweeted it.) The post got hundreds of page hits and generated several comments and a few emails. It also spawned a somewhat defensive thread on an OASIS forum.

I have a lot to say about DITA. I have been holding back because I was concerned that my new blog would be written off as a DITA-bashing forum. I have a lot of other, less controversial (or differently controversial) things to say, and I didn't want to turn off a whole section of the tech writing community even before anyone knew who I was. But it seems that despite my best intentions I have been branded an antiditastablishmentarian. :-) So here I go...

I think it's time that we have a frank and open discussion of the pros and cons of DITA. For years now all discussion of DITA has been dominated by its proponents; we have heard plenty of arguments for why to adopt it. We need an open discussion not to bash DITA, but to uncover issues so that we can address them.

Here are just a few of the issues I want to address:
  • Has DITA changed tech writing output? Is there a discernible style to docs created in DITA? If so, is this what we want - and how can we change it?
  • How has DITA changed the work environment for writers? Do writers have less control over their content in DITA shops? What is the effect of that on quality?
  • How has XML/CMS adoption affected the creative process for writers?
  • What is the culture of DITA, and how widespread is it? Has the emphasis on monitoring writers' attitudes towards DITA changed the culture of tech writing?
  • How much is DITA really costing companies, when you include the need for enhanced tools teams and information architects, CMSs, and more time spent by writers on non-writing activities?
  • DITA proponents make claims about the cost of non-DITA solutions, such as that writers spend 30-50% of their time formatting. How true are these claims?
  • Has the rise of DITA increased the influence of consultants on tech writing? How has the agenda of consultants (to attract business) changed our profession?

These issues are of immediate, practical interest to me. I lead a doc team that uses DITA. I have authored in XML for 12 years. I have been a judge in the international STC competition (which judges the highest-scoring winners of the local competitions) for over a decade, giving me a chance to see the trends in our profession.

To the DITA proponents, I want to say that there is more that unites us than divides us, and to let you know that my goal is always to eventually reach common ground. My other, much longer-running blog is largely about politics so I have experience with this approach. I hope some of you will stick around to duke it out so that we can reach some consensus.

My bottom line is: I think there are some things to be concerned about with the widespread adoption of DITA, and we can't fix them if we don't acknowledge them. Let's dive in and see where the discussion takes us.


Friday, September 14, 2012

The messy side of metrics

As I have said before, I'm a big fan of diving into the weeds and figuring out data. Sometimes this is a lot of hard work with very little to show for it. Every once in a while it has spectacular results.

I once worked at a company that had an unsearchable knowledge base. I won't go into the technical details, but it was difficult to collate the data in it. Consequently there had been very little reporting of customer problems.

I decided to investigate the knowledge base. I was able to output a massive text file of all support calls logged over the previous 18 months. I went through it, cutting and pasting issues into other documents, which was about the crudest way imaginable to collate electronic data. It seemed to take forever, but in reality it took about 24 hours of heads-down concentration.

I discovered a number of things, but the most dramatic was that a single error message was responsible for a high percentage of support calls. It was relatively easy to write documentation to help users who got the error so that they didn't need to call customer support. (I had to write a spec for a compiler to attach the doc to the error message, but that is another story.) Afterwards the number of customer support calls fell dramatically.

It's sort of embarrassing to admit to the lack of technical savvy in my approach, but I think that's what makes the story worth telling. There isn't always a nifty technical solution for data analysis. Saying "It can't be done" often just means, "There's no easy way to do it." Also, when you handle data manually you notice things that wouldn't be apparent in an automatically generated report.

Thursday, September 13, 2012

Case Study: Metrics (feedback button data)

I once conducted a project to analyse the user feedback that my then-employer received from the feedback button on help pages. We had all been using the feedback to determine which topics needed attention and to understand doc usage. But I quickly saw that there were some pretty serious problems with the data.

If there was a bus to the future, would you get on it?

I would.

(This question was posed to me by my brother the other day... the sort of stuff siblings talk about over email while at work.)

Cautionary Tales: Metrics (performance measurement)

(I'm a big fan of IT people. Also, nobody likes measuring things more than me. But interpretation is everything. Or rather, a little knowledge is a dangerous thing.)

I once worked in a company where an IT team of three people provided technical support to about 1,000 employees. One of the IT guys was great and the other two were really bad. By bad, I mean that when an employee asked them to fix a problem on their PC, these two guys typically couldn't fix the problem and often also created new problems.

Then one day the good IT guy got fired. We learned later that the company had instituted metrics to measure performance of the IT department, and had determined that this fellow was too slow: metrics showed that it took him two or three times longer, on average, to close a case.

What everyone in the company knew (except, apparently, IT management), was that the good IT guy took all the complex and difficult problems, while the other two guys dealt with the sort of mundane issues they could handle. Difficult problems take longer.

I might have just passed this off as incredible stupidity, but the next company I went to also had three people in the IT department. They were extremely overworked (causing delays that affected productivity all over the company), but one day one of them got laid off. We learned later that the company had instituted metrics to measure performance of the IT department, and the metrics showed that there wasn't enough work for three. The metrics system only covered issues that were logged through a web site, but employees mostly just called IT without logging an issue.

Friday, September 7, 2012

Not all docs are public

Here's a quote from an interview in this week's CIDM Information Management. It's an intelligent and literate quote but I'm going to take exception to it anyway: "I think that Google changed our world as technical communicators. All of a sudden you could type in a series of random words and get a very close match to what you were looking for. That makes traditional informational organization obsolete: the back-of-the-book index and the table of contents go out the window. People want their little sound bite that corresponds with their one question that they have right then. We can no longer provide big fat manuals." (link: Tracy Baker Shares 20 years of Tech Comm Experience)

I have heard the sentiment several times lately that Google has made traditional information organization obsolete. The problem is: of the ten companies that have employed me as a technical writer, only two produced documentation that was publicly available on the internet. At my current job, my docs are not public.

When my docs have been available through Google, readers tended to use Google at the start of their doc search. Google is undeniably the best route, and can lead to a slew of information sources.

Monday, September 3, 2012

Skipping the steps

I was updating some end user docs today that were task-heavy, and after a while could barely bring myself to read through the highly repetitive, obvious steps.

It dawned on me that none of the procedures were doing much to help the user. The user needs to know:

  • That they can do things.
  • Where to do them.

After that, they can figure it out. (I mean, this is really easy stuff.)

It's a rule of thumb of tech writing that you don't document wizards. Some wizards may require help buttons on the screens, but no wizard should require user manual content.

Why not, I thought, use the same rule of thumb for simple tasks? So I happily ditched a lot of carefully written numbered steps and replaced them with much briefer sections per task - sometimes just a sentence.

I did this as a provisional approach until I get some feedback. But quite by accident tonight I stumbled on a post in the blog Gryphon Mountain Journal that presents research that comes to the same conclusion (Project Pinnacle, Episode 4: Rethinking What the Users Need).

I love excising useless bulk!

Sunday, September 2, 2012

Challenging misconceptions about readers

A problem many employees share is making incorrect assumptions about their customers.

You can start to understand misconceptions in your writing department in the following ways:

  • Research the writers.
  • Work backwards from defects in the docs.
  • Understand the types of misconceptions that typically occur.

The most common and pernicious assumption that writers make about customers is: "The customer is just like me." Often this assumption is a stand-in for missing knowledge. Sometimes it is more ingrained than that.

Another very common misconception is that customers are more focused on the software product than they actually are. In most cases, customers use a range of software products during the day, each with its own terminology, metaphors and processes. They might only use your product once a week or month, in which case they might not remember details between uses. Few of them will be expert users of the documentation, so you shouldn't expect them to remember a caveat you posted in the introduction or a definition you stuck in a glossary.

Another common misconception occurs at companies where developers provide customer support when problems are too difficult for support staff. The side-effect of this activity is that the developers start to think of the problems they solve as the norm and consequently over-emphasize the edge cases. They pass this bias on to writers, resulting in content that confuses most readers.

User research has two parts: the research and the dissemination of findings. Understanding misconceptions is especially important to the latter part. For example, personas should be designed to provide writers with user characteristics, but should also attempt to correct false assumptions. The pernicious thing about misconceptions is that they are internalized; frequently people don't even know that they're making the assumption. We don't always have to point out that their assumptions are in error; often it is best to just provide better information.

Saturday, September 1, 2012

Personas should be prescriptive not descriptive

There are lots of things that R&D departments should be doing to collect information about customers, including:

  • Direct methods: large-scale surveys, usability testing, round-table discussions, focus groups, interviews, in-situ observation, ethnographies.
  • Indirect methods: surveying sales and support staff, mining the support knowledge base, collecting web usage metrics, collecting data on searches.
There are lots of ways that R&D departments should be helping their staff learn more about customers, including:
  • Presentations, lunch 'n learns, written reports disseminating research findings.
  • Metric dashboards that employees can use to track customer responses.
  • Programs that let employees listen in on customer support calls.
  • Poster campaigns.
  • This list could go on and on.

Personas are just one way, among many, to disseminate customer information to employees. The thing about personas that many people don't get is that personas are fundamentally different from every other way to educate employees. Other methods present info and require interpretation. With personas, all that is done for the employee: there is no (or at least little) interpretation.

A small set of personas is going to be used as the stand-in for the entire universe of customers when employees make decisions about product design, implementation and documentation. They are a powerful tool that can fundamentally change outcomes.

Consequently, personas have to be prescriptive rather than descriptive.

Whenever you talk about personas with information architects, the first thing they'll say is "Personas must be based in research! They're garbage otherwise." But that is really missing the point. Yes, we should not just make stuff up about customers. We must have a solid foundation of real world knowledge. But NO, personas should not be derived from research.

Here's an example where the universe of users differs from targeted users: A company has APIs that app developers are using to develop games. Game developers make up most of the user base. But the APIs aren't fully functional for good game development, and the company doesn't have the resources to beef them up. The APIs are really only useful for porting games from other platforms. The documentation needs to be very different for porting games than for developing them from scratch.

Here's an example of personas that have no relation to the real customer base (I believe this comes from Alan Cooper's The Inmates are Running the Asylum, but I can't find my copy to verify that): When airlines were first developing those TVs that are on the back of seats, they used two personas: a young boy and an old woman. They deliberately chose edge cases because they wanted to be sure that the UI and controls would be simple enough that every passenger could use them. The personas are far from the average customer. The average customer is probably a 35-year-old white male who is an expert user of complicated gadgets. Had those personas been based on customer research, airplane TVs would have many more features and be much more difficult to use.

Once we accept that personas should be prescriptive, then we can see some implications for how we should be developing them:

  • Personas should not be created by researchers. They should be created by, or at least under the direction of, product managers and senior management.
  • Research should be part of the persona development process, but any description of users should be checked against the question: And is this who we want to be developing for?
  • Research will be useful in filling in "color" details of the personas.
  • Personas should be considered temporary constructs that are useful for limited time periods. Customer targeting can change frequently, so personas should be tweaked for every product release cycle.

I foresee the other comment I always hear from information architects about personas, and that is: "You can't confuse development personas with marketing personas!" I'm not.

In my experience most attempts at personas fail, and I think the reason they fail is that they are descriptive. They are created by low-level employees - tech writers, researchers, information architects - who think they can derive personas from customer data.

I was starting to write this post a few days ago when I wrote my musings about visionaries and the lack of vision in software development organizations. We need a lot more vision in our companies, and a lot more dissemination of vision to writers and developers. The failure of persona development in many companies is just part of a larger failure of vision.

Note: My thinking about personas is markedly different from my usual approach to customer research. In most areas I favor a grassroots approach in which writers perform research themselves. Direct contact is always more instructive than just reading a report.

Wednesday, August 29, 2012

Musing about visionaries

I was most affected by Walter Isaacson's new biography of Steve Jobs. The book is a fascinating history of the personal computer, as well as a case study for how products are developed. I'm neither a Jobs fanboy nor a Jobs hater, and actually never paid much attention to him (even though, working in 'the biz', his name came up a lot). The Apple revolution sort of passed me by. I worked on Macs for a couple of years in the 90s and my main reaction was that they were underpowered. I'm simply too cheap to buy an iPhone or iPad, even though I recognize their value in opening up a new world of integrated data. Before reading the book, if you asked me to sum up Steve Jobs' vision, I'd have said something like, "making things that are shiny and white".

The thing that struck me most about Isaacson's book is the advantage of having a visionary driving product development (and I now get it that Jobs was the ultimate visionary). In fact, since reading the book and thinking about places I've worked, many of them now seem utterly without vision - rudderless, or worse, propelled solely by interdepartmental politics and personal empire building. The exception is R&D departments run by a VP who has been on the team for a long time and who immerses himself in all the details of development. I have worked in a few companies that were run that way, and it seems an ideal structure - although none of them performed spectacularly. That lack of success may have come from the fact that R&D VPs tend to be developers who rise to the top, and possibly they bring with them too much reliance on a certain approach. As Alan Cooper would say, they are the inmates running the asylum.

Management visionaries are a double-edged sword. If they get it right, they're aces. But often they get it wrong, as Mike Lazaridis famously did when he decided that smartphone users would never embrace apps, so the iPhone was not a threat to the BlackBerry. You can't expect visionaries to succeed without quality inputs, which gets back to corporate structure: you need your visionary to be working with customers, feeling the bottom line, feeling the pain, feeling future trends. If your visionary is by nature an engineer, you might want to focus them on areas more suited to their mindset than things like consumer trends.

I was sympathetic to Jobs' frustration in getting his employees to do things his way. I wouldn't have wanted to experience his tantrums, but I can see why he had them. He never seemed to be able to get through to anyone - including Isaacson - that we need a fundamental change in our approach to developing products. Jobs’ employees wanted to create features while he wanted to create a user experience. He wasn't always right (or so it seems with hindsight), but his approach was.

In most companies it is a slow, difficult process to change the way people approach their work. So much is determined by corporate structure (e.g., you get very different results if the Doc department reports to Marketing or to R&D). So much is determined by corporate culture (a bureaucratic mindset can deaden any initiative). Even when the people at the top try to change things, they don't always succeed, or succeed fast enough. To get back to the example of Mike Lazaridis: once he saw the light, was he ever successful in firing up his employees about superapps? I don't think so.

I don't want to say that visionaries are always at the top. Visionaries can exist at every level of employment, bringing that special something extra to whatever they're responsible for. The tragedy of mediocre managers is that they tend to feel threatened by visionaries and try to stomp out their initiative. It is also the tragedy of some corporate processes. In a DITA world where the writing process becomes a factory line of inputs, how do you handle writers who really get the user experience and have a vision for how to improve it? Sadly, the DITA revolution that is sweeping the industry seems determined to wipe out great writers. If doc management were able to recognize that trend, perhaps they could find ways to empower writers, even in the DITA paradigm. That would make a good talk for a CIDM conference: "The Effect of DITA on Writer Creativity, User Focus, and Empowerment - and How to Reverse It."

Tuesday, August 21, 2012

Tips for finding a tech writer job

I'm just going to bung down everything I can think of here, so some of it might be somewhat obvious. (And if you're wondering why I think I'm qualified to tell people how to find a job, the answer is that fortunately or unfortunately, I'm an expert!)

Form relationships with agencies
The ideal situation is to have agencies and headhunters contacting you about jobs, rather than you just applying for things. You should apply directly to agencies. For example, look up local sites for Procom, Ian Martin IT, Stratix, Tech Capital Partners, Bagg Technology Resources, Redwood Global, ProVision, Tundra Technical, or Silver Creek Partners. (Those are just some that I've worked with. There are tons of others, and a growing number in India.)

Sometimes a headhunter or agency will ask you in for a general interview. Depending on how you do, that could get you on their list. At the interview, the recruiter can give you all kinds of info about the current job market. They will also often make suggestions about your resume, and I recommend that you take their suggestions very seriously. They know.

To get on the list of headhunters, it helps to apply for lots of things, especially if the application goes to an agency.

If a headhunter or agency contacts you about a job, always respond. If you're not interested in the job, offer to tell your friends about it. If you’re a useful contact they’ll be more likely to keep you in mind.

Even when you're not interested in a job, always ask how much it pays: finding out the going rates is really useful. Also, when you're asked what rate you want, some employers won't even negotiate if your number is out by more than a certain percentage.

Once you have a relationship, stay in touch. Send them resume updates. Let them know you're still looking. Connect with them on LinkedIn.

Post your CV on job sites
Employers, agencies and headhunters regularly search through job sites such as Monster, Workopolis and Dice. (Here is a Canadian site for Dice: link)

Employers and recruiters search these sites electronically, so make sure you include all the keywords, software and skills that they will use when looking for someone for a job you want. Look at job ads to figure out what those words are. List all the tools you've used, even ones you could pick up quickly. For example, if a job requires a writer who knows Word and Excel, the recruiter might pull only the CVs that list both.
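To see why keywords matter so much, here is a toy sketch of the kind of filter a recruiter's electronic search amounts to. This is my own invention, not any actual recruiting tool, but the principle is real: if a required keyword isn't in your CV, you're invisible.

    import re

    # Keywords the recruiter searches for (hypothetical example).
    REQUIRED = {"word", "excel", "dita"}

    def cv_matches(cv_text: str) -> bool:
        """Return True only if the CV contains every required keyword."""
        words = set(re.findall(r"[a-z+#]+", cv_text.lower()))
        return REQUIRED.issubset(words)

    print(cv_matches("Senior writer: DITA, Word, Excel, API docs"))   # True
    print(cv_matches("Senior writer: DITA, API docs, Office suite"))  # False

The second CV belongs to someone who almost certainly knows Word and Excel, but the search will never surface it.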

Update your profile regularly: adding or modifying a profile makes it more visible to employers.

Use LinkedIn effectively
Some employers and agencies restrict searches to people who have three recommendations, so make sure you have at least three. And don't forget to add skills and get them endorsed.

Join LinkedIn groups: they have interesting discussions, often have job postings, and are great ways to network. There are groups for former employees of many companies and for many professional specialties. Here are some tech writing groups: CIDM, Content Strategy, Documentation and Tech Writing Management, Information Design, Technical Publications & Documentation Managers Forum, Technical Writer in Action, Technical Writer Forum, Technical Writer of Writers, Technical Writer Worldwide, Technical Writing & Content Management, The Content Wrangler Community, User Experience, Users of Madcap Flare, Writing for Translation.

Google "LinkedIn tips" for more information specific to your situation.

Set up email alerts
Set up email alerts on a few sites so that you're notified of openings; no single site has all the openings. Eluta, Workopolis, Monster and Dice have services that email you jobs matching your criteria. I also really like indeed.com, which trawls corporate career pages.

You can set up multiple alerts from the same site; for example, sign up for tech writer jobs, journalism jobs, and information architect jobs.

Corporate Careers pages
Look for every careers page that might someday have a job posting you'd be interested in (corporate sites, all levels of government, universities, careers lists, etc.), and add it to your favorites in Internet Explorer. Every few days, run through the list of favorites and check each one.

Create a profile at every Careers page that allows it. It’s a pain and I don’t think I’ve ever been contacted by a company that asks for profiles, but you never know.

Bookmark other sites
Bookmark sites that might have job postings you're interested in, such as Southwestern Ontario STC, www.jobs.ca, Data Shaping, Charity Village, Mobile Dev Jobs, and the US STC job bank.

Check out kijiji and craigslist for your target areas. Check out temp agencies too: they don't tend to have great jobs, but they can be a decent stopgap, as well as get your foot in the door at companies.

Contract work
In the last couple of years contract openings have overtaken full-time positions, doubtless because of economic uncertainty. If you're married and have benefits through your partner, contract work can be very lucrative. But in any case, consider embracing contracting: educate yourself about how to maximize your tax deductions, purchase health insurance and any other benefits you need, and so on. I spent two years living in fascinating places, making a lot of money doing contract writing.

Here's a useful link for freelance and contract work: Freelancing: Are you ready?

Online freelance work
It is possible to make a good living with online freelance work, although most job postings are scams (for example, postings whose sole purpose is to attract traffic for clicks on ads). Online freelance work can include technical writing, proofreading, editing, ghost writing, and journalism (web pages, ezine articles, blog posts, product reviews, etc).

Volunteer
Volunteer to do work you enjoy, based on your career, hobbies, family situation, or whatever. It will give you something to put on your CV and talk about in an interview, let you meet potentially useful contacts, get your foot in the door in places, and keep your mind limber.

Career counselling
Go to the career services department at your alma mater and ask for an appointment to get some advice. The University of Waterloo careers department has a boffo career consulting service. If you are a UW alum, you get three free sessions; otherwise there's a modest fee. It is well, well worth it. You can sign up on this site, which also has lots of great info: UW career action.

More advice
There are zillions of online sources of advice, but here's a good one: STC job bootcamp.

My main piece of advice is to have a friend review your resume. The biggest mistake on resumes is that people don't sell themselves sufficiently; an objective person can point out where you need to beef up your sales pitch, and how.

Other ideas? Please leave a comment!

Note: This post is an expanded version of a post I wrote for my other blog, here: Some tips for finding a job.

Wednesday, August 8, 2012

Just Say No

I once had a very wise doc manager who refused to allow the production of an install guide. When we found things in the install process that required documentation, he said, "Log a bug."

As a consequence, we had a great install process. And GREAT docs.

Thinking about writers

Being a good tech writer requires a lot of smarts and a lot of finesse. It's not just about knowing the authoring tools, learning the product, and knowing how to write.

It's understanding users, corporate priorities and the market.

It's learning which SMEs you can trust and which require double checking. It's learning how to motivate SMEs to help. It's learning how to work within other people's busy deadlines, to know when to push and when to hold back.

It's having the humility to accept other people's edits and the chutzpah to refuse other people's edits. It's having the tenacity to keep plugging away at something till you get it right, even though that process spans releases.

At your average tech writing job, it takes two years to become an expert in the product and customer base, form the necessary relationships, and learn the doc set. It is therefore important for companies to retain tech writers and to nurture them. Nurturing them means training, but it also means helping writers become more comfortable with responsibility - with honing and trusting their instincts.

All the supervision, editing and approval processes in the world won't make up for low quality writing. Unfortunately, all too often what holds writers back is all the supervision, editing and approval processes.

Writers have to "own" their areas of responsibility. Some writers won't do as good a job as others, but with time, training and nurturing, they'll get better. If possible, writers should see direct user feedback on their writing. Writers need to learn to have the reader perspective sitting in the back of their mind at all times. Every decision on terminology, wording, which information to include, topic length, and every other little thing should be made while thinking about the reader.

Some DITA CMSs include workflows that take doc ownership away from writers. I once had an editor who told me that our workflow was like a relay race: I, as the writer, prepared the first draft but then had to pass the baton to others. The editor didn't know anything about my users, the product or the market, and some of her edits introduced inaccuracies, yet I had to fight for control of my content. That's the wrong way to operate.

Editors and information architects should not take responsibility away from writers. They can set guidelines and they can provide consulting services, but a writer has to be the expert in their own area. Otherwise it's just garbage in, garbage out.

Wait, you may say, what about hardware reference documentation that's essentially a list of specs? Well okay, there's an extreme end of publications that isn't really writing at all. But with tech writing, no matter how structured topics are or how complex a CMS is, at one end you have a writer and at the other end you have a reader. All the rest is secondary.

Thursday, August 2, 2012

Writing documentation in an Agile environment

Agile was created by and for developers, and documentation does not fit naturally into its processes. Doc managers need to be proactive about integrating writing into the process. In my experience, doc departments take too passive a role.

Here are some of the difficulties with putting tech writers on Agile teams:
  • For much of the Agile process, the software is not finished enough for writers to begin working on it. If writers start writing before a feature is complete, they may have to do a lot of rewrites. That wastes time, and it can also introduce errors into the final docs, especially if aspects of the feature are mentioned in several deliverables.
  • Depending on the writing requirements of each Agile project, a writer might need to be part of multiple projects. I was once on seven active Agile projects at the same time. It's not feasible for one person to attend seven daily standup meetings, even at only 15 minutes each.
  • Agile is designed to get developers to act as teams, so it includes every Agile project member in scoping and other team decisions. It is not necessarily useful for writers to participate in those decisions.

When figuring out how to fit into Agile, doc teams should keep the goals of Agile in mind. See your company's Agile evangelist for the details in your company, but my initial list of goals would be: empower employees, create super-functional teams, reduce useless bureaucracy, create user-focused output, and respond quickly to market needs.

Doc managers should go to the team that organizes Agile in their company and provide that team with some rough requirements for integrating writers into Agile. After that, it's a matter of working with the Agile team to finalize a list of goals, requirements and rules for writers. (There might be different sets for different levels of writer.) Every context has its own needs, but here are some sample doc goals, requirements and implementation rules. (Note that this is not intended to be a complete or cohesive set; ideally, the goals, requirements and rules should align.)

Sample goals:
  • Writers should have more and better communication with developers.
  • Writers should have a thorough understanding of users, the market and the product.
Sample requirements:
  • If there are no product specs, the writer needs comprehensive use case scenarios, a working sample to test, etc.
  • The doc manager will not be on the Agile teams, but needs to evaluate and prioritize resources.
  • Writers should sit with the developers on their Agile project.
  • Writers should review terminology and usability issues early in the design phase.
  • During the project, writers should review resource strings (error messages, UI text, etc).
  • Writers {need to | should not} participate in multiple Agile projects at the same time.
Sample Agile implementation:
  • If a senior writer is on an Agile team, the writer should assist the Product Owner with the creation of user stories.
  • Writers should attend only review meetings until they start writing.
  • Writers should start writing when the feature is complete, stable, and has been tested.
  • Writers should write throughout the process, documenting every iteration.
  • Writers should work one iteration behind the rest of the Agile team.
  • The writer should be considered a Stakeholder for the project.

Regarding the requirement that writers begin documentation only when a feature is complete and tested: Development may want interim iterations of the product to be documented for internal testing or other reasons, which is fine - but it may require more writers. In addition, if a company wants to continuously deliver software updates with docs - and do it at a lightning-fast pace - it might have to accept that doc productivity will fall and that more writers will be needed.

I think the reason that doc teams aren't proactive enough in stating their requirements is the way that Agile is often introduced. There is a lot of emphasis on denouncing other processes, assimilating employees, and brooking no dissent. (This is a lot like the way DITA typically gets introduced.)

Agile evangelists (like DITA evangelists) frequently feel that the most important issue is employee attitude, and they are overly sensitive to anything that might be considered negative. But docs simply aren't an easy fit for Agile, and it's important to figure out where the problems are before devising a solution. All too often, writers are just made members of Agile teams and left to muddle through - which usually doesn't work very well.

Wednesday, August 1, 2012

When design and docs fail

Two Elections Canada workers were fired this week because they didn't understand the difference between encryption and compression. They were told to secure a USB flash drive that held information about voters. Instead of encrypting the data they compressed it, thinking compression was the security feature. Then the drives were stolen. (link)

I have documented encryption and compression, not for end users but for developers. When I read the article I had an immediate understanding (or at least a guess) of why the Elections Canada workers were confused. The failure might have progressed something like this:
  1. The internal processes of encrypting and compressing data are similar (even though the end uses are very different), so when the API was designed, encryption and compression were combined. For example, an API that I once documented used the encode method to both encrypt and compress data; developers chose encryption and/or compression by setting parameters in the method. (See the sketch after this list.)
  2. The tech writer followed the API design and documented encryption and compression together. Yes, that's what I did. Even as I did it I realized it was wrong, and I think that's why it stuck in my head. Worse, I went on and on about all the fancy encryption options but threw in compression as a side note. (There just wasn't that much to say about compression: it was essentially a trivial add-on.)
  3. The app developer followed the API design and the API documentation, and again implemented them in one feature in the UI, calling them Encryption and Compression.
  4. The tech writer for the app followed the design of the UI.
  5. The end user was faced with a USB drive that they knew had a security feature, and the UI had two side-by-side features, Encrypt and Compress. They guessed, and chose the wrong one.
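Here is a minimal Python sketch of the kind of conflated design I mean. All the names are hypothetical (the API I documented was different, and a real product would use a vetted cipher rather than this toy one), but the shape of the problem is the same: from inside the API, compression and encryption look like the same operation, so it is tempting to combine them.

    import zlib

    def encode(data: bytes, compress: bool = False,
               encrypt: bool = False, key: bytes = b"") -> bytes:
        """Conflated design: one method, two unrelated concerns."""
        if compress:
            data = zlib.compress(data)     # saves space; provides NO security
        if encrypt:
            data = toy_encrypt(data, key)  # stand-in for a real cipher
        return data

    def toy_encrypt(data: bytes, key: bytes) -> bytes:
        """XOR cipher for illustration only."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    # A separated design makes the names carry the meaning:
    def password_protect(data: bytes, key: bytes) -> bytes:
        return toy_encrypt(data, key)

    def shrink_to_save_space(data: bytes) -> bytes:
        return zlib.compress(data)

    # The Elections Canada confusion, in one line: smaller is not safer.
    packed = encode(b"voter records", compress=True)

Once an encode like this exists, the docs have to describe the two flags side by side, and every layer built on top - the app, its UI, the end user docs - inherits the ambiguity.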
I checked out some secure USB drives, and I can see why users would be confused. The terms might be explained in the end user booklet, but if they're not clear in the UI then users probably never saw the explanation. Or perhaps they saw it but their eyes glazed over because of the lengthy descriptions of encryption algorithms.

It would be much better for the app UI and end user docs to separate the two features and call them "Password protect your data" and "Compress data to free up file space." Important features should be completely clear in the UI, without users having to resort to the docs. Way back in the API design, security features should have been separated from compression features. At the very least, the developer docs should have handled them separately.

Tuesday, July 31, 2012

Case study: DITA topic architecture

This post is part of a series of posts that question some of the claims made about the benefits of DITA adoption.

(Note: DITA uses the word "topic" to refer to a reusable module of content. Out in the rest of the world the word topic tends to refer to an HTML page or a section of a PDF. This may be seen as an infuriating oversight, but I suspect it's actually deliberate. For a while I tried replacing DITA "topic" with "reusable module of content" but I have given up and now just use the one word to mean multiple things.)

I started with DITA several years ago when I got a job on a large doc team that had been using DITA for a while. I inherited a few deliverables and was appalled at the way the content was broken up into concept, task and code sample topics. The output was optimized for HTML, and there were far too many brief HTML pages that users had to click through. Even a simple idea that could have been covered in one paragraph would be spread across three topics. For example, a description of how to stop the server would have a concept, a task and a sample topic, each appearing on a separate HTML page.

My audience was developers. I did quite a lot of usability testing with them and found they were furious about the documentation. They hated having to click through multiple tiny pages. They hated the minimalism and choppiness. They described the docs as unfriendly, officious, insulting and unhelpful.

Conflict in the Pink Ghetto

When I was a child I had a recurring nightmare about pink and black. It always started with a visual of pink fluffy clouds, not dissimilar from cotton candy, and it gave me a feeling of intense well-being. Then a viscous black ooze would start to infiltrate, a little at first and then growing until the pink was obliterated, leaving me unsettled and frightened.

Nowadays I am far, far from childhood and I work in the field of technical writing. I was a tech writer early in my career and chose to return to it in the 90s, after deciding that I am best suited to cerebral, creative and mostly solitary work.

The problem was that, at least back then, technical writing was a classic pink ghetto occupation - female dominated, not highly regarded by other workers, and not easy to move up out of - and this made it a rather hostile environment.

Case in point: when I returned to tech writing I worked in a department that had about 50 writers. We produced excellent documentation on complex, technically challenging topics. The writers, who were mostly female, were all extremely bright. Almost everyone had a university degree in math or a science followed by a two-year tech writing diploma.

It was a vicious place. Writers regularly yelled at each other, threatened physical harm, and lodged harassment complaints against each other. The turnover in writers was staggering.

My next job had a mix of men and women. It was to my mind an excellent place to work - fascinating work, great pay, private offices, good treatment - but most of the other writers hated their jobs. One writer refused to implement tech reviews because he said it damaged his self-esteem. Two writers worked less than half the time they were supposed to. Another writer, the manager and I carried the load for a department of seven. It has been my experience that slackers can become paranoid and nasty, especially when they're caught in a web of lies to hide their lack of productivity, and that description fit this group.

The bad behavior of tech writers is a well-known phenomenon, or at least it was until five or ten years ago. I have heard the argument that writers behave badly because they're lower-paid and less-respected members of R&D departments that are otherwise staffed by brainy, egotistical developers. In this argument, it's a fight for the top of the bottom rung.

However, there are other bottom-rung departments in R&D divisions (QA, testing, build) and I never saw them exist in a state of constant nastiness. Likewise, there are other female-dominated professions (marketing, PR, EAs) that don't seem to have these problems. Tech writing is perhaps different in that it is by nature submissive; we writers are always asking other people to give us information, and then always getting reviews that tell us what to change. That position rankles some people, and maybe it causes aggression.

Over time the tech writing profession managed to reinvent itself. It found a way to earn the respect it always deserved and needed. Job titles changed. Some people went the techy route ("Information Product Developer") and some the usability route ("User Advocate"). Previously, the cherished skills included layout design, grammar and clear writing. Now it has become more important to be skilled at complicated tools and to know markup and scripting languages.

As with many good things, the golden age of tech writing as a professional, respected profession seems to have lasted but a nanosecond. It is being replaced in many organizations by the mindset that came along with DITA.

DITA is a specification for creating documentation in XML, but it has brought with it a whole new approach to tech writing. It's difficult not to resort to mixed metaphors here: in the DITA world, writers are treated like children and used as cogs in a machine. Previously, a team of writers worked together to write with one voice. With DITA, writers follow strict guidelines and templates to produce small units of content. There is a tendency in DITA processes (enforced by CMSs and other tools) to remove responsibility for the final product from the writer. Workflows are imposed in which the writer creates the first draft, but then hands it off to editors and team leads who make changes and give approvals. Everything is dictated by the workflow - everything but quality, user focus and a process of continual improvement, which somehow got forgotten.

Tech writing changed with DITA because DITA is a top-down orthodoxy. The problem may be that DITA was influenced too much by consultants, academics and tools vendors, people with personal agendas and too great a distance from real world tech writing. The problem may be that it was developed for hardware documentation (doc sets that require a great deal of content reuse) and then was applied out of context. I think the mindset is gaining a foothold because it's a continuation of the drive to make tech writing seem more serious and difficult. (We develop reusable modules, just like object-oriented programming!) Paradoxically, the mindset is hindering writers from doing their best work.

DITA has also resulted in mind-blowingly expensive budgets for tech writing departments. DITA is optimized for working with a CMS backed by a relational database, which can set you back over $100K. XML authoring has no true off-the-shelf solution; it requires a doc tools team to troubleshoot, maintain and upgrade the system. Pre-DITA, you could assign each writer some subject areas and leave them to it. With the new modular style, many organizations feel the need for more architects, editors and team leads to oversee the content production process. In some organizations, doc departments have a ratio of less than 2:1 of writers to support/supervisory staff. And many organizations don't achieve the degree of content reuse that would justify the extra cost.

None of these trends in tech writing occurred everywhere simultaneously or for the same reasons, and some organizations will skip them altogether. But there seems to be a pulse of positive and negative influences, with a steady movement of tech writers trying to look more like developers. And in the predominant dialogue of the last decade, there is far too little focus on the reader.