I just returned from the 60th (can you believe that?!) STC Summit. STC is the Society for Technical Communication, and this year's conference was held in Atlanta. I presented a well-attended session on global content strategy. And other than the one guy who was offended because I'm not a fan of Simplified Technical English, I think most folks came away with at least one new piece of information from my talk.
By far, my favorite conversation of the conference was with Mark Lewis. Mark is a content strategist and DITA educator for Quark. He is also the author of the just-released book, “DITA Metrics 101 – The Business Case for XML and Intelligent Content,” from Rockley Publishing (ISBN 978-0-9865233-4-2). I will be honest – since the book just came out and I just came back from a 2-week vacation, I have not read the whole thing. From the chapters I have read, I like the premise of the book and I think the topic is extremely important.
How DO you make a business case for intelligent content? Going the structured authoring route is not a cheap or simple decision. And, I would add, not everyone needs to do it. But, if you have decided that intelligent content is for you, you are going to have to come up with some significant bucks to make it happen properly. The beauty of Mark’s book is that he puts actual metrics and numbers against all of the things that we kinda-sorta-know save money when we move to a structured environment. Using sample use cases and real numbers, Mark shows you how you can glean savings in a number of areas:
- Writing efficiency
- Topic reuse
- Topic maintenance
- Review process
These are the metrics that are often left to the realm of magic. Using spreadsheets and calculations, Mark lays out how to make a business case to show return on investment.
Back to the discussion – Mark and I sat down to talk about the effect of intelligent content on the cost of translation. In his book, Mark has the usual categories well-defined. These include translation savings from reusing content and savings from taking advantage of translation memory. These metrics are important and they are rather easy to calculate.
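To make that concreteness concrete, here is a minimal sketch of the kind of calculation involved. The per-word rate, discount tiers, and word counts below are all invented for illustration; real vendor pricing and match categories vary.

```python
# Hypothetical per-word rate and discount tiers -- illustrative only.
RATE_PER_WORD = 0.20          # full rate for brand-new words, in dollars
DISCOUNTS = {
    "reused": 1.00,           # identical reused topics: no retranslation
    "100% TM match": 0.80,    # exact translation-memory match
    "fuzzy TM match": 0.40,   # partial match, still needs editing
    "new": 0.00,              # no discount
}

def translation_cost(word_counts):
    """Cost of a translation job, given word counts per match category."""
    return sum(
        words * RATE_PER_WORD * (1 - DISCOUNTS[category])
        for category, words in word_counts.items()
    )

job = {"reused": 30000, "100% TM match": 10000,
       "fuzzy TM match": 5000, "new": 5000}
full_price = sum(job.values()) * RATE_PER_WORD
actual = translation_cost(job)
print(f"Without reuse/TM: ${full_price:,.2f}")   # $10,000.00
print(f"With reuse/TM:    ${actual:,.2f}")       # $2,000.00
print(f"Savings:          ${full_price - actual:,.2f}")
```

Even with made-up numbers, the shape of the argument is clear: reuse and TM matches knock whole categories of words down to a fraction of the full rate.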
In addition to reuse and translation memory, there are a number of hidden costs in translation that we often overlook. In no particular order:
- In-country review and iteration. That is, the time it takes for each in-country reviewer to review each translation. And the number of times each reviewer iterates with each translator. If we can minimize iterations, we can save money. After all, the in-country reviewer is probably paid by your company. Reviewing content is undoubtedly not his or her primary job responsibility. We never seem to take into account the cost of these hours. And boy, do they add up.
- Multiple translations of very similar source segments in the translation memory. It happens. In fact, it happens all the time. Over time, most translation memories get filled up with different versions of the same source text (kinda similar, but not exactly, really). And the different versions of the source text each have different translations. In fact, sometimes we get different translations for the same source text. It happens. At that point, we usually call the translation memory “garbaaaage” and find that it isn’t as usable as our translators would like it to be.
- Multiple translation memories, each with different translations for the same or similar content, each housed by a different translation vendor. As my readers know, this is one of my soapboxes. Most sizable companies use at least two or three translation vendors. And most translation vendors are loath to share their translation memory with the competition. This means you end up with multiple translation memories, one for each vendor. Now, if you can manage to keep your languages separated by translation vendor, perhaps you won’t have duplicate entries in the different TMs. But many, many times, different vendors work on the same content in the same languages. And now, you potentially have three sets of “garbaaaage” rather than one. This is expensive.
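One way to see how much “garbaaaage” a memory has accumulated is to scan its source segments for near-duplicates. Here is a rough sketch using Python's difflib; the segments and the similarity threshold are invented for illustration, and a real TM audit would work on exported TMX files.

```python
import difflib

def near_duplicates(segments, threshold=0.85):
    """Flag pairs of source segments that are suspiciously similar
    but not identical -- candidates for consolidation."""
    flagged = []
    for i, a in enumerate(segments):
        for b in segments[i + 1:]:
            if a != b:
                ratio = difflib.SequenceMatcher(None, a, b).ratio()
                if ratio >= threshold:
                    flagged.append((a, b, round(ratio, 2)))
    return flagged

# Invented example segments standing in for a TM export.
tm_sources = [
    "Click Save to store your changes.",
    "Click Save to store the changes.",
    "Restart the server to apply updates.",
]
for a, b, score in near_duplicates(tm_sources):
    print(f"{score}: {a!r} ~ {b!r}")
```

Every pair this flags is a pair of source segments that probably should have been one segment, and each copy drags its own (possibly different) translation along with it.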
- Non-standardized terminology. Mark and I spent a lot of time talking about terminology (you know this is perhaps my favorite topic of all). As I like to explain, structuring content is great. Making small topics from monolithic structures is fabulous. But if you do not standardize your terminology, and you reuse topics across multiple deliverables, the resulting content probably won’t make a lot of sense to your reader. It certainly won’t be standard or uniform. This problem has a number of costs. Certainly it affects the cost of translation. Say the same thing, the same way, every time you say it. What is the cost of a confused and frustrated customer? How do we quantify that? Support calls? Lost business?
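The nice thing about “say the same thing, the same way, every time” is that you can start checking it mechanically. Here is a toy sketch of a terminology check; the termbase entries are invented for illustration, and a real setup would use a managed termbase and smarter matching than plain substrings.

```python
# A tiny invented termbase: non-preferred term -> preferred term.
TERMBASE = {
    "click on": "click",
    "log-in": "log in",
    "e-mail": "email",
}

def flag_terms(text):
    """Return (found, preferred) pairs for non-standard terms in text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TERMBASE.items() if bad in lowered]

sentence = "Click on the button and check your e-mail."
for bad, good in flag_terms(sentence):
    print(f"Use '{good}' instead of '{bad}'")
```

Catching these before content goes to translation is far cheaper than paying to translate every variant and then paying again when a confused customer calls support.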
I have asked Mark to start thinking about how we might model some of these hidden costs. That way, we can further make the business case that intelligent content saves money on translation. And, I will add, clean, accurate, simple, and well-written source content (whether structured or not) is also a big part of a business case for translation savings.
Thanks, Mark, for the very interesting chat!