Five years ago, before the Covid-19 pandemic, I gave several talks about artificial intelligence. These talks were designed to be informational. They covered topics including:

  • What is a cognitive system?
  • How does a cognitive system work?
  • What is an artificial neural network (ANN)?
  • How does an ANN work?
  • What is natural language processing (NLP)?
  • How does NLP work?

At the time, the technology was relatively young. Sure, IBM Watson had been around for more than half a decade. It had even won Jeopardy! by then. But advances in AI come so quickly that progress seems to be exponential.

Five to ten years ago, natural language generation (NLG) was in its early days. While there were NLG engines, the technology was not at all ready for prime time. Even a simple, hand-programmed chatbot was mediocre at best.

Visual recognition, on the other hand, was quite advanced. And it has continued to be perfected to this day. We are no longer surprised that a computer can tell us what type of plant we are looking at, or read terrible handwriting on a check we need to deposit. We have come to expect a high level of accuracy from visual recognition software.

But back to the talks I gave. While my purpose in providing the information was to educate people about various topics in the AI field, the questions people asked had less to do with the technology and more to do with the threat.

“Will AI take my job?”

Back in the 2010s, no one wanted to admit the inevitable. It seemed to me that people discussing AI walked on eggshells, careful not to worry anyone.

I, on the other hand, was very clear in my response.

“Yes,” I said, “AI is coming for your job.”

Needless to say, this did not go over very well with my audiences. People argued that AI would never be as good as an experienced technical writer or editor. Surely it would take years, if not decades, before the threat of losing their job would materialize.

I really hated to be the bearer of bad news, but the proverbial writing was on the wall. Natural language generation was developing, and technical content was, in my opinion, among the first types of content NLG would replace. I was sure of it.

And it makes sense. Good technical writing states the facts and explains concepts in clear and simple ways. Content that contains instructions for mounting a device in a rack, for example, tightening the screws to a specific torque, and plugging it in, is among the easiest, most formulaic types of information to provide. In fact, the problem with human-written instructions is most often that people don’t like to say the same thing, the same way, every time they say it. And writers tend to fight against using the simplest terms and sentence structures possible to get their point across. That type of writing is too “machine-like.”

Oh the irony.

I have spent the better part of my 30-year career trying to convince technical writers to keep it simple. Begging them to write short sentences. Imploring them to stop switching terms on people.

And now we have ChatGPT. Everyone is talking about it. Lots of people are trying to use it. Many are trying to trick it (and succeeding). But the reality is that everyone can now see the promise of NLG. Feed it enough good information and ChatGPT really can write content.

Sure, it can be trained to produce awful content. That’s actually quite easy. What is important, though, is not what ChatGPT can do today. It’s what NLG will be able to do tomorrow, next month, and next year. Within an extremely short period of time, NLG will be able to create simple, easy-to-use, and accurate technical content. There should be no doubt.

If you are a technical writer or editor, AI is coming for your job. And it is going to happen sooner than you think.

Val Swisher