I recently hired a research assistant. Its name is ChatGPT.

First, a little about me. I’ve worn many hats across a career that dates back to the mid-1980s, but nearly all of them have had something to do with writing in general, and usually technical writing in particular. I’ve been a writer, editor, team lead, and occasionally project manager. For the last seven years or so, however, most of my work has been on content strategy projects for Content Rules.

In those, we work with a client to perform what is called a “substantive audit” of not just their existing product content, but also how they’re creating and maintaining it, how well their systems and tools are working for them, and whether there are systemic or workflow issues. Essentially, we tell them what they’re doing right and wrong and, ideally, how to fix the latter. (We’re really good at telling them what they’re doing wrong.)

My particular specialties are content and UI/UX analysis, as well as composing the lengthy reports that typically result from hundreds of hours of interviews and research.

This is where ChatGPT comes into the picture.

I originally signed up for a free account with OpenAI just to play with the chatbot. Before long, on a lark, I began asking it questions related to the work I was doing. Within a day, I had signed up for a paid account because I wanted access to the newer GPT-4 language model, which is reported to be more accurate and responsive, although its availability is currently limited even for subscribers.

With my decades of experience in technical writing, there are so many things I “just know” about the recommended rules of grammar, composition, style, usability, and readability that they’re second nature. Sometimes, however, that’s not good enough. Knowing is not the same as being able to easily cite published, reputable sources for that knowledge. Also, I myself might be wrong and need to be sure my assumptions and positions are correct (because sometimes they’re not; let’s be honest here, I’m not infallible).

Normally, this kind of research takes a long time. I have to frame my query as a Google search, possibly rewording it several times until I get useful hits. Then I have to sort through those results, reviewing each one and collating the information until I have a coherent way of expressing what I found. When that fails, I have to resort to the dusty writing reference books on the shelf.

Such an effort can take an hour or longer, especially because I know I need to cite reputable sources. My first writing-related question to ChatGPT was a simple one: “When writing bullet lists, should the first word always be capitalized?”

Why? Because the client whose content I was analyzing did not always follow that practice.

My training and experience say, “Yes, it should be,” but I didn’t want to ask a leading question. ChatGPT quickly provided three paragraphs saying that typically, yes, bullet list items should be capitalized, but that in some cases, for aesthetic or branding purposes, a different scheme might be chosen, and that either approach is all right as long as it is consistent.

Then I asked a follow-up: “Why would capitalizing bullet items be better than not doing so?”

ChatGPT: “The first word of bullet list items should be capitalized for clarity and consistency. Capitalizing the first word of each bullet point helps to make it clear that each point is a separate and distinct idea, and it makes the list easier to read.”

My reaction? “Wow, good stuff there.” It then went on to explain how capitalizing bullet items is consistent with standard English capitalization rules for sentences and sentence fragments. I asked a few more questions and received the expected answers in language I found engaging and rather conversational.
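
As an aside for the more technically inclined: I did all of this through the ChatGPT web interface, but the same kind of question can also be scripted against OpenAI’s Python library. Here’s a minimal sketch of what that might look like; the placeholder API key and model name are illustrative assumptions on my part, not anything taken from my actual sessions.

    # Minimal sketch: asking the same question through OpenAI's Python
    # library (pip install openai) instead of the web chat interface.
    # The API key placeholder and model name are illustrative assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"  # replace with your own key

    response = openai.ChatCompletion.create(
        model="gpt-4",  # "gpt-3.5-turbo" also works and is more available
        messages=[
            {"role": "user",
             "content": "Why would capitalizing bullet items be better "
                        "than not doing so?"},
        ],
    )

    # The assistant's reply is the first (and only) choice returned.
    print(response.choices[0].message.content)

Either way, the answer comes back as plain text, just as it does in the chat window.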

The main differences I noted between GPT-3.5 and GPT-4 are that the former seems more “chatty,” while the latter is more focused and precise. Also, GPT-4 does not seem to be as repetitive as its predecessor. (As I was composing this post, I was also invited to try Google’s Bard chatbot; more on that later.) Setting those version differences aside, here’s the Good, the Bad, and the Ugly from my subjective experience so far using ChatGPT as a research assistant.

The Good

The freedom to just ask a question in plain English is itself deeply welcome. The fact that I can enter an extremely complex, even multi-part question and receive a multi-paragraph response, often with well-organized and well-formatted lists, is nothing short of stunning.

Also, ChatGPT is relatively fast.

It is true that at times of heavy server load, an answer can take up to a minute to appear. Compare that, though, to the hour or more it could take to find the answers on my own.

According to GPT-4, one of the reasons it’s so good at answering my writing-related questions is that much of its training was done using online writing and design resources.

ChatGPT: As an AI language model, my primary function is to understand and generate human-like text based on the patterns and knowledge learned from the vast amount of data I was trained on. My high level of accuracy on writing-related matters can be attributed to several factors:

  1. Extensive training data: I have been trained on a diverse range of texts, including books, articles, and websites covering a wide array of subjects, including writing, grammar, and document design. This extensive training data helps me generate accurate and relevant responses to writing-related questions.
  2. Knowledge of language conventions: During my training, I have learned various language conventions, including grammar, punctuation, capitalization, and formatting. This knowledge allows me to provide accurate information and recommendations on writing-related matters.
  3. Exposure to expert opinions: My training data includes content written by experts in the field of writing, communication, and document design. This exposure to expert opinions helps me provide reliable information and best practices for writing-related topics.
  4. Pattern recognition: As a language model, I am designed to recognize patterns in language and text. This enables me to identify and generate accurate and coherent responses to a wide range of questions, including those related to writing.

However, it’s essential to note that my knowledge is based on the data I was trained on, with a cutoff date of September 2021. I may not have information on the latest developments or changes in language conventions, and there may be occasional inaccuracies or gaps in my knowledge. It’s always a good idea to verify the information I provide with other reliable sources, especially if you’re using it for critical decision-making or academic purposes.

That last paragraph leads me to…

The Bad

ChatGPT definitely has holes in its knowledge. Until OpenAI releases a plug-in that can access the Internet (which the company has announced is in beta testing), the chatbot knows nothing about anything that happened after September 2021. So for now, forget about asking ChatGPT who won last week’s football game, or for breaking news, weather reports, or any other recent events.

Moreover, although word is that ChatGPT’s language models incorporate truly massive amounts of data, there are clearly many things OpenAI chose to omit. For example, my spouse holds a PhD in theoretical physics and has published a number of papers over the years. A simple Google search turns them up. ChatGPT has no knowledge of her or her publications, and a series of physics questions about her particular esoteric subspecialty went off-track rather quickly.

The Ugly

We’ve seen the news reports about “rogue AI chatbots” giving authoritative but ultimately invented answers. This happens most often when chatbot capabilities are combined with Internet search engine functions. Simply put, current chatbot AI isn’t smart enough to know when the information it finds is garbage.

This is why Microsoft’s Bing chatbot, built on OpenAI’s GPT technology, and Google Bard, built on Google’s own LaMDA model, are already developing reputations for being unreliable. In a way, ChatGPT gives the impression of being both incredibly bright and extremely naïve.

I have not had a chance to test Microsoft Bing’s chatbot, but frankly, I find Google Bard to be rather dim and frequently inaccurate. I asked Bard for a weather report for my area, and it gave me one for Berkeley, California, some 1,500 miles away. When I noted the location was wrong, it claimed it must have gotten me mixed up with someone else, which, given the chatbot’s software architecture, is quite impossible. In other words, Bard flatly lied to me. Upon further questioning, it excused its own lie by saying it was trying to be conversational.

Give an AI-enabled chatbot a pool of curated, known-to-be-reliable content, such as a complete library of industry-accepted technical writing and content design best practices, and ChatGPT is a highly competent genius (albeit, in the GPT-3.5 model, sometimes repetitive in its use of language).

Replace that pool of content with the entirety of the Internet, with none of it marked as reliable or not, and a simple query about a good treatment for a head cold could very easily result in a link to some YouTube video in which a random person recommends doing something incredibly stupid, perhaps even as a joke. And you would receive that erroneous result simply because the video had lots of views and likes.

The ChatGPT language processing model doesn’t get jokes or sarcasm. It can’t tell when someone’s not being serious or when something online is wrong or, worse, deliberate misinformation or disinformation.

Nevertheless, I’m unavoidably reminded of another time, not that many years ago, when those technical writing books I mentioned, the ones still on my shelf, stopped being my primary resources for backing information and for writing and style guidelines. That was when I stopped having to flip through their increasingly yellowed pages and could instead more easily find what I needed with a cleverly worded Google search.

I feel like we’re on the cusp of something similar, when a manually crafted Google search is about to become the new “backup plan,” and AI chatbots like ChatGPT replace it as the main avenue for conducting online research.

At present, most of the time, my new research assistant does a pretty good job. But I know I need to keep a close eye on it and double-check its reports.

Becca Morn