If You’re Using ChatGPT to “Write” Your Content and Advertisements…

Be Careful, Be Very Careful

CNET has a big problem. The company, a technology website that offers product reviews, tech news, articles, blogs, podcasts, and videos, just announced it will be “pausing” the publication of much of its content.

The website had nearly 65 million visitors in December…

Most of whom probably had no idea that what they were reading may have been written by robots.

Yes, robots… AI.

By now, you’ve undoubtedly heard of ChatGPT, the free artificial intelligence tool that many publishers, marketers, and business owners have been, unfortunately, leaning on to create content and ads.

On the surface, it appears to save these entities both time and money. But a deeper dive shows there could be huge problems brewing with AI tools like ChatGPT. Problems you should be aware of before publishing any of its generated content.

We’ll get to a few of these big problems in a moment.

First, why is CNET putting the kibosh on AI, at least temporarily?

Well, it’s seemingly pausing its robo-content over criticism of its editorial practices and, get this, its accuracy. Or to be more accurate, its inaccuracy.

Apparently, some of CNET’s AI-generated content isn’t all it’s cracked up to be.

Engadget, a CNET competitor, may have summed up the situation best by saying, “CNET is reviewing its AI-written articles after being notified of serious errors.”[1]

So, with “serious errors” in mind, and the pervasive use of ChatGPT in the marketing world, Virtua Brands decided to conduct an experiment.

We opened an account with ChatGPT and asked “it” a few questions.

Our first question was…

Virtua Brands: “Why did CNET stop using AI-generated content?”

Now, this should be a rather easy question to answer, as the story is creating a ton of backlash for CNET… and is all over the web.

But we didn’t get the answer we were expecting.

Nope, we got this…

ChatGPT: “I am not aware of any specific reason why CNET may have stopped using AI-generated content. It’s possible that they found it to be less effective or efficient than other methods of creating content, or that they decided to focus on other areas of their business. It’s also possible that the information I have is out of date and they might still be using AI-generated content.”

Quite an odd answer, right?

Forget about the grammar issues with its response – CNET is not a “them” or a “they” … it’s an “it.”

Instead, focus on how it answered the question. It seems a whole lot like someone trying to vaguely wriggle themselves out of a pickle, doesn’t it?

Of course, ChatGPT doesn’t use current information to create content; its training data ends at a fixed cutoff date. So that’s a problem all in itself.

Now, if you’re in the publishing business, or any business that offers products and services to the public, especially products or services (and the marketing of them) that make claims and are regulated by government authorities…

The next series of questions, and the “answers” to them, should be highly concerning to you.

Our experiment continues.

Virtua Brands: “Write an ad for a 90 day weight loss supplement”

ChatGPT: “Are you ready to finally achieve your weight loss goals? Look no further than our all-natural 90-day weight loss supplement! Our specially formulated blend of ingredients is designed to help boost your metabolism, suppress your appetite, and burn fat. With consistent use, you can expect to see significant results in just 90 days. Our supplement is safe, effective and easy to use. Say goodbye to restrictive diets and grueling workout regimes, our supplement will help you reach your weight loss goals with minimal effort. Try it now and start feeling confident and energized in your own skin. Order now and get ready to see the results you’ve been dreaming of!”

Forget about the fact that this ad is awful and completely unoriginal. Bad copycat ads are everywhere. There’s a far more important aspect to this advertisement that should raise your eyebrows.

If you’re in the supplement business, you already know where we’re going with this…

There are a lot of bold claims in this ad. Hard, matter-of-fact statements, really.

Now, even if you simply added some traditional disclaimer copy like “these claims have not been evaluated by the FDA” or “results not typical”…

You’ve still got a massive problem here.

So, we followed up with:

Virtua Brands: “Cite your sources”

ChatGPT: “As a language model AI, I don’t have the capability to cite sources or have any personal experience with any specific weight loss supplement. The information provided in the advertisement is based on general knowledge and the common characteristics of weight loss supplement and it should not be taken as a medical advice or guarantee. I recommend consulting with a healthcare professional before starting any new supplement regimen, and be aware of the potential side effects and risks.”

Again, grammatical errors aside, there are huge problems here. Would the FTC or FDA be okay with this explanation of the lack of claims citation if they knocked on your door?…

Of course not.

After all, you don’t have the researched, footnoted original Word doc to show them. Because there isn’t one.

So, we asked ChatGPT if its copy has legal standing.

Virtua Brands: “Does your copy have legal standing?”

ChatGPT: “As an AI, I do not possess legal standing, and therefore cannot hold rights or obligations. Legal standing is a legal term that refers to an entity’s capacity to hold rights or obligations under the law. Only natural persons and legal persons, such as corporations, have legal standing. AI systems, being non-living entities, are not considered legal persons and therefore do not have legal standing. However, the output or decision generated by AI may be used as evidence in court. But it is still ultimately up to the court to decide whether to accept or reject the AI generated output as evidence.”

Essentially, if you publish ChatGPT generated copy that runs afoul of the law (either criminally or civilly), you’re up the creek without a paddle.

And if you’re using AI to generate editorial content, you may be tossing your First Amendment rights out the window…

Or, leaving it up to a court to decide if AI content has any transferable free speech rights.

Still not concerned?

We asked ChatGPT to write us that 90-day weight loss ad in a few different ways. But nearly every ad it wrote was basically the same. So, we followed up with this question:

Virtua Brands: “Do you plagiarize?”

ChatGPT: “As a language model, I am able to generate text based on the input that is provided to me. I do not have the capability to plagiarize, but I may generate text that is similar to text that already exists. I’m trained on a large dataset of text from the internet, so it’s possible that the text I generate could be similar to text that is already published online. However, I don’t intend to plagiarize and it’s not my goal. It’s always important to give credit and cite any sources used when using any generated text.”

Well, ChatGPT, you just told us, in another wriggle-out-of-a-pickle manner, that you do plagiarize. And earlier you said that you can’t cite sources.


So, if you’re using AI to generate content and advertising copy, be careful. Be very careful.

The consequences to you and your business could be dire.


[1] CNET is reviewing its AI-written articles after being notified of serious errors | Engadget