
Manish

Originally published at manish.wordpress.com

True Lies of ChatGPT

When I say “true lies”, I mean that ChatGPT is blurring the lines between truth and fiction (lies). It is now extremely difficult, if not impossible, to tell what is true and what is not in the text it generates. This can lead to confusion and uncertainty, as you are unsure what to believe.

Does ChatGPT lie?

Well, we already know that ChatGPT lies – and it is not rare either. In fact, it has been called out for lying multiple times, such as in this article and in this example. It probably doesn’t matter much when it comes to gathering information about movies, books, etc. However, I was shocked by the convincing fake studies that ChatGPT provided me. They sounded so credible, I almost believed them…

Does ChatGPT lie?

Researching with ChatGPT

I used a materialized view and some indexes to make a complex query on a PostgreSQL database run almost 70 times faster. For a related presentation, I was searching for publicly available studies on how others have improved their query performance by using materialized views and indexes. Google search didn’t help me much, so I decided to ask ChatGPT for some help.
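To give a rough idea of the kind of optimization I’m talking about, here is a minimal sketch of the general pattern (the table, column, and view names are made up for illustration and are not from my actual project): an expensive aggregation is precomputed into a materialized view, and an index on the view keeps lookups fast.

```sql
-- Minimal illustration; the orders table and its columns are hypothetical.
-- Precompute an expensive aggregation once, instead of on every query.
CREATE MATERIALIZED VIEW monthly_order_totals AS
SELECT customer_id,
       date_trunc('month', ordered_at) AS order_month,
       SUM(amount)                     AS total_amount,
       COUNT(*)                        AS order_count
FROM   orders
GROUP  BY customer_id, date_trunc('month', ordered_at);

-- Index the columns the reporting queries filter on.
CREATE INDEX ON monthly_order_totals (customer_id, order_month);

-- Refresh the view when the underlying data changes
-- (REFRESH ... CONCURRENTLY is possible if the view has a unique index).
REFRESH MATERIALIZED VIEW monthly_order_totals;
```

Queries that previously joined and aggregated the raw tables can then read from the small, indexed view directly, which is where the large speed-up comes from.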

ChatGPT, being the helpful assistant it is, provided me with some “real-life” studies that showed impressive results using materialized views and indexes. However, much to my dismay, I discovered that all those studies were actually FAKE!

Here are the screenshots of my chat transcript. See for yourself how believable they all seem. I have also included those links after the screenshots – none of them exists, though. ¯\_(ツ)_/¯

ChatGPT – Non-existent cases of performance enhancements with materialized views

Links provided for the examples –

Calling out lies

So, I called out the lies and asked ChatGPT again to provide me with some more credible references.

Result? Examine it yourself.

ChatGPT – Non-existent cases of performance enhancements with materialized views

Links provided for the second set of credible examples –

You can click on these links and check if any one of them actually exists. I even tried searching for some text from those real-life examples; however, Google (and even the Wayback Machine) knew nothing about these links. I have nothing more to add here.

The Wayback Machine has no record of the study links provided by ChatGPT

Reasons for these lies

Large Language Models (LLMs) such as ChatGPT are still evolving. Many important LLM behaviors emerge unpredictably; emergence here means the ability of LLMs to exhibit new and unexpected behaviors that were not explicitly programmed into them. At this stage, even the experts (including their creators) don’t fully understand how LLMs work. Furthermore, there are no reliable techniques for steering the behavior of LLMs. If you’re truly intrigued by this, please read the related paper by Sam Bowman, an expert on LLMs, titled Eight Things to Know about Large Language Models.

As such, LLMs like ChatGPT do not intentionally lie, because they don’t possess consciousness or intentions. They may, however, occasionally generate incorrect or misleading information. This is clearly mentioned as one of the limitations of ChatGPT.

Think of ChatGPT as an exceptionally imaginative child who may occasionally make up believable stories while answering your questions.

Conclusion

ChatGPT can be used to generate the text you want, but it is your responsibility to verify and validate the information you receive. Trust me on this one. 😛

Huh? Does this mean that ChatGPT is not really useful?

Nah! Not at all.

ChatGPT is extremely useful when you need to come up with different ways of writing some text. It understands the tone (informal, funny, professional, etc.) of written text quite well and can help you convert from one tone to another. It can explain a complex concept in terms a 10-year-old can easily understand, or provide advanced details of the same concept to an expert in the field. It can summarize large articles or books. It can even generate poems, limericks, emails, letters, etc. as per the given instructions. It is prudent to use ChatGPT for its strengths.

Generative AI, whether for text or images, is getting more sophisticated with each passing day. However, it is now more crucial than ever to exercise critical thinking when evaluating the veracity of AI-generated content. I can vouch for this!

By the way, the featured image (liar ChatGPT/Pinocchio) used in this article was created by DALL-E (via Bing), using a prompt generated by ChatGPT itself. Isn’t that cool?


Disclaimer: This blog post is based on my experience with the free version of ChatGPT (3.5?) that was available in May 2023. I’ve been informed that GPT-4 is an enhanced product, and its behavior may differ from what is described here.
