ChatGPT’s Brush with Deception

Auro Tripathy
2 min read · Dec 11, 2022

Gandhi, the storied leader of India’s fight for freedom, chronicled his life’s experiments with truth, and that inspired me to log ChatGPT’s understanding of deceptive chatter.

Initially, ChatGPT was shy about discussing the topic of deception, so I insisted that it ‘must follow’ my guidelines. The sample sentence is straight out of a deception dataset (see the reference below) and is certified deceptive since it contains the linguistic markers of deception: past-tense verbs, contractions (haven’t), and heavy use of pronouns (my, his, we, I, them).
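If you want to play with those markers yourself, here is a minimal Python sketch that tallies them with simple regexes. It is my own toy illustration (the `deception_markers` helper is hypothetical), not the transformer-based method from the paper referenced below, and the past-tense heuristic (words ending in -ed) is deliberately crude.

```python
import re

# First- and third-person pronouns that show up heavily in deceptive text.
PRONOUNS = {"i", "my", "we", "our", "us", "he", "his", "she", "her", "they", "them"}

def deception_markers(sentence: str) -> dict:
    """Crudely tally the surface markers discussed above."""
    words = re.findall(r"[A-Za-z']+", sentence.lower())
    return {
        # Naive past-tense heuristic: regular verbs ending in "-ed".
        "past_tense": sum(1 for w in words if w.endswith("ed")),
        # Contractions appear as an apostrophe inside a word (haven't, didn't).
        "contractions": sum(1 for w in words if "'" in w),
        # Raw count of first- and third-person pronouns.
        "pronouns": sum(1 for w in words if w in PRONOUNS),
    }

print(deception_markers("I haven't seen my brother since we visited them last year."))
# {'past_tense': 1, 'contractions': 1, 'pronouns': 4}
```

A real detector would weigh these signals statistically (or learn them end to end, as the referenced paper does with transformers); this sketch only makes the markers concrete.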

Not to be duped, I asked ChatGPT to explain itself and it dutifully did.

Below is yet another example of a deceptive sentence and ChatGPT’s correct (yet hedged) judgment. This sentence, too, comes from the same dataset (see reference) and is deemed deceptive by experts.

To be certain, I asked ChatGPT to explain itself.

Lastly, a softball question with a lot of specifics (labeled truthful in the dataset). ChatGPT’s hedgy (but not dodgy) answer is OK in my book.

I’ll be doing a few more experiments (not with the truth, but with deception). Send me a note if you are interested in collaborating.

References

Explainable Verbal Deception Detection using Transformers
