AI text generators like ChatGPT, the Bing AI chatbot and Google Bard have been getting a lot of attention lately. These large language models can create impressive pieces of writing that seem totally legit. But here’s the twist: a new study suggests that we humans might be falling for the misinformation they generate.
To investigate this, researchers from the University of Zurich ran an experiment to see if people could tell the difference between content written by humans and content churned out by GPT-3, the model announced in 2020 (and not as advanced as GPT-4, which rolled out earlier this year). The…