It has been widely reported that ChatGPT, a popular artificial intelligence chatbot, sometimes fabricates answers to user inquiries, and that these “hallucinations” (most people would call them lies or misinformation) can be misleading. But it wasn’t until I tried it for myself that I realized the magnitude of the problem. After all, it was possible that the misinformation was minor: perhaps not reliable enough to cut and paste directly into a document, but more or less correct.
Alas, this was not the case. Below I present a direct transcript of my interactions with…