Errors From ChatGPT: Hallucinated Whoppers Rather Than Pedantic Subtleties

It has been widely reported that ChatGPT, a popular artificial intelligence chatbot, sometimes makes things up in response to user queries, and that these “hallucinations” (most people would call them lies or misinformation) can be misleading. But it wasn’t until I tried it for myself that I realized the magnitude of the problem. After all, it was possible that the errors were minor: output perhaps not reliable enough to paste directly into a document, but more or less correct.

Alas, this was not the case. Below I present a direct transcript of my interactions with…

