Despite all of the excitement around ChatGPT and similar AI-powered chatbots, the text-based tools still have some serious issues that need to be resolved.
Among them is their tendency to make things up and present them as fact when they don’t know the answer to a query, a phenomenon that’s come to be known as “hallucinating.” As you can imagine, presenting falsehoods as fact to someone using one of the new wave of powerful chatbots could have serious consequences.
Such trouble was highlighted in a recent incident in which an experienced New York City lawyer cited cases in a court filing that turned out to have been fabricated by ChatGPT.