OpenAI’s ChatGPT and other large language models show tremendous promise in automating or augmenting enterprise workflows. They can summarize complex documents, discover hidden insights and translate content for different audiences.
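For illustration, here is a minimal sketch of the document-summarization use case using the OpenAI Python SDK. The model name and prompt are assumptions chosen for the example, not recommendations:

```python
# A minimal summarization sketch using the OpenAI Python SDK (pip install openai).
# The model name and prompt below are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def summarize(document: str) -> str:
    """Ask the model for a short executive summary of the given text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": "Summarize the user's document in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize("Q3 revenue grew 12% year over year, driven by strong enterprise demand."))
```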
However, ChatGPT also introduces new risks, such as generating grammatically correct but factually inaccurate prose, a phenomenon known as hallucination. One law firm was recently sanctioned and fined for submitting a legal brief that cited nonexistent court cases.
Many ChatGPT-like services also collect users’ queries as part of the model improvement process, raising data privacy and confidentiality concerns for enterprises.