OpenAI is trying to stop ‘AI hallucinations’ where ChatGPT just makes stuff up

Published: 2023-06-01T10:07:10

Updated: 2023-06-01T10:07:20

After prominent AI hallucinations surfaced in a court case, OpenAI has said it is working on a new method of training ChatGPT.

In a new research paper, OpenAI stated that it would adopt a new training approach, called “process supervision”, to reduce hallucinations.

Essentially, “process supervision” works on the AI’s chain of thought and “rewards the model for following an aligned chain-of-thought”. While it…
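To illustrate the idea, here is a minimal, hypothetical sketch of the difference between outcome supervision (rewarding only the final answer) and process supervision (rewarding each verified reasoning step). The function names, reward values, and per-step judgments are illustrative assumptions, not OpenAI's actual implementation.

```python
def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: a single reward based only on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: list[str], step_judgments: list[bool]) -> float:
    """Process supervision: reward each reasoning step a verifier approves.

    `step_judgments` stands in for human or model feedback on each step
    (an assumption for this sketch).
    """
    if not steps:
        return 0.0
    return sum(1.0 for ok in step_judgments if ok) / len(steps)

# Toy chain of thought for 17 + 25, with one flawed intermediate step.
steps = ["17 + 20 = 37", "37 + 5 = 43", "so 17 + 25 = 42"]
judgments = [True, False, True]  # assumed verifier output per step

print(process_reward(steps, judgments))      # ≈ 0.667 — the bad step is penalized
print(outcome_reward("42", "42"))            # 1.0 — outcome supervision misses it
```

The point of the sketch: a model can reach the right answer through flawed reasoning, and only step-level rewards catch that.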


Author: bourbiza mohamed

