This week, a group of well-known and reputable AI researchers signed a 22-word statement:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
As a professor of AI, I am also in favour of reducing any risk, and am prepared to work on it personally. But a statement worded in this way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.
As defined by…