Experts have expressed concern that generative AI can perform wide-ranging functions and even generate its own programs, raising fears that humans could lose control, with catastrophic consequences for living creatures and the planet.
In a joint statement released on May 30, a group of leading AI experts, including many pioneering researchers, warned of the potential threats posed by their own products, cautioning of the “risk of extinction” from advanced AI if it is not managed properly.
The joint statement, signed by hundreds of experts, including the CEOs of OpenAI, DeepMind and Anthropic, aims to overcome the difficulties of publicly discussing the catastrophic risks posed by AI, according to the authors.
The statement comes at a time of growing concern about AI’s impact on society, even as companies and governments push for rapid leaps in the technology’s capabilities. That leaders at the top of the field share this assessment of the current state of AI development is itself notable.
The signatories include some of the most influential figures in the AI industry, such as Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic. Because these companies are seen as pioneers in AI research and development, their acknowledgment of potential threats is all the more alarming.
The statement’s signatories also include other prominent researchers, such as Yoshua Bengio, a pioneer of deep learning; Ya-Qin Zhang, a distinguished scientist and corporate vice president at Microsoft; and Geoffrey Hinton, often called a “godfather of deep learning,” who recently left his position at Google so he could “speak freely” about the existential threat posed by AI.
Hinton’s departure from Google last month drew attention to his views on the capabilities of the computer systems he had spent a lifetime researching. At 75, the renowned professor has expressed his desire to engage in frank discussions about the potential dangers of AI without being constrained by business affiliations.
The joint statement follows a similar initiative in March, when dozens of researchers signed an open letter calling for a six-month pause on training AI systems more powerful than OpenAI’s GPT-4. Signers of the “pause” letter include such well-known figures as Elon Musk, Steve Wozniak, Bengio, and Gary Marcus.
Despite such calls for caution, there is currently no consensus among industry and policy leaders on the best approach to responsibly regulate and develop AI.
Earlier this month, tech leaders including Altman, Amodei, and Hassabis met with President Biden and Vice President Harris to discuss potential regulations. During a subsequent congressional hearing, Altman supported government intervention, emphasizing the seriousness of the risks posed by advanced AI systems and the need for regulation to address these potential harms.
In a recent blog post, OpenAI executives outlined several recommendations for managing AI responsibly. Among their proposals are stronger collaboration among leading AI researchers, deeper technical research into large language models (LLMs), and the establishment of an international AI safety organization.