AI Risks: Experts Warn of Potential ‘Extinction’ of Humanity

This latest development adds to a chorus of concerns raised by artificial intelligence experts, while also underscoring an ongoing debate: some argue these speculative risks are exaggerated, and resistance is growing against what critics see as an excessive focus on AI's hypothetical harms.

On Tuesday, the Center for AI Safety, a nonprofit organization based in San Francisco, published a statement signed by nearly 400 prominent figures in artificial intelligence. The statement warns that AI could pose a risk of human extinction and calls for mitigating that risk to be a global priority. Signatories include Sam Altman, CEO of OpenAI, as well as senior AI executives from Google and Microsoft, along with some 200 academics.

The statement adds to a series of warnings from AI experts, but it has also sparked a growing backlash among those who believe the focus on hypothetical AI risks is overblown. Meredith Whittaker, president of Signal and chief adviser to the AI Now Institute, a nonprofit dedicated to ethical AI practices, criticized the statement as an instance of tech leaders making grandiose claims about their own products.

CEO of Hugging Face Edits Statement to Emphasize AGI

Clément Delangue, CEO of Hugging Face, shared an edited version of the statement on Twitter, substituting “AGI” for “AI”. AGI, or artificial general intelligence, is a theoretical form of AI that matches or surpasses human capabilities. The edit highlights the distinction between such hypothetical general-purpose systems and the narrower AI applications deployed today.

The statement follows a petition signed two months earlier by AI and tech leaders, including Elon Musk and Steve Wozniak, calling for a “pause” on large-scale AI research beyond a certain capability threshold. Those individuals have not signed the new statement, and no such research pause has been implemented.

Altman’s Conditional Statement on OpenAI Leaving EU Due to Excessive Regulation

Altman, a proponent of AI regulation, has recently made a favorable impression on Congress: he hosted a private dinner with House members and drew bipartisan praise during a Senate hearing.

However, Altman’s support for regulation has its limits. He has said that OpenAI might consider leaving the European Union if the technology faced excessive regulation there. The White House has outlined AI initiatives, but extensive regulation of the industry is not currently planned.

Comprehensive Approach to AI Risk Management

Gary Marcus, an AI critic and professor emeritus at NYU, acknowledges that AI poses genuine risks. However, he argues that focusing solely on a hypothetical doomsday scenario is a distraction.

Marcus describes literal extinction as just one of many AI-related risks that demand attention, arguing that understanding and addressing these risks comprehensively is essential for the responsible development and deployment of AI.

Immediate Threats of AI: Deepfakes and Disinformation

According to some tech experts, the more immediate and commonplace applications of AI pose a greater threat to humanity. Microsoft President Brad Smith, for instance, has expressed concerns about deepfakes and their potential for disinformation.

The societal impact of deepfakes, particularly their potential to spread false information, is a significant worry for Smith. In one recent incident, a fabricated AI-generated image purporting to show an explosion near the Pentagon circulated on Twitter and triggered a brief stock market dip, illustrating how AI-manipulated content can spread false narratives and cause real harm.
