    Leading AI Scientists Warn of Unleashing Risks Beyond Human Control

By University of Oxford | May 20, 2024
    Leading AI scientists have issued a call for urgent action from global leaders, criticizing the lack of progress since the last AI Safety Summit. They propose stringent policies to govern AI development and prevent its misuse, emphasizing the potential for AI to exceed human capabilities and pose severe risks. Credit: SciTechDaily.com

    AI experts warn of insufficient global action on AI risks, advocating for strict governance to avert potential catastrophes.

    Leading AI scientists are urging world leaders to take more decisive actions on AI risks, highlighting that the progress made since the first AI Safety Summit in Bletchley Park six months ago has been inadequate.

    At that initial summit, global leaders committed to managing AI responsibly. Yet, with the second AI Safety Summit in Seoul (May 21-22) fast approaching, twenty-five top AI researchers assert that current efforts are insufficient to safeguard against the dangers posed by the technology. In a consensus paper published today (May 20) in the journal Science, they propose urgent policy measures that need to be implemented to counteract the threats from AI technologies.

    Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

    World’s Response Not on Track in Face of Potentially Rapid AI Progress

    According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts.

Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

    World-Leading AI Experts Issue Call to Action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman: in total, 25 of the world's leading academic experts in AI and its governance. The authors hail from the US, China, the EU, the UK, and other AI powers, and include Turing Award winners, Nobel laureates, and authors of standard AI textbooks.

This article marks the first time that such a large and international group of experts has agreed on priorities for global policymakers regarding the risks from advanced AI systems.

    Urgent Priorities for AI Governance

The authors recommend that governments:

    • Establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
    • Mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
    • Require AI companies to prioritize safety and to demonstrate that their systems cannot cause harm. This includes using “safety cases” (a practice established for other safety-critical technologies such as aviation), which shift the burden of demonstrating safety onto AI developers.
    • Implement mitigation standards commensurate to the risk levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.

    According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

    AI Impacts Could Be Catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in large-scale loss of life, damage to the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE, Professor of Computer Science at the University of California, Berkeley, and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations — that ‘regulation stifles innovation.’ That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

    Reference: “Managing extreme AI risks amid rapid progress” by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner and Sören Mindermann, 20 May 2024, Science.
    DOI: 10.1126/science.adn0117