‘Godfathers of AI’ warn robots ‘not yet safe’ and could wreak havoc on society


    "Utterly reckless" tech giants churning out ever more powerful bots before humankind knows how to "make them safe" should be held liable for any Terminator-style harm they cause, boffins say.

    Powerful artificial intelligence systems could wreck society, according to a host of experts including two "godfathers" of the technology. Big tech companies that ignore the dangers should be held accountable for the damage their bots do, the experts say.

    The call came ahead of next week's Bletchley Park summit on AI safety, set to be attended by international politicians, tech firms and academics. Stuart Russell, a professor of computer science at the University of California, Berkeley, US, has co-authored a policy proposal from 23 experts condemning the continued development of ever more powerful AI systems before researchers have worked out how to stop bots wreaking havoc.


    He said "sandwich shops" face tougher regulation than tech giants. "It's time to get serious about advanced AI systems," he said. "These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.

    "There are more regulations on sandwich shops than there are on AI companies."

    The document urges governments to adopt a range of policies, including allocating one-third of their AI research funding – and companies one-third of their resources – to the safe and ethical use of AI systems. Independent auditors should be allowed access to laboratories.

    Cutting-edge bots should be licensed before they are built. AI companies must adopt specific safety measures if dangerous capabilities are found in their systems. And all tech companies must be held liable for foreseeable and preventable harm their systems cause.


    Co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three "godfathers of AI" who won the ACM Turing award – the computer science equivalent of the Nobel prize – in 2018.

    Hinton quit Google this year to warn about what he called the "existential risk" posed by digital intelligence. Bengio, a professor of computer science at the University of Montreal, Canada, joined him and thousands of other experts in signing a letter in March calling for the suspension of giant AI experiments.

    The report’s authors warned carelessly developed bots threaten to "amplify social injustice, undermine our professions, erode social stability, enable large-scale criminal or terrorist activities and weaken our shared understanding of reality that is foundational to society." They said AI was already showing signs of worrying capabilities that could soon lead to the emergence of autonomous systems that can plan, pursue goals and "act in the world."

    The GPT-4 AI model that powers the ChatGPT tool – developed by the US firm OpenAI – can design and execute chemistry experiments, browse the web and use software tools including other bots, the experts said. "If we build highly advanced autonomous AI we risk creating systems that autonomously pursue undesirable goals," they said.

    Other recommendations include mandatory reporting of incidents when bots display alarming behaviour, measures to stop dangerous models from replicating themselves and giving regulators power to halt the development of dangerous systems. Next week’s summit will focus on existential threats posed by AI such as aiding the development of bioweapons and evading human control.

    Some experts argue the threat to humans is overblown. Yann LeCun, chief AI scientist at Meta, has called the notion that AI could exterminate humans "preposterous." But the authors of the summit's policy document argue that if advanced autonomous AI systems emerged now, the world would not know how to make them safe.

