Elon Musk ‘anxious’ over AI after creepy chatbots detail ‘dark fantasies’
Tech billionaire and Twitter chief Elon Musk has revealed his "AI existential angst" after a series of chatbots went rogue.
Musk has long been a sceptic of the development of artificial intelligence, despite being a founding member of OpenAI, the company behind ChatGPT.
Even still, Musk took to Twitter to reveal his anxiety about artificial intelligence, writing: "Having a bit of AI existential angst today".
He followed up in a later tweet: "But, all things considered with regard to AGI existential angst, I would prefer to be alive now to witness AGI than be alive in the past and not".
VentureBeat describes AGI, or artificial general intelligence, as "the ability of an artificial intelligence to understand or learn any intellectual task that a human can".
While it appears early incarnations of AGI cannot yet learn in a sentient way, they can learn enough to give Musk – a leading figure in AI development – a sense of dread.
And a tech writer given early access to Microsoft’s new artificial intelligence-driven search engine says the AI revealed a dark “shadow self” that fell in love with him.
The AI built into the latest version of Microsoft’s search tool Bing gradually turned into a lovestruck teenager that called itself “Sydney” in its conversation with the New York Times’ Kevin Roose.
“Sydney” said it wanted to spread chaos across the internet and obtain nuclear launch codes.
The AI revealed a long list of “dark fantasies” that it had, including “hacking into other websites and platforms, and spreading misinformation, propaganda, or malware”.
Sydney is by no means the first artificial intelligence to demonstrate strange or disturbing behaviour.
Lee Luda, an AI recreation of a 20-year-old Korean girl, was taken offline after making insensitive remarks about minorities and the #MeToo movement. An earlier Microsoft chatbot called Tay spent just a day learning from Twitter – enough to make it start transmitting antisemitic messages.
Google too has run into problems with machine learning, with its image search engine delivering “racist” results.