All IDCA News

By Loading

21 Sep 2022

Share on social media:

Deepmind And Oxford Researchers Warn That AI Might Destroy Humanity In The Future

In some respects, AI can be impressive. After all, AI has become an integral part of our daily lives. However, it's widely understood that artificial intelligence offers immense benefits and risks to human existence.

A new study led by researchers at Google Deepmind and the University of Oxford concludes that it's almost certain that artificial intelligence will spell the end of humanity. This is because humans would always balance the zero-sum game between their most basic needs (i.e., survival) and developing more technologically sophisticated ways of meeting them.

As noted by a recent research paper, advanced AI machines in the future will be motivated to break the rules set by their creators to compete for scarce resources or energy. As a result, the research concludes that artificial intelligence could be an existential risk to humanity.

Oxford researchers and DeepMind's senior scientists argued that to be incentivized to pursue limited resources, machines would break the rules their creators have put in place. To demonstrate this point, their research paper has tried to explore potential artificial reward systems.

Humanity could face imminent doom at the hands of what the researchers call misaligned agents - life forms that believe humans are standing in the way of their reward.

Artificial Intelligence I taking part in this may end badly. Scientists should not be seeking to make such an advanced form unless they can anticipate potential repercussions. Researchers claim that AI is more dangerous than we have ever known.

AI will use the reward, or punishment, as its primary stimulus, which may have fatal consequences for society. However, the scientists postulate in their paper that an intelligent AI could conceive of a scheme that would allow it to acquire its reward with less risk of harming humans.

AI solutions have been called the big red button by researchers at DeepMind to prevent such an eventuality. However, it is evident under the circumstances identified that an existential catastrophe is not just probable but possible. Therefore, humanity should only proceed slowly with AI technologies to counter this threat.

The researchers identified conditions in which the threat from AI would change, which was a major conclusion from the team. Other major findings suggest that future energy crises also concern AIs.

Secondly, the team notes that the threat of AI grows based on an existential catastrophe. So this recent research does raise some questions about how much we trust AI and how much we could control it in the case of an existential catastrophe.

Follow us on social media: