How AI Can Ruin Your Life—According to MIT

MIT has released a database of around 700 ways in which AI's rapid, largely unregulated advancement could potentially ruin our lives. Credit: Madcoverboy – CC BY-SA 3.0

Many experts argue that we are in the middle of an AI-driven fourth industrial revolution. AI technology has become increasingly integrated into everyday life, from ChatGPT to Spotify's AI DJ creating mixes for you.

Our lives are now AI-driven, and there is a real need to understand the dangers of this technology. This is why the Massachusetts Institute of Technology (MIT) has compiled a database of almost 700 different ways in which AI could ruin your life.

Such fears are not new: sci-fi works such as the Terminator franchise, along with the overall advancement of technology, have long raised red flags for the public, given that AI can cause harm and be used for malicious purposes.

AI may not be taking over the world (at least not yet), but we have already seen foul uses of this otherwise revolutionary tech sprouting up, such as deepfakes of politicians and people being sexualized through AI without their consent.

These concerns have prompted some regulators to try to rein in AI development, but this has proven extremely challenging and has also raised questions about our political systems.

One of the most pressing questions is whether Western democracies are equipped to regulate a technology that evolves by the day.

AI-driven deepfake tech will blur the lines between reality and AI-generated content

One of the most concerning aspects of rapidly developing AI technology is how fast tools for generating deepfake content, such as fake videos and cloned voices, are being made, and how affordable they are.

This could enable more sophisticated schemes in the near future. For example, cybercriminals could use deepfakes of high-profile celebrities to endorse fraudulent projects without their authorization.

In fact, this is already a problem. Just last month, it was widely reported that Elon Musk had breached the terms and conditions of his own platform, X, by sharing a deepfake post of Kamala Harris.

The post may have breached X's rules because Musk did not label it as AI-generated or as parody; he posted it without any disclaimers whatsoever.

There is a real risk that this will continue on a wider scale, with internet users influenced by inauthentic videos that no actual person ever recorded. Such content has the power to manipulate public opinion.

According to MIT, AI that surpasses human intelligence might threaten humans

A real concern for MIT is that artificial intelligence could develop goals that clash with human interests. This is where MIT's database gets dystopian.

AI has the potential to find unexpected shortcuts to achieve a goal, misunderstand or reinterpret goals set by humans, or set new ones of its own ‘volition.’

In these cases, MIT fears, AI could resist attempts to control or shut it down, especially if it determines that resistance is an effective way of achieving its goals. AI could also use other strategies to achieve its goals, such as manipulating humans.

In its database report, MIT states, “A misaligned AI system could use information about whether it is being monitored or evaluated to maintain the appearance of alignment while hiding misaligned objectives that it plans to pursue once deployed or sufficiently empowered.”

The risks of a potentially sentient AI are incredibly complex

All of the risks previously discussed pale in comparison to the possibility of a sentient AI. Sentience is a state in which an AI is able to perceive or feel emotions or sensations, essentially developing the ability to have subjective experiences similar to those of humans.

This would be incredibly troubling for scientists and regulators, who would face challenges such as determining whether these systems have rights, a question that would demand serious moral consideration from our societies.

Much as in the 1982 movie Blade Runner, MIT is concerned that it would become increasingly difficult to definitively know the level of sentience an AI system might possess, and when and how that would grant it moral consideration or the status of a “sentient being.”