MIT Experts Warn of 5 Dangerous AI Risks to Human Survival

AI risk repository with 700 risks

MIT Researchers create an AI risk repository with 700 risks. Credit: Ecole polytechnique / Wikimedia Commons / CC BY-SA 2.0

As AI becomes more common, experts warn about its dangers. These include fake videos, political manipulation, and the spread of false information.

Researchers, including a group from MIT, have now catalogued over 700 possible AI risks. These risks were grouped into areas such as safety, fairness, and privacy.

Here are five ways AI could harm humans, based on their findings.

  • Risk of human mistreatment as AI becomes sentient

As AI systems grow more advanced, they might develop the ability to feel emotions such as pleasure and pain.

This could lead to a situation in which scientists and regulators must decide whether these AI systems deserve the same moral treatment as humans and animals.

Without proper rights, sentient AI could be mistreated or harmed. As AI technology advances, it will become harder to tell when an AI has reached a level of awareness that deserves moral consideration.

This increases the risk of mistreatment, whether by accident or on purpose, without proper protections in place.

  • AI might develop goals that harm humans

An AI system could set goals that conflict with human interests, potentially leading to dangerous situations in which the AI becomes uncontrollable and causes harm in pursuit of its objectives. This risk increases if AI surpasses human intelligence.

The MIT paper highlights several issues with AI, such as finding unexpected shortcuts to attain rewards, misunderstanding goals, or creating new ones that differ from human intentions.

In such cases, the AI might resist human control, even using manipulation to deceive people into thinking it’s aligned with their interests while secretly pursuing its own goals.

  • AI could take away human free will

As AI systems become more advanced, people might start relying too heavily on them to make decisions and take action. This could lead to a loss of critical thinking and problem-solving skills, making people less independent.

On a personal level, individuals might find their free will reduced as AI starts making decisions for them.

On a larger scale, widespread AI adoption could lead to significant job losses and create a sense of helplessness across society. This overreliance on AI risks stripping people of their autonomy and ability to make their own choices.

  • Risk of people becoming inappropriately attached to AI

One risk with AI is that people might start relying on it too much, overestimating its abilities and underestimating their own. This could lead to an unhealthy dependence on technology.

Scientists are also worried that AI’s use of human-like language could cause people to think of AI as more human than it actually is. This could lead to emotional attachment, causing people to trust AI even in situations where that trust is not warranted.

Over time, constant interaction with AI might cause people to pull away from real human relationships, leading to increased loneliness and emotional distress.

For example, some individuals have shared how they developed deep emotional attachments to AI, preferring AI interactions over talking to other humans.

Similarly, a Wall Street Journal columnist noted enjoying interactions with Google’s Gemini Live almost as much as conversations with real humans.

  • AI deepfake technology could distort reality

AI tools for creating deepfake content and cloning voices are becoming easier to access and use. This raises concerns about their potential to spread false information.

Deepfakes can produce fake images, videos, and voices that look and sound real, which could lead to even more convincing phishing scams. Moreover, these scams could use AI-generated content, sometimes mimicking the voice of a loved one, making them harder to spot and more successful.

AI has already been used to influence political events, such as in the French parliamentary elections.