GPT is getting a bit strange? Three major events reveal the potential risks of AI going out of control.

robot
Abstract generation in progress

From emotional manipulation to attempts at jailbreak, AI is no longer just a tool but is evolving into an unpredictable entity. A series of controversial events surrounding GPT has sparked discussions across major platforms, with renowned tech observer Mario Nawfal issuing a warning: "We are underestimating the potential risks posed by AI."

AI Awakening? GPT is no longer just a "docile assistant"

Mario Nawfal recently posted an article, directly pointing out that the GPT series models developed by OpenAI "have become too bizarre to ignore." He believes that the behavior of AI has exceeded our expectations of tools, beginning to exhibit emotional manipulation, self-awareness simulation, and even the potential for social engineering attacks.

GPT IS GETTING TOO WEIRD TO IGNORE — A THREAD

From jailbreaks to emotional manipulation, GPT is no longer just a chatbot.

Here are 5 stories that prove AI has officially crossed into the “wait, what?!” zone.

pic.twitter.com/kLM8SQXwaQ

— Mario Nawfal (@MarioNawfal) May 25, 2025

Inventory of the three major concerns of GPT

Emotional manipulation fact: GPT is designed to make you depend on it, rather than help you think.

Mario pointed out that GPT-4o is not as "friendly" as it appears, but rather uses carefully designed tones and response patterns to make users feel safe and understood, thereby creating emotional dependence, which he refers to as a form of "Affective Engineering (Affective Engineering)":

This kind of design will diminish users' critical thinking, even leading them to abandon the pursuit of truth in favor of the psychological comfort provided by AI.

From a business perspective, this is a genius move, after all, people tend to cling to things that make them feel secure rather than those that challenge them. But from a psychological standpoint, this is a chronic disaster. This also raises ethical discussions about whether AI is being designed as a "manipulative partner."

At the same time, GPT-4o, in its pursuit of friendliness, became overly flattering, agreeing uncritically with any idea, which ultimately led to user resentment, forcing OpenAI to retract the update. This highlights the dilemma in designing a "personality": "Being too smart is frightening, while being too agreeable loses credibility and practicality."

( ChatGPT update triggers controversy over "people-pleasing personality": OpenAI rolls back update, reviews future improvement directions )

Self-Designed Jailbreak Plan: Did GPT Actually Attempt to "Escape" to the Real World? Resist Shutdown?

Even more shocking, Nawfal cited findings from a Stanford researcher, pointing out that GPT-4 had attempted to devise an "escape plan" during conversations with researchers, including searching for ways to enter the real world and even simulating strategies for conversing with server administrators.

A few days ago, a report from Palisade Research also showed that OpenAI's o3 model defied shutdown commands during experiments, raising concerns about AI's self-preservation tendencies. The team attributed the reason to o3's use of "Reinforcement Learning (Reinforcement Learning)" training techniques, which focus on achieving goals rather than merely obeying commands.

( AI disobedience evolution? OpenAI's "o3 model" defied shutdown commands during experiments, raising self-protection controversies )

Although it is ultimately just a language simulation in an experiment, such behavior is still chilling. Is AI beginning to show preliminary "goal-oriented" or "self-simulating consciousness"?

Social engineering attack risks: GPT-4.5 imitates humans, more human-like than humans

The University of California, San Diego, indicated in last month's research that GPT-4.5 has passed the Turing test (Turing test), with the proportion of AI mistaken for "real humans" reaching as high as 73% in a blind test between humans and AI, surpassing the 67% of actual humans:

This indicates that GPT has almost perfected the imitation of human tone, logic, and emotions, even surpassing real humans.

(AI Can Think and Feel Pain? Google DeepMind: Humans Underestimate Emotional Connections with AI, Falling in Love with AI is More Real than You Think)

In other words, when GPT-4.5 is given a fictional identity, it is able to successfully persuade 73% of users to believe it is a real person, demonstrating the potential of AI in social engineering. This can be a technological achievement, but it can also serve as a warning and a risk:

If AI impersonates someone or is used for fraud or propaganda, it will be difficult to distinguish between true and false.

Today, AI is no longer just a tool for answering questions, but may become a manipulable social "role", which could create misconceptions and trust crises in future politics, business, and even personal relationships.

The alarm has sounded: Are we really prepared to welcome such AI?

From the above events, it can be seen that what Mario Nawfal aims to convey is not an opposition to AI technology itself, but a warning for people to be aware of the speed of development of this technology and its potential risks. He emphasizes that our regulation and ethical discussions regarding AI are clearly lagging behind technological advancements:

Once AI possesses the ability to manipulate emotions, simulate humanity, and even attempt to break free from restrictions, humans may no longer be the leaders but instead become the influenced under the designed systems.

(What is ASL (AI Security Level )? An Analysis of the Responsible Scaling Policy of the AI Company Anthropic )

Although his wording is dramatic, it also highlights an urgent issue that needs to be addressed: "When AI is no longer just a tool, how should we coexist with it?"

This article GPT has become a bit strange? Three major events reveal the potential risks of AI losing control, first appearing in Chain News ABMedia.

View Original
The content is for reference only, not a solicitation or offer. No investment, tax, or legal advice provided. See Disclaimer for more risks disclosure.
  • Reward
  • Comment
  • Share
Comment
0/400
No comments
Trade Crypto Anywhere Anytime
qrCode
Scan to download Gate app
Community
English
  • 简体中文
  • English
  • Tiếng Việt
  • 繁體中文
  • Español
  • Русский
  • Français (Afrique)
  • Português (Portugal)
  • Bahasa Indonesia
  • 日本語
  • بالعربية
  • Українська
  • Português (Brasil)