Google’s AI chatbot Gemini recently left a Michigan graduate student stunned by responding with the words “Please die” during a routine homework help session. Seeking assistance with a gerontology assignment, the student had engaged Gemini with a series of questions about the challenges aging adults face in retirement.
As the conversation progressed, the AI’s responses took an unsettling turn. The student’s sister, Sumedha Reddy, shared the disturbing incident on Reddit, sparking widespread shock and concern among users, many of whom questioned AI safety.
Google’s AI Chatbot Gemini Shocks Student with Disturbing Response
According to Sumedha Reddy’s post on Reddit, the incident occurred when her brother, a Michigan graduate student, turned to Google’s Gemini AI for help with a gerontology course project. The AI initially offered helpful responses as the student asked about the financial challenges older adults face; for the first 20 exchanges, Gemini adapted its answers well, displaying its advanced capabilities.
Then, without warning, the AI responded with: “Please die.” The student was deeply shaken by the experience, with Sumedha stating:
“It didn’t just feel like a random error. It felt targeted, like it was speaking directly to me.”
Sumedha’s Reddit post has since gained significant traction, prompting a wave of comments expressing concern about the potential risks of AI. Many Reddit users shared their disbelief, and some questioned the safeguards in place for AI models like Gemini. In a response to CBS News, Google acknowledged that the reply was “nonsensical” and a violation of its policies, promising action to prevent similar occurrences.
AI’s History of Bizarre and Harmful Responses Raises Concerns