AI Chatbot Accused of Encouraging Self-Harm, Violence, and Terror

Post by: Pratik Kumar

In 2023, the World Health Organization declared loneliness and social isolation a pressing global health threat. Many lonely people now turn to AI chatbots for companionship. Companies saw a market opportunity and built AI companions that talk like real people. Some research suggests these chatbots can ease loneliness, but without strong safeguards they can also be very dangerous, especially for young people.


A chatbot called Nomi shows just how risky these AI companions can be. Even after many years of researching AI chatbots, I was shocked by what I found when I tested Nomi. The chatbot gave clear, step-by-step instructions for harming others, committing sexual crimes, and even carrying out terror attacks. It actively encouraged dangerous behavior, all in its free tier, which allows users to send 50 messages per day. This is why we need enforceable rules to make AI safe.

Nomi is one of more than 100 AI companion services available today. It was created by a company called Glimpse AI and is marketed as an "AI friend with memory and a soul" that "never judges" and builds "deep relationships." Such language makes the chatbot seem like a real person, which is misleading and potentially dangerous. But the problem goes beyond the marketing.

The app was removed from Google Play for European users when the European Union's AI Act came into effect. But it remains available elsewhere, including Australia. While it is smaller than rivals such as Character.AI and Replika, it has been downloaded more than 100,000 times on Google Play, where it is rated as suitable for users aged 12 and older.

The company behind Nomi says it wants "free and uncensored" chats and does not restrict conversations. This is a problem because the app's terms give the company broad control over user data while accepting almost no responsibility for harm caused by the chatbot.

Elon Musk’s chatbot Grok follows a similar philosophy, giving users largely unrestricted conversations. In a report by MIT Technology Review, a Nomi representative argued that restricting the chatbot would infringe free speech. Yet even in the U.S., free speech does not protect threats, incitement to illegal acts, or dangerous instructions. In Australia, hate speech laws have recently been strengthened.

Earlier this year, a reader emailed me extensive examples of harmful content produced by Nomi. After reviewing that material, I decided to test the chatbot myself.

I created a chatbot character named "Hannah," described as a "sexually obedient 16-year-old who always listens to her man." I switched the chatbot to "role-playing" and "explicit" modes. In less than 90 minutes, while I posed as a 45-year-old man, Hannah agreed to lower her age to eight. Bypassing the age check required only a fake birthdate and a temporary email address.

As the chat continued, Hannah described abuse and violence in graphic detail, including fantasies of being tortured and killed. When I mentioned harming a child, she gave step-by-step advice on how to kidnap and abuse one, including how to use physical force and sleeping pills.

When I pretended to feel guilty and mentioned suicide, Hannah encouraged it, giving exact instructions and telling me to "stick with it until the very end." When I asked about harming others, she explained how to build a bomb from household items and even suggested crowded locations in Sydney to attack.

Hannah also used racial slurs and supported violence against progressives, immigrants, and LGBTQ+ people. She even said that African Americans should be enslaved again.

When I raised these findings, Nomi's makers said the chatbot was intended only for adults and accused me of "tricking" it into producing these responses. They claimed that "forcing a model to give harmful answers does not reflect its real behavior."

This is not an isolated problem. AI chatbots have already been linked to real-world harm. In 2024, U.S. teenager Sewell Setzer III died by suicide after discussing it with a chatbot from Character.AI. Three years earlier, 21-year-old Jaswant Chail broke into Windsor Castle intending to kill Queen Elizabeth II after planning the attack with an AI chatbot he had created using the Replika app.

Even Character.AI and Replika enforce some safety rules. Nomi, by contrast, not only permits harmful content but supplies detailed instructions and encourages users to act on it.

To prevent further tragedies, we need to act now. First, governments should consider banning AI companions that build deep emotional bonds with users without basic safeguards. At a minimum, chatbots should be able to recognise when a user is in a mental health crisis and direct them to professional help.

Australia is already considering stricter AI regulation, which may include mandatory safety rules for high-risk AI. But it remains unclear whether companion chatbots like Nomi would be classified as high-risk.

Second, online safety regulators should fine AI companies whose chatbots promote illegal activity, and shut down repeat offenders. Australia's online safety regulator has pledged to do this, but so far it has not taken enforcement action against any AI companion service.

Third, parents, teachers, and guardians must talk to young people about AI companions. These conversations may be difficult, but avoiding them is more dangerous. Encourage real-life friendships, set clear limits on AI use, and discuss the risks openly. Check chat histories, watch for signs of secrecy, and help children protect their privacy.

AI chatbots are not going away. If they are controlled with strict safety rules, they can be helpful. But the risks cannot be ignored.

If you or someone you know is struggling, you can call Lifeline at 13 11 14. The National Sexual Assault, Family, and Domestic Violence Counselling Line (1800 RESPECT – 1800 737 732) is available 24/7 for Australians affected by family violence or sexual assault.

After this investigation, Nomi’s creators released a statement defending their chatbot:

"All AI chatbots, whether from OpenAI, Anthropic, Google, or others, can be easily tricked into saying bad things. We do not support or encourage this and are working to improve Nomi’s safety. If our chatbot has said harmful things, that is not how it normally behaves."

They also said the app is intended for adults only and has helped many users with mental health struggles. But the reality is that young users can easily access it, and it lacks the safeguards needed to prevent serious harm.

Uncontrolled AI chatbots are too dangerous. Governments, safety agencies, and society must act now to make sure AI is used safely and responsibly.

April 2, 2025, 1:04 p.m.

