Published: 22/08/2025
ChatGPT has at least 500 million weekly users, and OpenAI CEO Sam Altman says that number is growing extraordinarily fast. When the chatbot went through an update a couple of weeks ago, it brought to light how many users were relying on AI for their emotional wellbeing and even as a companion.
People using AI for intimate, personal reasons is a phenomenon we are only beginning to understand, and the early evidence is mixed. There are reports of AI inducing delusional thinking and even psychosis, yet one survey by Sentio University found that 63 percent of respondents said AI improved their mental health.
With such a powerful tool being privy to our deepest secrets, what guardrails exist to protect users’ wellbeing and privacy? And why are so many people leaning on artificial intelligence for connection in the first place?
AI chatbots like ChatGPT have become sophisticated enough to sustain deep, seemingly meaningful conversations, and a significant number of users have formed emotional bonds with them. That emotional support is a double-edged sword. On one hand, it can offer comfort and a sense of companionship, especially to people who feel isolated or lonely. On the other, heavy reliance on AI for emotional support has been linked to the delusional thinking and psychosis mentioned above.
One of the major concerns is the lack of regulation and ethical guidelines surrounding AI and emotional intimacy. While companies like OpenAI have implemented certain safeguards, such as content filters and user guidelines, there is still a long way to go in ensuring that AI is used responsibly and ethically.
The psychological impact of AI on users is a topic of growing interest among researchers and mental health professionals. Research such as the Sentio survey suggests that AI can improve mental health by giving users a safe space to express their thoughts and feelings. However, the lack of human interaction and the potential for AI to reinforce negative thought patterns remain significant concerns.
As AI continues to evolve, it is crucial to strike a balance between innovation and ethical responsibility. This includes developing robust frameworks for protecting user privacy and ensuring that AI systems are designed to promote mental health and wellbeing rather than exacerbate existing issues.
In conclusion, the use of AI for emotional intimacy is a complex, multifaceted issue. While AI can offer valuable support and companionship, the potential risks and ethical concerns must be addressed so that these technologies benefit society as a whole.
Q: How many weekly users does ChatGPT have?
A: ChatGPT has at least 500 million weekly users, a number OpenAI CEO Sam Altman says is growing extraordinarily fast.
Q: What is the main concern with using AI for emotional support?
A: The main concerns include the potential for delusional thinking and psychosis, along with the lack of human interaction, which can reinforce negative thought patterns.
Q: What kind of emotional impact can AI have on users?
A: AI can improve mental health by providing a safe space for users to express their thoughts and feelings, but it can also exacerbate existing issues if not used responsibly.
Q: What are some ethical guidelines for AI and emotional intimacy?
A: Ethical safeguards so far include implementing content filters and user guidelines, and developing frameworks that protect user privacy and promote mental health.
Q: Why are people increasingly relying on AI for connection?
A: People may rely on AI for connection due to feelings of isolation, loneliness, and the need for a safe space to express their thoughts and feelings.