A paper in JAMA Psychiatry says mental health providers should ask whether patients are using artificial intelligence chatbots, just as they ask about sleep habits and substance use.
Generative AI chatbots are now used by more than 987 million people globally, including around 64% of American teens, ...
What is the long-term effect of using LLM chatbots for daily tasks? According to a study (DOI link) by Steven D. Shaw and ...
More and more people are using artificial intelligence chatbots, and troubling stories have emerged about some of those interactions. Kashmir Hill, technology reporter for The New York Times, ...
A new paper from researchers at Stanford University has evaluated five chatbots designed to offer accessible therapy, using criteria based on what makes a good human therapist. Nick Haber, an ...
Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission ...
Younger Americans are more likely to use social media at least sometimes for health information than their older peers.
Meta, the parent company of Instagram and Facebook, plans to roll out new safety features for its AI chatbots to help protect teens amid growing concerns about the technology’s impact on young users.
A new paper in JAMA Psychiatry argues that mental health care providers should routinely ask clients about their use of AI for emotional support and health information.
Harassing bots with “funny violence.” Confiding about a broken heart. Chatting with a block of cheese. Filling a void of ...