
For Some Autistic People, ChatGPT Is a Lifeline

The chatbot’s flexibility also comes with some unaddressed problems. It can produce biased, unpredictable, and often fabricated answers, and is built in part on personal information scraped without permission, raising privacy concerns.

Goldkind advises that people turning to ChatGPT should be familiar with its terms of service, understand the basics of how it works (and how information shared in a chat may not stay private), and bear in mind its limitations, such as its tendency to fabricate information. Young said they have thought about turning on data privacy protections for ChatGPT, but also think their perspective as an autistic, trans, single parent could be beneficial data for the chatbot at large.

As with so many other people, autistic people can find knowledge and empowerment in conversation with ChatGPT. For some, the pros outweigh the cons.

Maxfield Sparrow, who is autistic and facilitates support groups for autistic and transgender people, has found ChatGPT helpful for developing new material. Many autistic people struggle with conventional icebreakers in group sessions, as the social games are designed largely for neurotypical people, Sparrow says. So they prompted the chatbot to come up with examples that work better for autistic people. After some back and forth, the chatbot spat out: “If you were weather, what kind of weather would you be?”

Sparrow says that’s the perfect opener for the group: succinct and related to the natural world, which Sparrow says a neurodivergent group can connect with. The chatbot has also become a source of comfort when Sparrow is sick, and of other advice, like how to organize their morning routine to be more productive.

Chatbot therapy is a concept that dates back decades. The first chatbot, ELIZA, was a therapy bot. It came in the 1960s out of the MIT Artificial Intelligence Laboratory and was modeled on Rogerian therapy, in which a counselor restates what a client tells them, often in the form of a question. The program didn’t employ AI as we know it today, but through repetition and pattern matching, its scripted responses gave users the impression that they were talking to something that understood them. Despite being created with the intent to prove that computers could not replace humans, ELIZA enthralled some of its “patients,” who engaged in intense and extensive conversations with the program.

More recently, chatbots with AI-driven, scripted responses, similar to Apple’s Siri, have become widely available. Among the most popular is Woebot, a chatbot designed to play the role of an actual therapist. It is based on cognitive behavioral therapy practices and saw a surge in demand throughout the pandemic as more people than ever sought out mental health services.

But because those apps are narrower in scope and deliver scripted responses, ChatGPT’s richer conversation can feel more effective for those trying to work out complex social issues.

Margaret Mitchell, chief ethics scientist at startup Hugging Face, which develops open source AI models, suggests people who face more complex issues or severe emotional distress should limit their use of chatbots. “It could lead down directions of discussion that are problematic or stimulate negative thinking,” she says. “The fact that we don’t have full control over what these systems can say is a big issue.”

