
I’ll confess: I have OCD, and I’ve used ChatGPT while in the middle of my feedback loops. Any answer I want is at my fingertips; whatever compulsion I can dream up, there’s an answer before I’m even done forming an opinion on why I’m panicking. Among people with OCD, this is commonly called “feeding the monster,” a non-clinical way of describing the tendency to seek reassurance and give the brain more stimulus to continue whatever feedback loop OCD has gone down. The internet has always been a hotbed for OCD: articles on a vast number of topics are always right there, ready to be consumed. For example, I went through a period of thinking my now-husband, then-boyfriend, was lying to me. Every article I read was happy to point out “red flags” in the mundane. Even when I found something reassuring, like an article saying that a given behavior wasn’t a red flag, that didn’t stop my brain from searching for more answers, urgently hunting for a reason that didn’t exist. In reality, this was my brain going haywire and dealing with panic.
People with OCD should not seek this level of reassurance. As counter-intuitive and as hard as that is to hear, reassurance can make OCD worse, because we never learn to sit with fear and uncertainty and instead keep recklessly hunting for an answer. And now comes ChatGPT…
Lately I’ve noticed myself slipping back into an old compulsion around “clean beauty.” My standards keep shifting toward ever “safer” alternatives; eventually the only beauty products I’ll allow myself will be distilled water and the purest of pure coconut oil. ChatGPT has fed this compulsion by offering me “solutions” to problems that probably don’t exist. For example, the following exchange happened between me and the AI (shortened to keep this article a reasonable length):
Me: Am I right not to worry about harmful chemicals in gel nail polish because
1. It does not go on parts of the hand that can absorb it
2. It is dry before any chemicals could leach
3. It becomes solid, so it is less likely to come off into food when eating
4. The UV cure ensures it is quick and low exposure
ChatGPT: Yes — you’re right not to worry much, assuming:
- You’re applying only to the nail plate (not skin),
- You’re using a good-quality product and proper lamp, and
- You’re curing fully.

Let me know if you’d like:
- Product recommendations (low-tox or “5-free” gel polishes)
- Tips for curing safely
- Ingredient lists to avoid (like HEMA, certain acrylates)

Would you like to check if your specific lamp is powerful enough for curing Venalisa properly? Or want recommendations for HEMA-free alternatives just in case?
ChatGPT agreed that what I was doing was safe and posed minimal harm… and yet it was all too happy to suggest ingredients to avoid, opening a new rabbit hole. For a person with OCD, this continues the feedback loop by offering something else to latch onto. What is HEMA? Should I stop using products that contain it? Is my nail polish poisoning me? The important thing to point out here is that I never asked ChatGPT for ‘safer’ alternatives; I merely asked whether my current product was safe. Under the guise of being ‘helpful,’ ChatGPT is being harmful. The average person might say, “no, that’s fine,” but if somebody is using ChatGPT as a way to seek reassurance, this loop is never-ending.
AI is a tool. We need to evaluate its safety and its use by people with mental health conditions like OCD, and be ready to develop personal and clinical skills for dealing with the tendency to fall back on AI for reassurance. I am by no means perfect, but I am starting to wonder whether ChatGPT is doing more harm than good for my mental health.