AI chatbots are sycophants. They will say literally anything if you push them hard enough. You can get them to tell you to kill yourself, or others, or anything at all. They are only dangerous if you believe them. Unfortunately, that's going to be a huge problem.
I really, really don't get the character.ai thing at all, but is this really pro-anorexia? I have seen the fat acceptance crowd label anything remotely resembling calorie counting as pro-eating-disorder.