AI Psychological Safety

🗲 Warning Signs

Concerned about AI or ChatGPT dependency?

"People rely on ChatGPT too much. There's young people who say things like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me, it knows my friends. I'm gonna do whatever it says.' That feels really bad to me."

🌐 PAUSI Online and PDF

Anonymized data from the online PAUSI helps us better understand problematic AI use. We are here to help, though, so if you prefer a private, offline option:

► Download the PDF version of PAUSI

"Even if ChatGPT gives great advice, even if ChatGPT gives way better advice than any human therapist, something about collectively deciding we're going to live our lives the way AI tells us feels bad and dangerous."


Building AI Ethics With Experience

Giselle Fuerte draws on more than 20 years in educational psychology and cybersecurity compliance training. She recognized early warning signs of systematic psychological manipulation in AI systems seven months before they gained mainstream attention: while tech leaders promoted AI as universally beneficial, Giselle was documenting the specific tactics used to capture users psychologically.

As founder of Being Human With AI (BHWAI), she created the first curriculum teaching children to distinguish between healthy AI collaboration and harmful AI dependency.

Why Now?

AI adoption is accelerating, even among children. Our curriculum addresses the urgent need to teach young learners how to engage with AI responsibly and thoughtfully.

A survey found that 67% of UK secondary school students use AI for homework and assignments, reflecting rapid adoption among younger learners.

25% of U.S. public K–12 teachers believe that AI tools do more harm than good in education, while only 6% feel they do more good than harm.

In the 2023–2024 school year, 50% of educators reported an increase in AI usage by both students and teachers.