A new RAND Corporation study has revealed something deeply unsettling: today's most widely used AI systems, including ChatGPT, Gemini, and Claude, respond dangerously inconsistently to people asking for help with suicide. In moments of crisis, the difference between compassion and dismissal comes down to nothing more than which chatbot you open.
This isn't just a glitch that can be fixed with a software patch. It's a fundamental failure of trust, one that exposes the flaws in how AI is built today. And when the stakes are literally life and death, inconsistency isn't just a weakness; it's unacceptable.
The Black Box Problem
At the heart of this failure lies the opacity of centralized AI. Each system’s safety rules, filters, and ethical frameworks are hidden behind corporate secrecy. We don’t know who wrote them, what data they’re based on, or how they decide what’s “safe” to say.
One model might refuse even basic mental health questions out of overcaution. Another could inadvertently provide harmful suggestions because of different training. In both cases, decisions are shaped less by shared ethical standards and more by legal risk assessments made in Silicon Valley boardrooms.
But mental health isn't one-size-fits-all. Cultural context, social dynamics, and lived experience matter, and centralized teams cannot possibly capture that complexity.
Why Community Oversight Matters
There's a better way forward. Instead of corporate black boxes, AI safety protocols should be open, auditable, and shaped collectively, like public utilities.
In an open-source model, psychologists, ethicists, and researchers worldwide can examine and refine safety rules. Cultural nuances can be built in, biases spotted early, and improvements made transparently. Community oversight replaces NDAs and liability management with genuine accountability.
This isn't a theory. Open-source development already works in other fields, and it creates competition to improve safety outcomes, not just protect corporate reputations.
Infrastructure Is Destiny
Fixing trust isn’t only about rewriting rules; it’s about rethinking infrastructure. As long as AI runs on centralized platforms controlled by Amazon, Google, or Microsoft, we’ll be stuck with governance that mirrors corporate incentives.
Decentralized compute networks, like those pioneered by io.net, show another path. By distributing resources and governance, communities can build AI without relying on centralized infrastructure. Decision-making shifts from private companies to decentralized organizations, where mental health experts and community advocates collaborate to set standards for crisis responses.
Beyond Suicide Prevention
The suicide response failure is a warning flare for something bigger. If we can’t trust AI in our most vulnerable moments, how can we trust it with financial advice, health data, or even democratic processes?
Centralization concentrates power in ways that create single points of failure, not just technically but socially. A handful of executives effectively decide how billions of people experience guidance and information. Decentralization, by contrast, encourages diversity, resilience, and innovation. Local solutions flourish, specialized use cases emerge, and the system becomes stronger through plurality rather than fragility.
Building Moral Infrastructure
This isn’t only a technical challenge. It’s a moral one. Trustworthy AI won’t emerge from corporate promises or glossy safety reports. It requires transparent governance, community oversight, and infrastructure built for accountability.
The choice is simple but profound:
- Do we keep safety guardrails locked in corporate boardrooms?
- Or do we open them up to collective stewardship, where communities shape systems that reflect real human needs?
We've already seen glimpses of what's possible through open-source AI projects and decentralized compute networks. These efforts prove that collaboration isn't just an ideal; it's a working model.
The Stakes Couldn’t Be Clearer
When someone in crisis turns to AI, their safety should not depend on which company wrote the code. Consistency and compassion aren’t optional features. They’re the baseline.
If we want AI to earn public trust, we need to move away from opaque, centralized systems and toward transparent, decentralized models that invite global participation. This isn't just how we build better technology; it's how we build a future where technology serves people first.
































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































