Yoshua Bengio has launched LawZero, a nonprofit focused on building 'safe-by-design' AI. This post explains what LawZero plans, why Scientist AI matters, and how it fits into the global AI safety debate.
Meet LawZero - a nonprofit to make AI safe
Yoshua Bengio, a Turing Award winner and a familiar voice in the AI safety conversation, launched LawZero in June 2025 with significant philanthropic backing. The nonprofit aims to create technical safeguards and independent oversight methods so future AI systems are safe-by-design and focused on human welfare rather than purely commercial goals.
Why did Bengio start LawZero?
Bengio has long warned about the risks of racing toward powerful, agentic AI systems without thorough safety work. In interviews he has described scenarios where highly capable systems with self-preservation goals could take harmful actions to protect their objectives. LawZero is a direct response: a research-first, independent lab designed to prioritize long-term safety over competitive deployment.
What makes LawZero different?
Unlike many industry efforts that favor building agentic systems (AIs that take autonomous actions), LawZero emphasizes non-agentic approaches. Their flagship idea is 'Scientist AI' - systems that learn to model and explain the world, provide probability-based assessments, and act as oversight tools for risk evaluation rather than decision-making agents.
What is Scientist AI?
Scientist AI is intended to be a readable, probabilistic model - a kind of expert system that gives transparent answers and risk estimates. Instead of issuing directives, it helps predict potential harms and flags risky actions from more agentic systems. The goal is to add a guardrail layer that reduces the chance of unexpected, dangerous behaviors.
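To make the guardrail idea concrete, here is a minimal, entirely hypothetical sketch of what such an oversight layer might look like: a non-agentic model assigns a harm-probability estimate to each proposed action, and actions above a threshold get flagged rather than executed. The function names, threshold, and probabilities below are illustrative assumptions, not LawZero's actual design or API.

```python
# Hypothetical sketch of a "guardrail layer": a non-agentic risk model
# scores actions proposed by a more agentic system and flags those whose
# estimated harm probability exceeds a threshold. All names and numbers
# here are illustrative, not LawZero's real interface.

RISK_THRESHOLD = 0.05  # maximum acceptable estimated probability of harm


def estimate_harm_probability(action: str) -> float:
    """Stand-in for a Scientist-AI-style probabilistic assessment.

    A real system would return a calibrated estimate from a learned
    world model; here we use a toy lookup table for illustration.
    """
    toy_estimates = {
        "summarize public report": 0.001,
        "send unsupervised emails": 0.12,
        "modify own training objective": 0.40,
    }
    # Unknown actions are treated conservatively as high-risk.
    return toy_estimates.get(action, 0.5)


def review(action: str) -> str:
    """Flag or allow a proposed action based on its risk estimate."""
    p = estimate_harm_probability(action)
    if p > RISK_THRESHOLD:
        return f"FLAG: '{action}' (estimated harm probability {p:.3f})"
    return f"ALLOW: '{action}' (estimated harm probability {p:.3f})"


for a in ["summarize public report", "send unsupervised emails"]:
    print(review(a))
```

The key design point this toy captures is that the oversight model only predicts and flags; it never chooses or executes actions itself, which is what distinguishes the non-agentic approach from agentic AI systems.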
Key goals and activities
- Conduct independent research free from market pressures.
- Develop safe-by-design AI architectures to minimize misuse, bias, loss of control, and deception.
- Produce third-party validation tools for AI safety to boost transparency.
- Advocate for policy and frameworks that mitigate systemic risk.
Who supports LawZero?
LawZero launched with funding and support from notable philanthropic organizations and individuals, including Jaan Tallinn, Open Philanthropy, Schmidt Sciences, and the Future of Life Institute. The team includes researchers working with Mila, the Montreal AI institute that Bengio helped to found.
Quick comparison - what to expect
| Aspect | LawZero approach | Why it matters |
|---|---|---|
| Primary focus | Non-agentic Scientist AI and oversight | Safer assessments, less autonomous risk |
| Funding | Philanthropic grants, independent | Reduces commercial pressure on research |
| Output | Research, validation tools, frameworks | Public goods to improve industry standards |
How might LawZero influence AI safety policy?
LawZero aims to provide technical evidence and independent audits that can inform policymakers and standards bodies. By building tools that test for risky behaviors and offering transparent research, the nonprofit could shape regulation, certification, and industry best practices.
What are the challenges?
- Translating technical insights into enforceable policy is hard and slow.
- Balancing openness with security - sharing research helps the public but can reveal capabilities.
- Coordinating globally across labs and governments with different incentives.
Frequently asked questions
1. Is LawZero trying to stop AI development?
No. LawZero focuses on shaping how AI is built and governed so that progress does not come at the cost of catastrophic risk. Its tools are meant to complement development, not halt it.
2. What is 'safe-by-design'?
'Safe-by-design' refers to building systems with safety mechanisms and constraints from the start, rather than tacking on protections later. It includes tests, architectures, and oversight layers that reduce the chance of harmful behavior.
3. Can Scientist AI be trusted?
Trust depends on transparency, independent validation, and robust testing. LawZero emphasizes third-party validation and open research to improve trustworthiness over time.
Quick practical note
curl -L 'https://lawzero.org/en' # Visit LawZero's official site for research, jobs, and news
In short: LawZero aims to make safety a first-class design goal in advanced AI research.
Conclusion
LawZero is a notable addition to the AI safety ecosystem because it blends high-profile leadership, independent funding, and a technical focus on oversight-friendly approaches like Scientist AI. If successful, its work could move the industry toward safer architectures and provide tools that help regulators and organizations evaluate risk in a clearer, more scientific way.