Published Date: 18/07/2025
A public criticism of AI safety practices, aimed by an OpenAI researcher at a rival lab, opened a window into the industry’s real struggle: a battle against itself. It started with a warning from Boaz Barak, a Harvard professor currently on leave and working on safety at OpenAI. He called the launch of xAI’s Grok model “completely irresponsible,” not because of its headline-grabbing antics, but because of what was missing: a public system card, detailed safety evaluations, the basic artifacts of transparency that have become the industry’s fragile norm.
It was a clear and necessary call. But a candid reflection from ex-OpenAI engineer Calvin French-Owen, posted just three weeks after he left the company, shows the other half of the story. French-Owen’s account suggests that a large number of people at OpenAI are indeed working on safety, focusing on very real threats such as hate speech, bio-weapons, and self-harm. Yet he delivers the key insight: “Most of the work which is done isn’t published,” he wrote, adding that OpenAI “really should do more to get it out there.”
Here, the simple narrative of a good actor scolding a bad one collapses. In its place, the real, industry-wide dilemma is laid bare: the whole AI industry is caught in a ‘Safety-Velocity Paradox,’ a deep, structural conflict between the need to move at breakneck speed to compete and the obligation to move with caution to keep people safe.
French-Owen suggests that OpenAI is in a state of controlled chaos: it tripled its headcount to over 3,000 in a single year, and “everything breaks when you scale that quickly.” This chaotic energy is channeled by the immense pressure of a “three-horse race” to AGI against Google and Anthropic. The result is a culture of incredible speed, but also one of secrecy.
Consider the creation of Codex, OpenAI’s coding agent. French-Owen calls the project a “mad-dash sprint” in which a small team built a revolutionary product from scratch in just seven weeks. This is a textbook example of velocity. He describes working until midnight most nights and even through weekends to make it happen; that is the human cost of that velocity. In an environment moving this fast, is it any wonder that the slow, methodical work of publishing AI safety research feels like a distraction from the race?
This paradox isn’t born of malice, but of a set of powerful, interlocking forces. There is the obvious competitive pressure to be first. There is also the cultural DNA of these labs, which began as loose groups of “scientists and tinkerers” and still value breakthroughs over methodical processes. And there is a simple problem of measurement: it is easy to quantify speed and performance, but exceptionally difficult to quantify a disaster that was successfully prevented.
In the boardrooms of today, the visible metrics of velocity will almost always shout louder than the invisible successes of safety. Moving forward, then, cannot be about pointing fingers; it must be about changing the fundamental rules of the game. We need to redefine what it means to ship a product, making the publication of a safety case as integral as the code itself. We need industry-wide standards that prevent any single company from being competitively punished for its diligence, turning safety from a feature into a shared, non-negotiable foundation.
Most of all, though, we need to cultivate a culture within AI labs where every engineer, not just the safety department, feels a sense of responsibility. The race to create AGI is not about who gets there first; it is about how we arrive. The true winner will not be the company that is merely the fastest, but the one that proves to a watching world that ambition and responsibility can, and must, move forward together.
Q: What is the Safety-Velocity Paradox in the AI industry?
A: The Safety-Velocity Paradox is a deep, structural conflict in the AI industry between the need to move at breakneck speed to compete and the moral need to move with caution to ensure safety.
Q: Who is Boaz Barak, and what was his criticism?
A: Boaz Barak is a Harvard professor currently on leave and working on safety at OpenAI. He criticized the launch of xAI’s Grok model as ‘completely irresponsible’ due to the lack of transparency and safety evaluations.
Q: What did Calvin French-Owen reveal about OpenAI?
A: Calvin French-Owen, a former OpenAI engineer, revealed that while many at OpenAI are working on safety, much of this work is not published. He also described the company’s rapid growth and the intense pressure to move quickly.
Q: What is the human cost of the velocity in AI development?
A: The human cost of velocity in AI development includes long working hours, often until midnight and through weekends, which risks burnout and crowds out slower work such as publishing safety research.
Q: How can the AI industry address the Safety-Velocity Paradox?
A: The AI industry can address the Safety-Velocity Paradox by redefining product shipping to include safety cases, establishing industry-wide standards, and fostering a culture of responsibility among all engineers.