Caitlin Kalinowski Quits OpenAI Over Pentagon Military AI Contract

The hardware leader departed in protest of unchecked surveillance and lethal autonomy in OpenAI's new government partnership.


On March 7, 2026, OpenAI's hardware and robotics organization lost one of its senior leaders when Caitlin Kalinowski resigned. Her departure was not a personal falling out but a fundamental disagreement with the company's new, classified partnership with the U.S. Department of Defense. By putting her principles on the table, Kalinowski has forced a long-overdue conversation about the line between responsible national security work and dangerous, unchecked technology.

The Price of Admission

The tension stems from a rapid shift in OpenAI's government strategy. After Anthropic refused to remove ethical guardrails limiting the scope of AI usage, the Department of Defense cut ties, labeled the company a supply-chain risk, and barred it from federal contracts. OpenAI quickly filled the void, securing a deal to integrate its models into the Pentagon's classified networks.

Kalinowski, who led the company’s hardware efforts, identified two critical red lines that she felt were ignored in the rush to sign the deal: the potential for mass domestic surveillance without judicial oversight and the development of lethal autonomous weapons lacking human authorization. While OpenAI maintains that their agreement includes clear prohibitions against these actions, critics and internal staff argue the contract’s language is dangerously vague compared to the stringent standards previously championed by others in the field.

A New Litmus Test for Silicon Valley

Kalinowski's exit is not an isolated incident; it reflects an industry-wide struggle as top-tier talent pushes back against the commercialization of warfare. With over 1,000 current and former employees from OpenAI and Google signing letters demanding safeguards against surveillance and autonomous weaponry, the message is clear: engineers are increasingly factoring ethics into their career decisions.

This trend poses a significant challenge for companies chasing government contracts. As the U.S. government seeks to standardize 'all lawful use' language in AI agreements—a phrase that could mask broad surveillance capabilities—the industry faces a critical crossroads. The future of AI development will be defined by whether companies can balance national security needs with the foundational principles that attracted the world’s best minds to the field in the first place.


What people are saying

Max Zeff

@ZeffMax

OpenAI’s robotics lead quits over the company’s rushed contract with the Pentagon

Mar 7, 2026 · 95 likes
Caitlin Kalinowski

@kalinowski007

I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are …

Mar 7, 2026 · 59.5K likes · 1.9K replies
Rohan Paul

@rohanpaul_ai

OpenAI’s robotics leader Caitlin Kalinowski quit her job after the Pentagon deal signing by OpenAI. The agreement lets the U.S. military use OpenAI’s tools, but it has caused a massive stir over how AI might be used for spying or war. Kalinowski is a hardware expert who moved …

Mar 8, 2026 · 47 likes · 7 replies

The OpenAI Military Ethics Rift
