Innovation Without Permission: The Case for Permissionless Innovation

Why the freedom to experiment without prior approval has driven human progress, and why it's worth defending.

In 1876, Alexander Graham Bell applied for a patent on the telephone just hours before a competitor, Elisha Gray. Had the timing been different, the entire history of telecommunications might have unfolded differently. But here’s what didn’t happen: Bell didn’t have to get government permission to invent the telephone. He didn’t need a license to experiment with electrical signals. He didn’t have to prove the technology was safe before building it.

This freedom — the ability to innovate without prior approval — has been called “permissionless innovation,” a term popularized by policy analyst Adam Thierer. It’s the principle that people should be free to develop and deploy new technologies without first obtaining the blessing of regulators. And it’s increasingly under threat.

The Historical Pattern

Consider how transformative technologies have emerged. The printing press, the steam engine, the automobile, the airplane, the personal computer, the internet — none required innovators to demonstrate safety or social benefit before development. Regulation came later, after the technologies existed and their effects could be observed.

This isn’t because early innovators were reckless or regulators negligent. It’s because predicting the effects of genuinely new technologies is extraordinarily difficult. The printing press enabled both scientific progress and religious warfare. The automobile brought mobility and pollution. The internet created unprecedented connection and unprecedented surveillance.

Requiring pre-approval for innovation would have demanded that inventors predict effects they couldn’t foresee, using frameworks that didn’t yet exist. It would have given enormous power to those who happened to be in regulatory positions at particular moments — people with no special insight into the future and plenty of reasons to be cautious.

The Precautionary Trap

The alternative to permissionless innovation is some version of the precautionary principle: the idea that new technologies should be restricted until proven safe. This sounds reasonable — who’s against safety? — but it has a fundamental asymmetry.

The precautionary principle counts the costs of action but not the costs of inaction. If we restrict a technology that would have been beneficial, that cost is invisible — the lives not saved, the problems not solved, the flourishing not achieved.

Consider pharmaceuticals. The FDA’s caution after the thalidomide disaster has undoubtedly prevented harmful drugs from reaching the market. But it has also delayed beneficial ones. Economist Daniel Klein has estimated that the lives lost to FDA delays far outnumber the lives the agency’s caution saves — people died waiting for approvals that eventually came.

The precautionary principle also assumes regulators have the knowledge to evaluate novel technologies. But genuinely new things are, by definition, things we don’t yet understand. The people best positioned to understand a new technology are usually those developing it — not distant bureaucrats applying generic frameworks.

The AI Question

This brings us to artificial intelligence. There are growing calls to regulate AI ex ante — to require licenses, pre-deployment testing, or outright bans on certain capabilities. The arguments echo historical debates: AI is too dangerous to develop without oversight. The risks are too great. We must be cautious.

These concerns deserve serious engagement. AI does pose novel risks. But so did every transformative technology. The question isn’t whether AI is risky — it is. The question is whether pre-emptive restriction is the best response.

I’m skeptical for several reasons:

We don’t know what we don’t know. The specific capabilities that will matter, the applications that will emerge, the problems that will arise — these are genuinely unpredictable. Regulations written today will target the wrong things.

Regulatory capture is real. Incumbent companies have strong incentives to shape regulations in ways that protect their position. The loudest voices calling for AI regulation often belong to companies that have already built large models and would benefit from barriers to entry.

Development simply moves elsewhere. Unlike many earlier technologies, AI development is global. Restricting innovation in one jurisdiction doesn’t prevent it; it relocates it, often to places with less concern for safety and ethics.

A Better Path

None of this means doing nothing. But it suggests a different approach:

Focus on harms, not capabilities. Rather than trying to predict which capabilities are dangerous, regulate specific harmful uses. Fraud is fraud whether or not AI is involved. Discrimination is discrimination. We don’t need AI-specific statutes for offenses the law already recognizes.

Liability, not licensure. Make developers responsible for harms their systems cause. This creates incentives for safety without requiring regulators to predict the future.

Invest in resilience. Rather than trying to prevent all risks, build systems that can detect and respond to problems. This approach acknowledges our uncertainty while maintaining our ability to learn.

Preserve space for experimentation. Allow researchers and developers to try things that might not work. This is how we learn. The cost of occasionally failing is far less than the cost of never trying.

The freedom to innovate without permission isn’t just economically valuable — it’s a form of intellectual freedom. It says that new ideas deserve a chance to prove themselves, that the future should be discovered rather than prescribed, that ordinary people can shape the world without first convincing officials.

This freedom has costs. Some innovations fail. Some cause harm. But the alternative — a world where every new idea requires approval — has costs too. They’re just harder to see.