Windows, Walls, Gates

Microsoft named its operating system “Windows.” Transparent. Inviting. Open.

In practice, Windows became the most opaque piece of software in computing history. Closed source. Proprietary kernel. Licensing agreements longer than the codebases they governed. Vendor lock-in so deep that entire governments spent decades trying to extract themselves.

They should have called it Walls.

Or, if we’re being honest about who benefited most: Gates.


The naming problem is the trust problem

“Windows” promised transparency. What it delivered was a black box that ran the world’s critical infrastructure for thirty years while nobody outside Redmond could confirm what it was actually doing.

Hospitals ran Windows. Power grids ran Windows. Financial systems, air traffic control, military logistics: Windows. Every one of those deployments was a trust decision made without the ability to verify.

The question was never “does Windows work?” It worked. The question was “can you prove it does only what it claims to do?” For closed-source software, the answer is always the same: no. You are trusting the vendor. You are trusting their incentives. You are trusting that their business objectives and your security requirements will never diverge.

They always diverge. Ask the hospitals that got hit by WannaCry. Ask anyone who watched CrowdStrike brick 8.5 million machines because a vendor update to a kernel-level agent went out without the scrutiny that access to source would have enabled.


Source-available is the honest model

Critical infrastructure is not consumer software. When your DNS fails, your organization goes dark. When your certificate authority misbehaves, every TLS connection in your fleet becomes suspect. For systems where failure cascades, “trust us” is not an engineering position. It is a marketing position. Engineering requires evidence. Evidence requires access. Access requires source code.

Source-available gives you that access. You can read the code. You can audit it. You can verify the system does what it claims. You cannot fork it and rebrand it as your own product, but you can confirm the software running your production environment is not doing something it should not be doing. That is the transparency that matters.

Here is the part the open-source purists do not like to say out loud: code has to be maintained, and maintenance costs money. Open-source projects that live under foundations and review boards still depend on funding to keep the lights on. When that funding comes from corporate sponsors, the incentives follow the cash, not the community. You can call it a democracy, but when the board’s decisions track the interests of whoever is writing the largest check, the openness is procedural, not structural. The license is open. The direction is bought.

Source-available solves this honestly. A company builds the software, sells it, and sustains it with revenue. The business model is visible. The incentives are legible. You know exactly why the vendor is doing what they are doing: because customers are paying for it. That is not a conflict of interest. That is alignment. The vendor succeeds when the software works. The code is right there if you want to verify that it does.

The alternative is open-source software that depends on donations, grants, or the goodwill of corporations with their own agendas. That is how you get critical libraries maintained by one volunteer, or governance captured by the largest contributor. Open is not automatically sustainable, and unsustainable software is a liability regardless of its license.


The tools that won, and why

The infrastructure ecosystem has largely accepted this. Linux, Docker, Kubernetes, Terraform, Ansible, Vault, Envoy, Prometheus, Grafana, PostgreSQL, Elasticsearch, Kafka, Redis, HAProxy, VyOS, Suricata. Some fully open source, some source-available. All of them won because operators could read them. Not because operators always read them, but because when something broke at 3am, the answer was in the source, not in a support ticket queue.

The tools that lost share a common thread: opacity at the wrong layer. Proprietary monitoring agents that phone home to endpoints you cannot inspect. Closed-source security tools that ask for root access with no way to verify what they do with it. Infrastructure platforms that lock your configuration into formats only their product can read.

Every one of these is a wall dressed up as a window.


AI makes this harder, not easier

We spent thirty years learning that closed-source operating systems were a liability for critical infrastructure. Now AI is entering the stack, and the transparency problem gets worse, not better.

Models are opaque by nature. You cannot read the weights the way you read source code. That is not going to change soon, and pretending otherwise is not useful.

Worse: models are non-deterministic. Traditional software can be tested; given the same input, it produces the same output, and you can write assertions against that. A language model given the same prompt twice may produce two different Terraform plans, two different security policies, two different answers about whether a change is safe. The verification techniques we built for deterministic software do not transfer.
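The contrast can be made concrete. Below is a minimal sketch in Python: the function names, the firewall-rule schema, and the invariant are all hypothetical, invented for illustration. The point is only that deterministic code supports exact-equality assertions, while non-deterministic output forces you to assert invariants instead.

```python
import json

def render_firewall_rule(port: int) -> str:
    """Deterministic: same input, same output, every run."""
    return json.dumps({"action": "allow", "port": port}, sort_keys=True)

# Classic testing works: this assertion holds on every execution.
assert render_firewall_rule(443) == render_firewall_rule(443)

def check_model_plan(plan: str) -> bool:
    """For non-deterministic output you cannot assert byte-equality,
    only invariants: here, never expose port 22 to the whole internet."""
    rule = json.loads(plan)
    return not (rule.get("port") == 22 and rule.get("cidr") == "0.0.0.0/0")

# Two model runs may differ byte-for-byte; both must still pass the invariant.
assert check_model_plan('{"action": "allow", "port": 443, "cidr": "10.0.0.0/8"}')
assert not check_model_plan('{"action": "allow", "port": 22, "cidr": "0.0.0.0/0"}')
```

Property-style checks like this recover some of the leverage of traditional testing, but only for properties you thought to write down.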

But opacity in the model is exactly why everything around the model has to be auditable. The code that decides what the model can touch, what credentials it holds, what actions it can take, what evidence it produces: that layer is software, and software can be open. If you cannot inspect the brain, you had better be able to inspect the hands.
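What that auditable layer looks like can be sketched in a few lines. This is not any particular product's API; the allow-list, the log shape, and `gated_execute` are assumptions made up for the example. The structure is the argument: the policy is ordinary readable code, and every model-initiated action leaves a record whether or not it was permitted.

```python
import datetime
import json

ALLOWED_ACTIONS = {"terraform_plan", "read_metrics"}  # hypothetical allow-list
AUDIT_LOG: list[dict] = []                            # in production: an append-only store

def gated_execute(action: str, args: dict, run):
    """Route every model-initiated action through inspectable policy code
    and record it, allowed or not."""
    allowed = action in ALLOWED_ACTIONS
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "args": args,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"model may not perform {action!r}")
    return run(**args)

# The model proposes; the gate decides and records.
result = gated_execute("terraform_plan", {"dir": "prod"},
                       lambda dir: f"planned {dir}")
```

The model itself stays a black box; the gate, the allow-list, and the log are the parts you can read, diff, and audit.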

This is the lesson Windows already taught us, applied one layer up. The model is a dependency you cannot fully audit. The infrastructure that governs what that dependency is allowed to do is something you can audit, and should.


What follows from this

The same standard applies everywhere. If a model is generating Terraform plans, writing security policies, or executing runbooks, the system deciding what it is allowed to touch, the orchestration around it, the governance layer, the policy engine, the audit trail: all of it should be readable. Open-weight models are preferable where they exist. Where they do not, the surrounding infrastructure has to compensate.

We do not get to spend thirty years learning that closed-source infrastructure is a liability and then hand AI a pass because the technology is new and the capability gap is exciting. The capability gap was exciting in 1995 too. That is how we ended up with Windows on every hospital workstation and no way to verify what it was doing.


The gamers got there first

Here is the part nobody predicted: it was not governments, enterprises, or the open-source faithful who finally cracked Windows’ consumer monopoly. It was gamers.

For decades, the standard excuse was “but I need Windows for games.” It was the last mainstream consumer moat. Browsers went cross-platform, office suites moved to the cloud, dev tooling became OS-agnostic. But gaming kept hundreds of millions of users locked in.

Then Valve shipped Proton. A compatibility layer that runs Windows games on Linux, no porting required. Steam Deck put a Linux handheld in millions of hands, and most of those users had no idea they were running Linux. They did not care. They cared that it worked. Lutris, Wine, DXVK, and a sprawling community of contributors filled in the gaps, title by title, anti-cheat by anti-cheat, until “Linux gaming” stopped being a punchline and became a platform.

The gaming community did not wait for permission. They did not lobby Microsoft for source access. They built around the wall. They reverse-engineered the APIs, wrote compatibility layers, tested thousands of titles, filed bugs, and shipped fixes. They turned the last barrier to leaving Windows into a solved problem through sheer, stubborn, open-source effort.

Infrastructure has the same opportunity. The tools that keep teams locked into proprietary platforms are not technically superior. They are familiar. They are entrenched. They benefit from the same inertia that kept gamers on Windows for twenty years. But inertia erodes when something better shows up and actually works.

The gamers proved that a motivated open-source community can dismantle a monopoly’s last stronghold without the monopoly’s cooperation. Infrastructure engineers should be taking notes.


The punchline writes itself

Bill Gates built the most successful walled garden in computing history and named it after the one thing it was not. The product was not a window. It was a gate: controlled entry, controlled exit, controlled everything in between.

Thirty years later, the infrastructure industry is still learning the same lesson: when someone offers you a window into a system you cannot inspect, check whether it is actually a wall. And check whose name is on the gate.


Greg Herbster is the founder of ControlPlane Labs, building open-source infrastructure control planes for DevOps and platform engineering teams. CPLabs is bootstrapped, solo-founded, and committed to the principle that infrastructure you cannot read is infrastructure you cannot trust.