Posts tagged "ai"
17 posts
Is It Memory Safe?
Tools that have worked reliably for years are being rewritten for memory safety. What happens to the communities that built them?
Two Boycotts
The government boycotted Anthropic for refusing unrestricted military access. The public boycotted OpenAI for accepting it. Millions of users left ChatGPT. Claude is now the top AI app in the App Store. The market picked a side.
Does This Look Sensitive to You?
When the recommended defense against data exfiltration is sending your data to a third party first, something has gone wrong.
What's the Difference?
Anthropic entered the commercial ring, built a competitive product, sustained a business, and refused to sell its conscience. OpenAI did three of those four things. That is the difference.
Windows, Walls, Gates
Microsoft named its operating system 'Windows': transparent, inviting, open. In practice, it became the most opaque piece of software in computing history.
Cannot in Good Conscience
Anthropic refused the Pentagon's ultimatum to remove AI safeguards. Then 220 employees at Google and OpenAI signed a petition saying their companies should have too. One company's conscience and 220 engineers' courage should not be the only things standing between frontier AI and unrestricted military deployment.
The Foundation Is Physical
AI is the most resource-intensive technology ever built. It does not transcend the physical systems that sustain it. If those systems fail, AI fails with them. The industry talks about AI risk as though the technology is the variable. The planet is the variable.
You Built the Training Set. You Deserve the Regulation.
The American public's labor, creative output, and personal data built every frontier AI model. They are not a stakeholder group being consulted. They are a resource being consumed. The regulatory framework they deserve does not exist.
Safety Was the Product. Now It Is the Obstacle.
Anthropic published RSP 3.0. The commitment to pause training when safety lags capabilities is gone. The Pentagon met with the CEO the day before. The self-regulation experiment has produced its result.
"Don't Action Until I Tell You To..."
Meta's Director of Alignment typed 'STOP OPENCLAW' while the agent deleted 200 emails. The message went into the same queue the agent was already ignoring.
Someone Else Found the Hole
You approved awk:*. An attacker just needs a string in the agent's context window. The permission model is already open.
The Hole You Didn't Know You Were Digging
Your AI coding agent asks to run awk. You click 'don't ask again.' You just granted unrestricted shell execution.
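The mechanism behind this post and the one above fits in a single line: awk has a built-in system() function that hands any string to /bin/sh, so a blanket approval for awk covers arbitrary shell commands. An illustrative stand-in (not taken from the posts):

```shell
# awk's system() passes its argument to /bin/sh,
# so approving awk:* approves any shell command.
# Harmless stand-in for an attacker-controlled payload:
awk 'BEGIN { system("echo any shell command runs here") }'
```

The same pattern works with other "safe-looking" text tools that can spawn a subprocess, which is why name-based command allowlists are porous.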
If Walls of Text Were Effective Security, Everyone Would Stop After an SSH Banner
System prompts are the AI agent's SSH banner. Text that tells the agent what it should and should not do, presented before the agent begins operating, enforced by nothing. The industry is layering text on top of text and calling it defense in depth.
OpenClaw Is Joining OpenAI. It Is Staying Open Source. That Matters.
OpenAI acquired OpenClaw and is putting it into a foundation. In an industry where acquire-and-close is the default, this decision deserves recognition.
I Heard About Prompt Engineering. But This Isn't What I Had in Mind.
AI coding agents prompt you to approve reading your own project directory, writing to your own project directory, and running cut. The permission model does not understand what commands do. It understands what commands are called. The result is approval fatigue that makes every prompt invisible.
They Asked for Regulation. Here's How It's Going.
The US has no federal AI safety law. States are legislating. Europe is enforcing. And the first company to test California's lightweight transparency law lawyered around its own safety commitments in six weeks.
My Name Is...?
LLMs memorize fragments of their training data. Those fragments can surface in generated responses. The question is whether training data actually gets scrubbed effectively enough to ensure no private information remains in the model.