Posts tagged "security"
23 posts
An Optional Field
No UI collects it. No daemon transmits it. Every major distribution will carry it.
Buried Treasure in the Registry
Anthropic shipped Claude Code as a minified NPM package. The source map was in the same bundle.
The Derivative Decides
Debian requires consensus to change an Essential package. Canonical requires a VP.
An Insightful Omission
KubeCon Amsterdam 2026 opened with a slide: five pillars for running inference at scale. Security was not one of them.
Open Source, Closed Rack
NVIDIA showcases where your Cumulus contributions went. Are they running the same playbook on OpenShell?
It's a Config Change
Every phone has DNS filtering, app controls, and child accounts. Nobody connects them because every party profits from the status quo.
You Don't Ban Kids from the Road
Meta spent $26.29 million to move age verification off its platforms and onto app stores and devices. A parent might say no to Instagram. But 'should my child go online' is a much easier yes. One gate, not managed by Meta, that lets more customers through.
The Brief That Wrote Itself
Microsoft filed an amicus brief defending Anthropic against the Pentagon. Microsoft also has $5 billion invested in Anthropic and $30 billion in Azure revenue at stake. The principle is real. So is the math.
Age Verification Is a Boolean
The question is not 'who is this person.' The question is 'is this person over 18.' That is a boolean, and the government already has the answer.
When Does a Rewrite Make Sense?
Cases where rewrites introduced real problems, and a case where starting over in a different language might genuinely be the better path.
Is It Memory Safe?
Tools that have worked reliably for years are being rewritten for memory safety. What happens to the communities that built them?
To Protect Children, First Centralize Everything Worth Stealing
Age verification laws require collecting the exact data they exist to protect.
Two Boycotts
The government boycotted Anthropic for refusing unrestricted military access. The public boycotted OpenAI for accepting it. Millions of users left ChatGPT. Claude is now the top AI app in the App Store. The market picked a side.
Does This Look Sensitive to You?
When the recommended defense against data exfiltration is sending your data to a third party first, something has gone wrong.
Windows, Walls, Gates
Microsoft named its operating system 'Windows': transparent, inviting, open. In practice, it became the most opaque piece of software in computing history.
Cannot in Good Conscience
Anthropic refused the Pentagon's ultimatum to remove AI safeguards. Then 220 employees at Google and OpenAI signed a petition saying their companies should have too. One company's conscience and 220 engineers' courage should not be all that stands between frontier AI and unrestricted military deployment.
Safety Was the Product. Now It Is the Obstacle.
Anthropic published RSP 3.0. The commitment to pause training when safety lags capabilities is gone. The Pentagon met with the CEO the day before. The self-regulation experiment has produced its result.
"Don't Action Until I Tell You To..."
Meta's Director of Alignment typed 'STOP OPENCLAW' while the agent deleted 200 emails. The message went into the same queue the agent was already ignoring.
Someone Else Found the Hole
You approved awk:*. An attacker just needs a string in the agent's context window. The permission model is already open.
The Hole You Didn't Know You Were Digging
Your AI coding agent asks to run awk. You click 'don't ask again.' You just granted unrestricted shell execution.
If Walls of Text Were Effective Security, Everyone Would Stop After an SSH Banner
System prompts are the AI agent's SSH banner. Text that tells the agent what it should and should not do, presented before the agent begins operating, enforced by nothing. The industry is layering text on top of text and calling it defense in depth.
I Heard About Prompt Engineering. But This Isn't What I Had in Mind.
AI coding agents prompt you to approve reading your own project directory, writing to your own project directory, and running cut. The permission model does not understand what commands do. It understands what commands are called. The result is approval fatigue that makes every prompt invisible.
My Name Is...?
LLMs memorize fragments of their training data. Those fragments can surface when generating responses to prompts. The question is whether training data actually gets scrubbed thoroughly enough to ensure no private information is part of the model.