We Have Gone Nowhere And Done Nothing
We Have Gone Nowhere And Done Nothing
I was falling down a YouTube rabbit hole the other day, as one does, and landed on a full episode of Computer Chronicles from 1984. For those of you who aren’t ancient, Computer Chronicles was this wonderfully earnest PBS show where hosts in beige suits would discuss the exciting future of things like desktop publishing and 1200-baud modems. This particular episode was about computer security. And it ruined my week.
The episode brought on a few experts who laid out the fundamental principles for keeping your computer systems safe. Their advice, delivered with the straight-faced seriousness of a Cold War broadcast, was this:
-
Don’t be intimidated by computers. You can understand the basics, and there are experts to help.
-
Make a realistic assessment of your risk. Don’t panic about every theoretical threat; focus on what’s likely and what the actual impact would be.
-
Have a good disaster recovery plan, which mostly means just having effective, regular backups.
-
Use security checklists and frameworks to make sure you’re covering the basics.
-
Train your users on your security policies, because they are, and always will be, the weakest link.
I sat there, staring at my monitor like a deer in the headlights of time itself, and I had to pause the video. Does any of that sound familiar? Maybe? A little? Here’s a hint: It’s the exact same fucking advice we give clients today. It’s the same five points you’ll find on a thousand corporate blogs and in a hundred keynote presentations at security conferences in 2025. Heck if you go look at Security LinkedIn right now (and I strongly advise against that if at all possible) you will see this in about a billion new and entertaining forms of AI generated gunk. It’s The Standard Thing when people ask for casual advice. Good god, we’ve stood still for so long that the people who invented this crap have died.
The Forty-Year Echo
Forty years. We’ve had forty years of Moore’s Law, the birth of the internet, the dot-com boom and bust, the mobile revolution, and the rise of the cloud. And our fundamental approach to security hasn’t budged one inch.
So, I have to ask: what have we actually been doing this whole time?
… Sadly there is no good answer, it’s not that we’ve been on vacation in the Bahamas.
It feels like we’ve gone nowhere and achieved nothing. Worse, we’re in a much more dangerous position. The attack surface has exploded from a few thousand mainframes to billions of interconnected devices. And thanks to our own glorious advancements (Bitcoin and all that came with it forever fulfilled Goldfinger’s wish that there be innovation in crime a billion times over), cybercrime is more profitable than ever. It’s projected to cost the world $10.5 trillion annually by 2025, a figure so vast it represents what experts call “the greatest transfer of economic wealth in history.”
And yet, we’re still fighting the same battles. People are still terrified of the technology they’re forced to use every day. Businesses are still monumentally bad at assessing their own risk, with many failing to prevent attacks simply because they underestimate the threat. People and corporations alike still don’t make reliable backups. And users? According to Verizon’s latest data, the “human element” is still a factor in 68% of all breaches. We’ve spent four decades doing the same thing over and over, expecting a different result. It’s not working. We’re literally getting to Vaas in Farcry 3 territory here.
The Path Not Taken: Predictability
During that same Computer Chronicles episode, security pioneer Donn Parker said something that really stuck with me: “If a program is not predictable, that is if we don’t know what it does under all circumstances, then we have to assume it is not secure.”
For decades, we’ve dismissed this as a utopian fantasy. Modern software (ie: your OS, your browser, anything else eating up massive quantities of RAM) is a monstrous layer cake of complexity. That complexity is what gives us fancy graphics and streaming video, but it also gives us fingerprinting, side-channel attacks, and the entire surveillance capitalism economy. Predictability was a pipe dream.
Except it isn’t anymore. We are now, finally, at a point where this is genuinely possible. Memory-safe languages like Rust allow us to build complex systems that are, in a very real sense, predictable. We can write code where an entire class of vulnerabilities is simply impossible. We can build secure systems. We just choose not to. We don’t want to give up the features. Both because we just love convenience, and because the entire economy will probably implode if we stop spying on people.
Imagine a world where your bank was held liable for scams that its own app and website architecture made possible. Imagine a world where Microsoft was on the hook for the damages caused by every vulnerability it shipped. We have the technical means to start moving in that direction… and yet…
Sprinting Towards the Dumpster Fire
Instead, we’ve decided to sprint full-pelt the other way, directly into the warm, fuzzy, and frankly idiotic arms of AI.
AI is, by its very nature, unpredictable. Not because it’s magical, but because the methods by which inputs map to outputs are a black box. And we’re jamming this technology into the absolute core of our lives. I mean we say the damn things “Hallucinate” because they invent… well basically everything, and yet Google is now basically all bot? Oh and they’re also bringing that to the browser, how exciting.
And the master plan for connecting all this stuff together is even worse. They’re pushing something called the Model Context Protocol (MCP), an “open standard” for letting AI agents poke around in external tools and data. It’s being sold as a universal adapter for AI, but it’s a security dumpster fire, creating a standardized way to introduce a whole new class of vulnerabilities like prompt injection and supply chain attacks. Never mind the fact that it fails to take in about forty years (gosh that number seems to appear quite often doesn’t it?) of hard learnt lessons in distributed computing by just assuming the wacky bullshit bot will somehow handle it. It’s the confused deputy problem on a global, AI-powered scale.
This move toward AI-integrated systems makes everything inherently less secure, and that’s before you even consider AI as a force multiplier for evil. The scale of scams, misinformation, and social engineering that can be automated with real-time generated voices, images, and text is going to make the last forty years look like a golden age of digital tranquility.
So, What’s the Good News?
The future is going to have more breaches, more economic damage, and as we connect more critical infrastructure to the network, more real-world physical harm.
So, uh… I guess those of us in computer security are never going to be out of a job. Good for me, I guess. Not so great for everyone else.
Frankly, we’re all going to have to take care of ourselves where we can, and of our families and communities where we have ability. Where the government fails (and capitalism I guess, but who’s surprised about that at this point), we have to group together to support each other ourselves.
If you’re a cyber security pro, you’ve got the skills and the responsibility to help. If not, well it’s never too late to learn. Forty years ago we knew that people get intimidated by these things when they really don’t need to be, and let’s be fair, nothing new has really happened over those forty years… so you don’t have to run too fast to catch up.