There’s so much that sucks about Large Language Models, machine intelligence, spicy autocorrect, whatever you want to call it.
But perhaps the deepest suck of all is that the worst people in the world control it. It’s Zuck. It’s Elon. It’s the fanboys of the Torment Nexus.
These are the lizard people we’ve handed the keys to the future to. Not because we trust them or even like them, but because they were holding them when we woke up after drinking that one light beer with the gritty sediment and weirdly metallic taste last night. And now they get to decide what AI does, who it serves, and who gets turned into Soylent Green.
It feels inevitable, but it doesn’t have to be. We have options, and I’m not talking about boycotting ChatGPT or unplugging from the grid. There is no unplugging from the grid.
It’s an adjacent topic worth its own column, but as Jessica Wildfire explains, we’re never escaping the grid:
“It’s not there in front of us all the time, in the form of a car or a gas station down the street, but it’s there. If someone buys water filters for their rain harvesting system, they live on the grid. They live on the grid if they use hammers and nails from a hardware store. If they use anything that was ever made in a factory, even if it was made decades ago, they live on the grid. If they ever post online, they live on the grid.”
And the grid, the whole fucking planet, is going to be saturated in Mecha-Hitler Clippy clones within three years.
By then, our billionaire weirdos and would-be tyrants will be trillionaire weirdos and straight-up cosplaying Thanos.
Look to Norway
But this vast, ultimately unknowable alien machine mind we’re building to replace us as a species need not answer solely to Space Karen and the Zuckerborg or the murderous old clams of the CCP. As in so many things—sovereign wealth funds, taxing fossil fuel companies, and nudie runs from the sauna to the ice bath—Norway shows us how.
Schibsted, the Norwegian media and publishing company, built its own AI called NorLLM. But not in any way Silicon Valley would recognise. Unlike the Nazi-curious plagiarism engines of the Valley, NorLLM was trained only on high-quality content: Norwegian-language books, both fiction and nonfiction, journalism, and government archives, all with full consent and cooperation from publishers and copyright holders.
No scraping. No copyright side-eye. No training Skynet on some incel neckbeard’s 4Chan posts or the psychic scabs of YouTube comment sections.
But the radical genius of Norwegian AI wasn’t just in asking permission to use the best data, and only that data. It was in the model’s philosophical design.
NorLLM is not a public utility in the sense of being some shiny black box at The Mynistry üf Laptöps. But neither is it Schibsted’s private property in the Silicon Valley sense—that is, a vulture capitalist feeding tube stuck into the hacked-open carotid of civil society, sucking out the contents as fast as inhumanly possible.
Instead, it exists in a liminal space between those two poles: a Nordic compromise licensed and governed by the Norwegian University of Science and Technology, which means it’s not owned by the government per se, or by Schibsted either, but is instead managed by people with an actual interest in the public interest.
NorLLM is open access, but not open season. Startups, public agencies, universities, and private companies can all use the model under clear licensing terms.
What you can’t do is steal it, defile it, or pump it full of Elon’s brainfarts until it answers every query with anime porn and quotes from Mein Kampf in haiku. It was not built to answer the question, “How do we make this as addictive as high-fructose opium and charge a monthly subscription fee?”
In effect, it’s AI as a shared civic resource or public infrastructure. Oversight, governance, and a sense of responsibility are baked into the project’s architecture. It’s also good tech, with 23 billion parameters in the first version, and a 40-billion-parameter beast on the whiteboard. And because it’s been built from the ground up with ethics, transparency, and—steady on now—democratic oversight, it offers a vision of AI that isn’t just slightly less bad.
It’s cleaner, smarter, and not at all likely to suggest that your depression can be cured by buying some more Dogecoin and gargling ivermectin.
In this way, Norway hasn’t just built a Large Language Model. They’ve built a blueprint for a form of this technology that might lead us away from the Torment Nexus.
Meanwhile in Australia…
As currently imagined by Google, Meta, OpenAI, and Grok, AI is a monster that feeds on planetary-scale data lakes while boiling oceans and stealing your IP. But other than these losers’ desperate need to make up for the mean things the cool kids said to them in high school, there is no reason this tech has to be harnessed in the service of gigaprofits and neofascism.
In Australia alone, we have massive troves of high-quality data on which we could train our publicly controlled AI models, designing them to serve our interests, not Elon’s or the Borg’s.
There’s no reason why permission to use copyrighted materials couldn’t be sought from the rights holders.
There’s no reason to think that many, if not most, rights holders would refuse permission for an enterprise in the public good.
I’d happily give a public LLM run by the National Library and CSIRO every word I’d ever written. I suspect most writers would, especially if it promised to undercut the business models of the AI bandits who stole our stuff in the first place.
If our national AI were to connect with publicly owned models in other countries, the data pool would rival the stolen resources used by private concerns, at least in scale. And arguably, it would far surpass them in quality. In that world, creators could be stakeholders. Unions, universities, writers guilds, even local governments or high schools could build their own AI tools, using them not to replace human beings but to support them.
The future is ours
AI is already spreading everywhere because the private sector is rushing to capture the first-mover advantage. Like the internet, it will be unavoidable in three years, and like the net, it will be both great and terrible. Mostly terrible, I suspect, if we follow the current trajectory.
One way to make it less shit is to sweep the legs out from under fuckers like Elon Musk and offer the world a version of the technology that is more aligned with the public interest than whatever form his rapidly evolving psychosis takes next week.
- This post first appeared on John Birmingham’s Alien Sideboob. You can read the original here.
- He no longer tweets, having deleted his account, but you can catch him on Bluesky as Birmo.bsky.social