Web2’s Lesson for AI: Decentralize to Protect Humanity
This is going to sound presumptuous coming from a guy who doesn’t write code, let alone have any direct experience in machine learning or artificial intelligence research.
But I gotta say it: The recent alarmist demand for a six-month pause, or even a militarily enforced shutdown, of AI research – coming from people with experience, money and influence in the artificial intelligence industry – is founded on fundamentally flawed thinking that would encourage the very destructive outcome for humanity we seek to avoid. That the U.S. government is simultaneously orchestrating a crackdown on the crypto industry, a field of open-source innovation that develops the kind of cryptography and network coordination technologies needed to manage AI threats, makes this an especially dangerous moment for all of us.
These doomsayers are computer scientists, not students of economic history. The issue is not, in and of itself, that an out-of-control AI could evolve to kill us all. (We all know that. For decades, Hollywood has taught us that it is so.) No, the task is to ensure that the economics of AI don’t intrinsically encourage that horrific result. We must prevent concentrated control of the inputs and outputs of AI machines from hindering our capacity to act together in the common interest. We need collective, collaborative software development that creates a computational antidote to these dystopian nightmares.
The answer does not lie in shutting down AI innovation and locking ChatGPT creator OpenAI, the industry leader that has taken the field to its current level of development, into pole position. On the contrary, that’s the best way to ensure the nightmare comes true.
Web2’s lessons
We know this from the debacle of Web2, the ad-driven, social platform-based economy in which the decentralized Web1 internet was re-centralized around an oligarchy of all-knowing “surveillance capitalism,” and we humans became its victims: a passive source of personal data extracted from us and recycled into behavior-modifying algorithms.
All of this happened, not because the platforms were morally inclined to abandon Google’s “Don’t Be Evil” maxim, but because the logic of the market pushed them into this model. Ads provided the revenue, and an ever-growing pool of users on their platforms provided the data with which the internet titans could shape human behavior to maximize returns on those ads. Shareholders, demanding those exponential gains continue, pressured them to double down on this model to “meet the number” each quarter. As network effects kicked in and the platforms attracted more users through a self-reinforcing growth function, the data extraction models became more lucrative and more difficult to abandon as Wall Street’s expectations were benchmarked ever higher.
This exploitative system will go into overdrive if AI development defaults to the same monopolistic structure. The solution is not to halt the research but to incentivize AI developers to devise ways to subvert that model.
Updating capitalism
For centuries, market capitalism encouraged competition among entrepreneurs for market share, generating wealth and productivity. It fostered wealth disparities but, in the long run, with the help of antitrust, union and social safety net laws, it produced unprecedented gains in wellbeing worldwide.
But the system was built for an analog economy, one that revolved around the production and sale of physical things, a world where the constraints of geography put a burdensome cost of capital on growth opportunities. The internet age is very different. It’s one of self-reinforcing network effects, where the efficiencies of software production allow market leaders to rapidly expand market share at very low marginal cost, and where the most valuable commodity is not physical, such as iron ore, but intangible – it’s human data.
We need a new model of decentralized ownership and consensus governance, one that’s built on incentives for competitive innovation but has, within it, a self-correcting framework that drives that innovation toward the public good.
Inspired by Jacob Steeves, a founder of decentralized AI-development protocol Bittensor, I believe crypto technology can help define what that future looks like – even if we need guardrails for it.
“We’re saying let’s build open ownership of AI,” Steeves said of Bittensor’s tokenized decentralized building model for AI on this week’s “Money Reimagined” podcast. “If you can contribute, you can own it. And then, let’s let the people decide.”
The philosophical idea is that sufficiently decentralized ownership and control would prevent any single party from dictating AI development and that, instead, the group as a whole would opt for models favorable to the collective. “If we all had a piece of this, then this thing is not going to come back and hurt us because at the end of the day the fundamental currency of AI, the fundamental ownership part is in your wallet,” Steeves said.
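To put some flesh on that idea, here’s a minimal sketch in Python of what stake-weighted collective decision-making might look like. The names and structure are entirely hypothetical, my own illustration rather than Bittensor’s actual mechanism:

```python
from collections import defaultdict

# Hypothetical illustration of token-weighted governance; not Bittensor's actual API.
def tally(votes: dict[str, str], stakes: dict[str, float]) -> str:
    """Each wallet's vote on a proposal counts in proportion to its stake."""
    totals: dict[str, float] = defaultdict(float)
    for wallet, choice in votes.items():
        totals[choice] += stakes.get(wallet, 0.0)
    return max(totals, key=totals.get)  # the choice backed by the most stake wins

stakes = {"alice": 40.0, "bob": 35.0, "carol": 25.0}
votes = {"alice": "open-model", "bob": "closed-model", "carol": "open-model"}
print(tally(votes, stakes))  # "open-model": no single wallet dictates the outcome
```

The point of the sketch is the incentive structure: because the deciding weight is spread across many wallets, steering development toward a harmful model requires capturing a majority of the stake, not just a boardroom.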
Overly utopian? Maybe. The long list of scams in crypto history means that many will instinctively imagine a crypto-AI model being hijacked by nefarious actors.
But if we’re going to forge a common project based on open-source innovation and collective governance, the economic phenomena that look most like what we need are the ecosystems that have sprung up around blockchain protocols.
“Ethereum and Bitcoin are the largest supercomputers in the world, measured in hashes,” Steeves said. “Those networks – for good or ill, whether or not you position yourself on either side of the power debate – are mega structures. They are the largest computing mega structures that humanity has ever created … they’re hundreds of times larger than the data warehouses of companies like Google.”
Regulatory capture
OpenAI, the company behind ChatGPT and the GPT-1 through GPT-4 large language models (LLMs), is structured very differently from those blockchain ecosystems. It’s a private company, one that has just taken in a $10 billion (with a B) investment from tech giant Microsoft.
And while its CEO Sam Altman has not, as yet, joined Tesla CEO and OpenAI investor Elon Musk as one of the more than 25,000 signatories to an open letter calling for a six-month pause in AI development, many believe the company would be a direct beneficiary if the letter’s demands were implemented: a pause would make it harder for any competitor to challenge OpenAI’s dominance, handing Altman’s company control of AI development going forward.
“The letter serves to rally public support for OpenAI and its allies as they consolidate their dominance, build an extended innovation lead and secure their advantage over a technology of fundamental importance to the future,” wrote cryptocurrency pioneer Peter Vessenes in a CoinDesk op-ed this week. “If this occurs, it will irreparably harm Americans – our economy and our people.”
Imagine if, Vessenes wrote, “in 1997, Microsoft and Dell had issued a similar ‘pause’ letter, urging a halt to browser innovation and a ban on new e-commerce sites for six months, citing their own research that the internet would destroy brick-and-mortar stores and aid terrorist finance. Today we’d recognize this as self-serving alarmism and a regulatory capture attempt.”
OpenAI may now run a closed system, but the LLM approach to machine learning is out in the wild, being replicated in all sorts of ingenious ways. How on earth is an agreement by U.S. scientists, or even an act of Congress, going to stop this technology’s advance – especially by criminal actors backed by rogue states with every reason to ignore the United States’ entreaties?
Wrong direction
Pair this with the U.S. government’s recent hostility toward crypto, manifest in the Securities and Exchange Commission’s string of actions against industry leaders and in the sanctions against the open-source Tornado Cash protocol, and a worrisome convergence emerges. That crypto companies are now leaving U.S. shores is more than a threat to the digital asset industry. It’s a blow against the very form of open-source innovation that’s needed to avoid AI’s dangerous capture by self-serving centralized interests.
For all the losses suffered by token speculators recently, the waves of money that chased those riches funded some of the biggest leaps in cryptography of all time. Zero-knowledge proofs, for example, which are likely to play a role in how we protect sensitive information from ubiquitous snooping by AI, have advanced by orders of magnitude beyond where they stood in the pre-crypto era.
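To see why that matters, here’s a minimal sketch of the zero-knowledge idea: a Schnorr-style proof, in Python, that you know a secret without ever revealing it. The parameters are toy values I’ve chosen for illustration; real systems use vastly larger numbers and audited libraries:

```python
import hashlib
import secrets

# Toy parameters, far too small to be secure: p = 2q + 1, with g
# generating the order-q subgroup of integers modulo p.
p, q, g = 2039, 1019, 4

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x where y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)  # one-time secret nonce
    t = pow(g, k, p)          # commitment to the nonce
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q  # Fiat-Shamir challenge
    s = (k + c * x) % q       # response; x stays masked by the random k
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check the proof: g^s must equal t * y^c (mod p)."""
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = 777                   # stand-in for, say, a key guarding personal data
print(verify(*prove(secret)))  # True: verified without the secret leaving the prover
```

The verifier learns that the prover holds the secret, and nothing else; that is the property that could let AI systems check claims about us without hoovering up the underlying data.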
There’s a wisdom-of-the-crowd advantage, too, that comes from crypto’s permissionless innovation ethos. Non-conforming fringe ideas tend to bubble up more easily than those handed down from on high by corporate leadership. OpenAI’s innovation structure is very different. Sure, it figured out how to tap into the internet’s massive array of data and how to train an incredibly effective LLM on it. But having abandoned its open-source, nonprofit status, it is now a closed, black-box operator, beholden to the profit-maximizing demands of its new corporate investor.
We have a choice: Do we want AI to be captured by the same concentrated business models that took hold in Web2? Or is the decentralized ownership vision of Web3 the safer bet? I know which one I’d pick.