Opinion | AI & Governance
America's AI Debate Is Missing the Point
Honestly, I'm not sure the following framing is airtight; it may collapse under scrutiny from people who have thought about this longer than I have. But I keep circling back to the same conclusion.
Most debates about AI in the United States fixate on deep pockets and long zippers: speed, scale, investors, and competition. Who has the best models, the biggest chips, the most impressive benchmarks, or the strongest geopolitical position? That framing misses the point. The most consequential impact of AI lies not in model capability but in how algorithmic systems are already reshaping governance, labor, public narratives, and civic trust, often without democratic accountability.
Have you ever sat in a meeting where performance metrics were obsessively tuned while questions about downstream social impact were quietly deferred? That gap between optimization and responsibility is what I see in today's AI debates. It is a framing problem that developers and designers in tech face in practice, not just in theory, and it tends to surface after deployment, before anyone has clarified who is accountable if the system fails.
Across panels on generative AI, inequality, creativity, climate, and the future of work (often in windowless conference rooms with identical slide decks), a pattern repeats. Big Tech keeps moving fast by aligning with political and military interests, while communities struggle to adapt without losing social cohesion, economic security, or even democratic agency. The result increasingly resembles a kind of tech feudalism, in which platforms consolidate power, extract value, and mediate reality while the public absorbs the risk and disruption. This dynamic is well described in Cory Doctorow's analysis of platform decay and enclosure.
The risks are not hypothetical. Facial recognition systems have been shown, through large-scale evaluations by the National Institute of Standards and Technology (NIST), to misidentify people of color at significantly higher rates. Civil-rights research has further documented how these errors compound when systems are deployed in real-world policing and surveillance contexts.
At scale, such systems enable mass identification and targeting. Meanwhile, algorithmic decision-making increasingly intersects with policing, immigration enforcement, military operations, and welfare systems. Each layer compounds error, opacity, and power asymmetry.
In conflict contexts, investigative reporting has described the use of AI-assisted targeting, mechanized killing, and surveillance systems built on large cloud infrastructure contracts, illustrating how machine inference can be operationalized in military decision-making with limited transparency or public oversight. Large cloud providers now hold long-term contracts with military and security agencies (domestic and foreign) for data storage, computer vision, and automated analysis, raising questions about how AI infrastructure is governed once deployed in conflict or enforcement settings. Some reports further describe AI systems that generate target lists or track individuals' movement patterns with constrained human review, highlighting risks of error propagation and accountability when such systems are integrated into kinetic decision-making pipelines.
I should pause here to name an unease most practitioners will recognize. In practice, decisions get made under time pressure and rationalized afterward. That is where accountability slips: nobody intends harm, yet incentives quietly reward speed over care. Life rarely deploys in neat stages. Neither does governance.
Much of the public discourse frames AI as a race, especially between the United States and China, and comparative research highlights the intensity of that competition.
But competition rhetoric obscures deeper failures. Despite extraordinary innovation capacity, the U.S. has not translated technological leadership into broad social benefit. Job displacement looms as automation accelerates, and research from the MIT Work of the Future Task Force warns that AI could widen economic gaps if governance fails to keep pace.
China's system-wide integration of AI across surveillance, education, healthcare, and social control is rightly criticized. Yet it is at least explicit about its governing logic. By contrast, the United States increasingly wraps similar capabilities in the language of convenience, personalization, market choice, or whatever suits the goals of a handful of tech leaders. Freedom risks becoming a hollow symbol while surveillance expands quietly through contracts, platforms, and procurement pipelines.
This is why the AI debate must shift from capability to accountability. The central questions are civic, not technical. Who governs algorithmic systems that shape public life? Who audits them? Who benefits when AI systems influence elections, labor markets, public discourse, or military decisions? Which political or economic interests fund and steer these systems, and whose values are encoded by default?
Cities and states could lead where federal action lags. Public-interest audits of algorithmic systems, transparent data inventories, and enforceable provenance standards for media and political content are not radical ideas. New York City has already begun experimenting with algorithmic accountability programs. Technical standards such as the Coalition for Content Provenance and Authenticity (C2PA) offer practical mechanisms for tracing and verifying digital media. These approaches contrast with the largely voluntary guidance outlined in the recent U.S. Executive Order on AI.
Innovation should strengthen democratic capacity rather than erode it. Asking whether AI will “disrupt” society is the wrong question. The better one is whether it will reinforce or weaken the institutions that allow societies to govern themselves. That answer will not come from investor pitch decks or celebrity technologists. It depends on incentives, regulation, and whether public institutions retain enough authority to set boundaries.
A society that cannot trust its own memory (its records, images, narratives, and shared facts) cannot govern itself for long. AI systems that increasingly shape those memories undermine accountability at its foundation by making shared facts optional. The future of AI leadership will not be decided solely by faster chips or larger models, but by whether technological power is aligned with civic responsibility.
If the United States wants to lead, it must stop treating governance as a bottleneck and start treating it as infrastructure. Making systems smarter is fast becoming an obnoxious cluster headache of a question: how smart is smart, and what for, anyway? The real test of AI will be whether it makes societies smart enough to be more just, resilient, and capable of self-direction.