The intersection of artificial intelligence and public policy has reached a critical juncture as former presidential candidate Andrew Yang renews his warnings about the impending automation of the American workforce. Speaking at a recent forum on tech ethics, Yang emphasized that the speed of generative AI adoption is outpacing the government’s ability to adapt the social safety net. His signature proposal of universal basic income has found a second life in these discussions, as labor experts observe significant disruptions in white-collar sectors previously thought to be immune to technological displacement.
Yang argues that the current wave of innovation is fundamentally different from the industrial revolutions of the past. While previous shifts replaced physical labor with machines, the current era targets cognitive tasks, threatening millions of roles in data entry, legal research, and creative services. The former candidate noted that the lack of preparation among federal lawmakers could lead to a period of unprecedented social instability if the gains from AI are not redistributed to the workers whose roles are being phased out.
While Yang focuses on the domestic labor market, a separate battle is brewing in the defense sector involving one of the industry’s most prominent startups. Anthropic, the San Francisco-based AI firm known for its safety research and “constitutional AI” training approach, is reportedly caught in a tug-of-war with the Pentagon over the military application of its large language models. The Department of Defense has expressed significant interest in integrating advanced AI into tactical decision-making and logistics, but Anthropic has maintained a cautious stance, citing ethical concerns and the potential for unintended escalation in autonomous systems.
This tension highlights a growing rift between Silicon Valley’s safety-first culture and the strategic imperatives of national security. The Pentagon is concerned that overly restrictive safety protocols could allow adversaries to gain a technological edge. Conversely, Anthropic leadership has voiced concerns that deploying unproven models in high-stakes military environments could lead to catastrophic errors. The outcome of these negotiations will likely set a precedent for how private AI companies interact with the military-industrial complex in the coming decade.
On the local political front, the conversation around technology and urban management is heating up in New York City. Zohran Mamdani, who has emerged as a vocal challenger in the upcoming mayoral race, is centering his platform on a critique of how the current administration utilizes tech resources. Mamdani has been particularly critical of the city’s reliance on expensive, automated solutions for public services, arguing that these investments often come at the expense of human-centric social programs.
The assemblyman’s entrance into the mayoral fray introduces a populist tech skepticism that mirrors some of the broader national anxieties voiced by Yang. Mamdani has proposed a more rigorous auditing process for algorithmic tools used by the NYPD and the Department of Social Services, claiming that without transparency, these systems risk exacerbating existing systemic biases. His campaign seeks to redefine the “smart city” concept as one that prioritizes digital literacy and public ownership over private vendor contracts.
These three developments, Yang’s labor warnings, Anthropic’s defense standoff, and Mamdani’s local policy shift, illustrate a world struggling to catch up with its own innovations. Whether it is the federal government’s response to job loss or a city’s approach to administrative automation, the common thread is an urgent need for a new social contract that accounts for the power of algorithms. As AI continues to permeate every facet of public and private life, the voices of critics and cautious developers are becoming just as influential as those of the engineers building the technology itself.
