{"text":[[{"start":6.64,"text":"The writer is a senior fellow at the Foundation for American Innovation and was lead staff writer of the Trump administration’s AI Action Plan "}],[{"start":17.53,"text":"On March 4, the US Department of Defense took an unprecedented step against an American company: designating the frontier AI start-up Anthropic a “supply chain risk”. Typically, this designation is applied to technology from foreign-adversary countries. In this instance, it was invoked over a contract dispute."}],[{"start":40.13,"text":"The dispute, in which a judge in California last week largely blocked the designation, centred on the question of where control over AI should rest. Neither side had the answer quite right."}],[{"start":52.38,"text":"Trump administration officials sought to renegotiate the terms of the Pentagon’s contract to use Anthropic’s Claude — the only large language model certified for use in classified US military contexts — not because they intended to violate the company’s red line on lethal autonomous weapons and mass surveillance, they say, but because they believe only US law should limit the military’s use of technology."}],[{"start":78.92,"text":"The principle is reasonable enough. But, as Judge Rita Lin stated, the proposed punishment of Anthropic was “arbitrary and capricious”. The correct solution would have been to cancel the contract and pass laws concerning the government’s use of AI systems. "}],[{"start":96.76,"text":"The ruling is not a final decision and will probably be appealed by the Trump administration. But the issue raises a broader set of challenges that governments and citizens will grapple with for decades to come: where, precisely, should the locus of control over powerful AI systems rest? 
Should private companies be able to set ethical boundaries for the AI systems that may one day underpin our lives?"}],[{"start":125.86,"text":"Some liken advanced AI to nuclear weapons and conclude that no technology so powerful should rest in private hands. But there are crucial differences between the two. Early iterations of the atomic bomb did not provide the consumer and commercial benefits that many derive from today’s AI. "}],[{"start":145.3,"text":"The notion of a government passing laws that dictate the moral, ethical and philosophical values of AI systems therefore appears to be a stark violation of the principle of free speech that underlies democratic nations. The prospect of nationalisation of AI labs — which is the logical endpoint of the “nuclear weapons” analogy — seems like a profound and radical act of tyranny."}],[{"start":173.28,"text":"But herein lies the central challenge of AI governance: the “nuclear weapons” analogists may not be correct but they are right to be concerned. Advanced AI systems really do pose serious risks to national security. Frontier models from the biggest US AI companies are classified, by the companies’ own admission, as posing a high risk of enabling cyber attacks and assisting in the creation of bioweapons, for example. And as the US defence department’s own usage makes clear, AI — not some future version of the technology but the systems we have today — can assist in creating lethal outcomes."}],[{"start":215.91,"text":"The good news is that advanced capitalist societies have dealt with this sort of challenge before. Our civilisations rest upon successive generations of foundational technologies — the printing press, banking, the automobile, electricity and the computer itself. All of these are technologies without which it is hard to imagine a modern military, and thus are essential for national security. Yet none, at least in the US, have been nationalised. 
Instead, the technologies and industries are overseen by political, legal, regulatory and technical institutions — often a hybrid of public and private bodies."}],[{"start":258.96,"text":"Erecting something similar for AI is perilous. The Pentagon’s dispute with Anthropic shows that even an administration that brands itself as pro-AI can easily veer into regulatory over-reach. The chances of erring in a way that stifles innovation are high. Time, in this AI race, is a resource even more scarce than computing power. "}]],"url":"https://audio.ftcn.net.cn/album/a_1774856645_6864.mp3"}