
The Pentagon-Anthropic dispute is a test of control

Should private companies be able to set boundaries around the AI systems we integrate into our lives?
The writer is a senior fellow at the Foundation for American Innovation and was lead staff writer of the Trump administration’s AI Action Plan

On March 4, the US Department of Defense made an unprecedented move against an American company: designating the frontier AI start-up Anthropic a “supply chain risk”. Typically, this designation is applied to technology from foreign-adversary countries. In this instance, it was invoked over a contract dispute.

The conflict, which was largely blocked by a judge in California last week, centred on the question of where control over AI should rest. Neither side had the answer quite right.

Trump administration officials sought to renegotiate the terms of the Pentagon’s contract to use Anthropic’s Claude — the only large language model certified for use in classified US military contexts — not because they intended to violate the company’s red line on lethal autonomous weapons and mass surveillance, they say, but because they believe only US law should limit the military’s use of technology.

The principle is reasonable enough. But, as Judge Rita Lin stated, the proposed punishment of Anthropic was “arbitrary and capricious”. The correct solution would have been to cancel the contract and pass laws concerning the government’s use of AI systems.

The ruling is not a final decision and will probably be appealed by the Trump administration. But the issue raises a broader set of challenges that governments and citizens will grapple with for decades to come: where, precisely, should the locus of control over powerful AI systems rest? Should private companies be able to set ethical boundaries for the AI systems that may one day underpin our lives?

Some liken advanced AI to nuclear weapons and conclude that no technology so powerful should rest in private hands. But there are crucial differences between the two. Early iterations of the atomic bomb did not provide the consumer and commercial benefits that many derive from today’s AI.

The notion of a government passing laws that dictate the moral, ethical and philosophical values of AI systems therefore appears as a stark violation of the principle of free speech that underlies democratic nations. The prospect of nationalisation of AI labs — the logical endpoint of the “nuclear weapons” analogy — seems like a profound and radical act of tyranny.

But herein lies the central challenge of AI governance: the “nuclear weapons” analogists may not be correct, but they are right to be concerned. Advanced AI systems really do pose serious risks to national security. Frontier models from the biggest US AI companies are classified, by the companies’ own admission, as posing high risk of enabling cyber attacks and assisting in the creation of bioweapons, for example. And as the US defence department’s own usage makes clear, AI — not some future version of the technology but the systems we have today — can assist in creating lethal outcomes.

The good news is that advanced capitalist societies have dealt with this sort of challenge before. Our civilisations rest upon successive generations of foundational technologies: the printing press, banking, the automobile, electricity and the computer itself. All are technologies without which it is hard to imagine a modern military, and thus are essential for national security. Yet none, at least in the US, has been nationalised. Instead, these technologies and industries are overseen by political, legal, regulatory and technical institutions — often a hybrid of public and private bodies.

Erecting something similar for AI is perilous. The Pentagon’s dispute with Anthropic shows that even an administration that brands itself as pro-AI can easily veer into regulatory over-reach. The chances of erring in a way that stifles innovation are high. Time, in this AI race, is a resource even more scarce than computing power.

