Opinion | Artificial Intelligence

Big Tech is moving on from the DeepSeek shock

The industry is turning to the packaging of AI technologies rather than focusing on model training

Remember when China’s DeepSeek sent tremors through the US artificial intelligence industry and stunned Wall Street? That was last month. To listen to AI executives and investors now, you might think the world has moved on. Nvidia, the hardest hit, has recovered more than half the $630bn it lost.

The speed with which equilibrium has returned owes a lot to the assertion by the biggest US tech companies that they will spend even more than expected on AI infrastructure this year. But it also shows how quickly the investment case for AI has been rewritten. The question is how much this reflects a genuine change in outlook, and how much is just industry spin.

The case for buying Nvidia stock once rested on claims such as those from Anthropic chief executive Dario Amodei, who barely six months ago predicted that the training costs for a cutting-edge large language model would soon reach $100bn. In the wake of DeepSeek, Amodei is still anticipating a huge jump in demand for AI chips — only now it is for the completely different reason that they are needed for more complex tasks such as reasoning, rather than for training models.

No wonder investors are feeling acute whiplash and a greater sense of uncertainty about the sustainability of the AI boom.

The Chinese company’s breakthroughs increased the risk that even the most advanced large language models will quickly be turned into commodities. This came just as model-builders were facing another existential threat: throwing ever-greater amounts of computing power into training no longer produces the advances it once did.

OpenAI chief executive Sam Altman signalled the obvious strategic response in a post on X this week. No longer will OpenAI release its large language models as standalone products. Rather, they will be packaged together with its other technologies, such as “reasoning”, into more complete systems. From now on, he said, the AI will “just work”, whatever task a user throws at it. 

This is a familiar strategy in the tech industry. Moving “up the stack” — building more valuable technologies on the foundation of earlier products as they are commoditised — has long been seen as the way to defend prices and profit margins. If the cost of components that once provided a good margin collapses, so much the better: it brings down the overall cost and leads to faster uptake.

This packaging of AI technologies has important implications for the direction of the whole industry. One is that, as companies such as OpenAI build more complete systems, a gap will open up at the bottom of the market for companies like DeepSeek.

Anyone wanting to build their own AI-powered software will turn to large language models such as Meta’s Llama and DeepSeek’s R1 — technologies released under a form of open source that makes them freely available and cheap. This should open the way for many more tech companies to join in the AI boom. But former Google chief executive Eric Schmidt warned this week it could pose a challenge to the west, making the Chinese company an important global platform in AI.

Another implication is that AI infrastructure suppliers need to quickly adjust their offerings — and their sales pitches. Spending will no longer be so heavily skewed towards big clusters of chips for training ever-larger models.

Nvidia, which soared in value on the boom in training, still has the widest array of silicon for AI and will be working hard to optimise its chips for the many different workloads that will emerge as the market shifts. But the move beyond intensive training should lead to a wider range of technology suppliers fighting over a much more disparate market.

A third implication is that the continuation of the AI boom will depend much more on the actual usage of AI, not just the massive upfront spending that has gone into building models and infrastructure. Much of the computing power that goes into reasoning is a variable cost incurred after a prompt has been entered, rather than the kind of one-off fixed cost that goes into training. The AI companies need to show they can provide real value to end customers.

None of these forces are new in an industry that was already under pressure to move faster in commercialising its technology. But the DeepSeek shock has just turned up the pressure.

richard.waters@ft.com

