American AI firms try to poke holes in disruptive DeepSeek

SAN FRANCISCO, Jan 28 – Developers at leading U.S. AI firms are praising the DeepSeek AI models that have leapt into prominence while also trying to poke holes in the notion that their multibillion-dollar technology has been bested by a Chinese newcomer’s low-cost alternative.

Chinese startup DeepSeek on Monday sparked a stock selloff and its free AI assistant overtook OpenAI’s ChatGPT atop Apple’s (AAPL.O) App Store in the U.S., harnessing a model it said it trained on Nvidia’s (NVDA.O) lower-capability H800 processor chips using under $6 million.

As worries about competition reverberated across the U.S. stock market, some AI experts applauded DeepSeek’s strong team and up-to-date research but remained unfazed by the development, said people familiar with the thinking at four of the leading AI labs, who declined to be identified as they were not authorized to speak on the record.

OpenAI CEO Sam Altman wrote on X that R1, one of several models DeepSeek released in recent weeks, “is an impressive model, particularly around what they’re able to deliver for the price.” Nvidia said in a statement DeepSeek’s achievement proved the need for more of its chips.

Software maker Snowflake (SNOW.N) decided Monday to add DeepSeek models to its AI model marketplace after receiving a flurry of customer inquiries.

With employees also calling DeepSeek’s models “amazing,” the U.S. software seller weighed the potential risks of hosting AI technology developed in China before ultimately deciding to offer it to clients, said Christian Kleinerman, Snowflake’s executive vice president of product.

“We decided that as long as we are clear to customers, we see no issues supporting it,” he said.

Meanwhile, U.S. AI developers are hurrying to analyze DeepSeek’s V3 model. DeepSeek in December published a research paper accompanying the model, the basis of its popular app, but many questions, such as total development costs, are not answered in the document.

China has now closed the gap with state-of-the-art AI models developed in the U.S. from 18 months to six months, one person said. Yet with DeepSeek’s free release strategy drumming up such excitement, the firm may soon find itself without enough chips to meet demand, this person predicted.

DeepSeek’s strides did not flow solely from a $6 million shoestring budget, a tiny sum compared with the $250 billion that analysts estimate big U.S. cloud companies will spend this year on AI infrastructure. The research paper noted that this cost referred specifically to chip usage on its final training run, not the entire cost of development.

The training run is the tip of the iceberg in terms of total cost, executives at two top labs told Reuters. Determining how to design that training run can cost orders of magnitude more, they said.

The paper stated that the training run for V3 was conducted using 2,048 of Nvidia’s H800 chips, which were designed to comply with U.S. export controls released in 2022, rules that experts told Reuters would barely slow China’s AI progress.

Sources at two AI labs said they expected earlier stages of development to have relied on a much larger quantity of chips. One of the people said such an investment could have cost north of $1 billion.

Some American AI leaders lauded DeepSeek’s decision to launch its models as open source, which means other companies or individuals are free to use or change them.

“DeepSeek R1 is one of the most amazing and impressive breakthroughs I’ve ever seen – and as open source, a profound gift to the world,” venture capitalist Marc Andreessen said in a post on X on Sunday.

The acclaim garnered by DeepSeek’s models underscores the viability of open source AI technology as an alternative to costly and tightly controlled technology such as OpenAI’s ChatGPT, industry watchers said.

Wall Street’s most valuable companies have surged in recent years on expectations that only they had access to the vast capital and computing power necessary to develop and scale emerging AI technology. Those assumptions will come under further scrutiny this week and next, when many American tech giants report quarterly earnings.


Reporting by Anna Tong, Jeffrey Dastin and Kenrick Cai in San Francisco and Katie Paul in New York; Editing by Noel Randewich and Christopher Cushing


Kenrick Cai is a correspondent for Reuters based in San Francisco. He covers Google, its parent company Alphabet and artificial intelligence. Cai joined Reuters in 2024. He previously worked at Forbes magazine, where he was a staff writer covering venture capital and startups. He received a Best in Business award from the Society for Advancing Business Editing and Writing in 2023. He is a graduate of Duke University.

Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked at technology startups as a product manager and at Google where she worked in user insights and helped run a call center. Tong graduated from Harvard University.

Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA Award in 2022.
