


Why AI Hardware, Not Just Bigger Models, Will Define The Future Of AI
In 2024, less than 10% of total AI investment went toward infrastructure, according to Bain & Company. The lion’s share went to foundation models and synthetic content tools — technologies that are easier to build, faster to demo and more media-friendly.
But experts say that lopsided bet is beginning to show its limits.
One of those experts is Zhou Shaofeng, the founder and chairman of Xinghan Laser, a Shenzhen-based deep tech company that builds everything from semiconductor laser chips to industrial LiDAR systems. Shaofeng believes the next phase of AI won't be decided by who has the biggest model, but by who can bring intelligence into physical systems with precision, durability and real-world feedback. Why? In his words, "we're getting too obsessed with the models."
The Model Obsession Is Blinding Us
For the last three years, AI conversations have mostly orbited around models: who trained the biggest, who released the flashiest demo, who hit 100 million users fastest. But no matter how sophisticated a model is, it won't run a surgical robot or an autonomous car without hardened sensors, embedded compute and high-fidelity perception systems.
“Real intelligence isn’t just about prediction,” Shaofeng told me in an interview. “It’s about perception, interaction and action. And that all starts at the hardware level.”
This kind of intelligence — embedded in machines that must operate in unpredictable, high-stakes environments — requires far more than clever code. It demands hardware that can process data in real time, respond to feedback and withstand harsh conditions. And yet, that’s precisely the layer being overlooked in today’s AI discourse, according to Shaofeng.
The Infrastructure Funding Gap
Yet funding tells a different story. As the Bain & Company report cited above shows, investments in AI infrastructure, including hardware, edge systems and embedded AI, accounted for less than 10% of total AI capital allocation in 2024, while foundation models and synthetic content tools continued to absorb the lion's share.
According to Shaofeng, deep tech often fails the venture capital test, and it's easy to see why: software is faster to build, easier to demo and simpler to pivot. Deep tech demands patience: long R&D cycles, high technical uncertainty and few short-term wins, a timeline most investors are unwilling to sit through.
That leaves governments and a handful of tech giants to pick up the slack. Companies like Tesla and NVIDIA are building vertically integrated AI stacks not because it’s cheap, but because it’s necessary.
Real Economic Limits
It's not just investors who are skittish. Deploying AI in physical environments, including factories, vehicles and hospitals, comes with real costs. And Shaofeng points out that these aren't theoretical barriers but practical problems the industry is still working to solve.
“The real bottlenecks aren’t technical,” he noted. “They’re economic. Hardware like sensors or laser modules isn’t cheap. Integration, testing, compliance — those take time. The question is no longer just ‘Can it work?’ but ‘Can it work fast enough, reliably enough, and with ROI that justifies the investment?’”
His argument is that while the cost of embedding intelligence into a factory floor is often far higher than shipping a new SaaS product, the payoff, when done right, is far greater and far more defensible.
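To make that ROI question concrete, consider a back-of-the-envelope payback calculation. Every figure below is hypothetical, chosen only to illustrate the kind of arithmetic Shaofeng says now decides these projects: an up-front hardware and integration cost set against a steady monthly saving.

```python
# Back-of-the-envelope payback period for a hardware retrofit.
# All numbers are hypothetical, for illustration only.

def payback_months(capex: float, monthly_savings: float) -> float:
    """Months until cumulative savings cover the up-front investment."""
    return capex / monthly_savings

hardware_and_integration = 2_000_000   # sensors, laser modules, testing, compliance
savings_per_month = 80_000             # reduced scrap, downtime and rework

print(payback_months(hardware_and_integration, savings_per_month))  # 25.0
```

At roughly two years to break even, such a project is slower than shipping a SaaS feature but, as Shaofeng argues, the resulting advantage is far harder for competitors to copy.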
Betting On Hardware
Shaofeng's company, Xinghan Laser, is one of the few building AI directly into high-performance optical systems. From semiconductor laser chips to precision LiDAR platforms, the team is designing systems that don't just automate processes but also adapt to them. "This is more than just automation," he said. "It's about building systems that can learn from the process itself and adjust in real time."
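The closed-loop idea Shaofeng describes can be sketched in a few lines. This is an illustrative toy, not Xinghan Laser's actual control stack: a hypothetical controller reads a sensor, compares the reading to a target and nudges an actuator setting each cycle, which is the "adjust in real time" loop in miniature.

```python
# Toy proportional feedback loop: the system measures its own output and
# corrects itself each cycle. All names and numbers are hypothetical.

def control_step(setpoint: float, measured: float, power: float,
                 gain: float = 0.5) -> float:
    """Return an updated power setting based on the measured error."""
    error = setpoint - measured
    return power + gain * error

def run_loop(setpoint: float, power: float, cycles: int) -> float:
    for _ in range(cycles):
        measured = 0.9 * power  # stand-in for a real optical sensor reading
        power = control_step(setpoint, measured, power)
    return power

# Starting far from target, the loop settles where the measured output
# matches the setpoint (power converges toward setpoint / 0.9).
final_power = run_loop(setpoint=100.0, power=50.0, cycles=50)
print(round(0.9 * final_power, 1))  # measured output ≈ 100.0
```

Real systems replace the fixed gain with learned or adaptive parameters, but the structure, sense, compare, act, is the same; it is also why the sensing hardware matters as much as the model driving it.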
Shaofeng isn't alone in betting big on hardware. Reuters recently reported that, during a U.S. Senate hearing, OpenAI CEO Sam Altman stressed the urgent need to invest in AI infrastructure, particularly data centers and energy systems, to keep the U.S. at the forefront of global AI leadership. Naveen Verma, a Princeton University professor leading a project to develop AI chips that run modern workloads on significantly less energy, has likewise noted that current AI chips face barriers of size, efficiency and scalability.
Undoubtedly, the big bet is now on AI infrastructure and hardware. According to a McKinsey State of AI report, the industries seeing the greatest financial ROI from AI aren't creative content or chatbots; they're manufacturing, logistics and supply chain. In other words: real-world systems. Systems that require hardware.
A Smarter Future
This is where deep tech fits, not as a rival to model-centric AI, but as its enabler. “Large models and hardware innovation aren’t opposing forces,” Shaofeng said. “They’re mutually reinforcing. One pushes the boundaries of intelligence; the other brings that intelligence to life.” Ignore one and the whole system cracks.
A robot that works perfectly in simulation but fails in the field isn’t just ineffective — it’s dangerous. Whether it’s a surgical tool that misjudges depth or a drone that can’t navigate wind shear, the consequences are not theoretical. They’re real and costly.
Less Hype, More Hardware
To create that future, Shaofeng said, we need to rebalance the equation. AI's future, he noted, won't just be written in Python code; it will be soldered into circuits, tuned in optics and tested in physical space.
"Scaling AI isn't just about compute and data," he said. "It's about infrastructure, integration and real-world relevance. We see a future where AI doesn't just analyze the world; it physically engages with it."
In that future, deep tech isn’t a footnote to the AI story. It’s the foundation.