


The Chinese Academy of Sciences has announced a major breakthrough that challenges the traditional dominance of American AI chips and opens up a new development path for China's chip industry.
In the technological competition of the 21st century, semiconductor technology plays a vital role in determining a country's economic strength and global competitive position. The United States has long been the global leader in semiconductors and has worked hard to limit China's development in this field. China, however, has not been discouraged; instead, it has demonstrated strong technological strength and potential. The Chinese Academy of Sciences (CAS) has made a major breakthrough in this area, successfully developing an ultra-highly integrated optical convolution processor. The achievement has attracted widespread attention and praise around the world and poses a direct challenge to the United States' dominance in the field of AI chips.
What is an optical convolution processor? What are its advantages? Why could it overturn the myth of American AI chips? Let's take a brief look.
An optical convolution processor is an innovative chip that uses optical signals to perform calculations. In deep learning and artificial intelligence, convolution plays a core role, processing large amounts of image, voice, video, and other data. Compared with traditional electronic computing chips, optical convolution processors offer faster computing speeds and lower energy consumption. The ultra-highly integrated optical convolution processor combines optical computing technology with a convolution processor, bringing unprecedented computing potential.
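Concretely, the convolution that such a processor accelerates is a sliding-window multiply-accumulate over an input array. A minimal NumPy sketch of that operation (in the un-flipped, cross-correlation form commonly used by deep-learning frameworks; the image and kernel values here are purely illustrative and are not from the CAS chip):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution with 'valid' padding: the core multiply-accumulate
    operation that GPU and optical convolution processors both accelerate."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Element-wise product of the window with the kernel, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative input: a 5x5 linear ramp and a 3x3 Laplacian (edge-detection) kernel
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0, -1, 0],
                   [-1, 4, -1],
                   [0, -1, 0]], dtype=float)
result = conv2d(image, kernel)
print(result.shape)  # (3, 3)
```

An optical implementation replaces the inner multiply-accumulate loop with the physical propagation of light, which is what yields the speed and energy advantages described above.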
According to CAS, the ultra-highly integrated optical convolution processor it developed far exceeds NVIDIA's A100 in computing speed, with improvements ranging from 1.5 to 10 times. In addition, the chip consumes less power and has lower production requirements. In artificial intelligence applications, this means faster, more power-efficient, cheaper, and smaller computing.
NVIDIA is one of the world's largest GPU manufacturers and among the most advanced AI chip suppliers. Its A100 chip is widely used in cloud computing, data centers, supercomputers, and other fields, and is one of the most powerful AI chips on the market. The CAS optical convolution processor, however, delivers a strong blow to the A100, because its computing speed is far higher.
What makes the CAS optical convolution processor especially important is that it can be produced without high-end lithography machines. This is a huge advantage for China: the world's most advanced lithography machines are currently made by the Dutch company ASML, and the United States has taken various measures to prevent ASML from selling high-end lithography machines to China, aiming to cut off the lifeline of China's semiconductor development. By successfully developing the optical convolution processor, CAS has demonstrated China's capacity for independent innovation in next-generation semiconductor technology and opened a new path for China to shed its dependence on external technology.
This breakthrough is the result of China's long-term efforts in semiconductor technology. In the face of the U.S. technological blockade and enormous research and development pressure, CAS and its research teams persisted in in-depth research and explored new technological paths until they finally achieved this major advance. The result is not only forward-looking technology but also of great practical value: it will promote the development of artificial intelligence in China and contribute to the progress of global technology and human welfare.
The CAS optical convolution processor confirms that China's capacity for technological innovation is steadily rising. The processor is one representative example among many research achievements, a sign that the gap between China and the world's top level is gradually narrowing. China continues to make research breakthroughs in other fields as well: Harbin Institute of Technology has reported major breakthroughs in three core technologies of EUV lithography machines, Huawei has announced a patent for superconducting quantum technology, CAS has developed 3,200 MPa super steel, and SMIC's monthly production capacity has exceeded 170 million chips. All of these achievements show that China continues to explore, break through, and surpass in science and technology.
Looking back at the development of China's semiconductor industry, the road has been full of challenges and difficulties. At first, China merely followed technological trends; now it is becoming a technological leader. Behind this growth is the refusal of the Chinese people to surrender to the suppression and sanctions imposed by the United States and other Western countries: "the more the sanctions, the stronger we become." Facing the challenge from the United States, China is ready for a new round of technological competition. China no longer regards any "myth" as unattainable; it will continue to create "myths" of its own in science and technology and make greater contributions to the country's long-term development and the well-being of all mankind. China has the confidence, ability, and determination to achieve this goal.
The above is the detailed content of "The Chinese Academy of Sciences has made a major breakthrough, breaking the traditional advantages of American AI chips and opening up a new development path for China's chip industry." For more information, please follow other related articles on the PHP Chinese website!

