Kneron launches innovative edge AI chip KL730
Kneron recently released a new edge AI chip, the KL730, aimed primarily at smart driving and private GPT applications. The chip integrates an automotive-grade NPU and an image signal processor (ISP), and delivers secure, low-power AI capabilities across application scenarios such as edge servers, smart homes, and automotive driver-assistance systems.
The KL730 is Kneron's latest chip, designed from the outset for AI workloads, with technological advances in both energy efficiency and security. It offers a multi-channel interface that can seamlessly ingest a variety of digital signals, including images, video, audio, and millimeter-wave radar, supporting the development of AI applications across many industries.
Kneron founder and CEO Liu Juncheng said the KL730 will be a pioneer in edge AI because it uses a new dedicated AI chip architecture that differs fundamentally from previous chips; simply repurposing adjacent technologies, such as graphics-oriented GPUs, is not enough. "The KL730 delivers unprecedented efficiency and supports frameworks such as the Transformer, so we can provide powerful AI capabilities to every industry while ensuring data security and privacy protection, fully realizing the potential of artificial intelligence," he said.
Kneron is a world-leading maker of edge AI computing solutions, founded in 2015 and headquartered in San Diego, USA. The company developed a highly energy-efficient, lightweight, reconfigurable neural network architecture that addresses the three main obstacles facing edge AI devices: latency, security, and cost, enabling ubiquitous AI. Kneron has raised more than US$140 million to date, with backing from investors including Horizons Ventures, Sequoia Capital, Qualcomm, Hon Hai, Lite-On Technology, Winbond Electronics, Macronix, ADATA Technology, and Qianke Technology, among others.
