
Will AI require new data center infrastructure?

Apr 12, 2023, 11:28 AM
AI cloud computing data center

The data center infrastructure built out in recent years to support the explosive growth of cloud computing, video streaming, and 5G networks will not be sufficient to support the next level of digital transformation, which will begin in earnest with the widespread adoption of artificial intelligence.


In fact, the digital infrastructure for artificial intelligence will require a different cloud computing framework, one that will redefine current data center networks, including where some data center clusters are located and the specific functions of those facilities.

In November, Amazon Web Services, the global leader in cloud computing, formed a partnership with Stability AI, and Google reportedly has a ChatGPT-style system called LaMDA. The search-engine giant has reportedly brought in founders Larry Page and Sergey Brin to guide its release.

Last month, Meta announced that it would pause data center expansions around the world and reconfigure those server farms to meet the data-processing needs of artificial intelligence.

The demand for data processing on artificial intelligence platforms is enormous. OpenAI, the creator of ChatGPT, launched the platform last November; without riding on Microsoft's Azure cloud platform, it could not keep running.

ChatGPT might explain this better itself, but it turns out that the microprocessing "brain" of an AI platform (in this case, the data center infrastructure supporting this digital transformation) will, like the human brain, be organized into two hemispheres, or lobes. And one lobe will need to be much stronger than the other.

One hemisphere of the AI digital infrastructure will serve so-called "training": supplying the computing power needed to process as many as 300 billion data points and create the word salad that ChatGPT generates. In ChatGPT's case, that is every pixel on the internet since Al Gore invented it.

The training lobe ingests data points and reorganizes them in a model, much like the brain's synapses. It is an iterative process in which the digital entity keeps refining its "understanding," essentially teaching itself to absorb a world of information and convey the essence of that knowledge in precise human grammar.
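The iterative refinement described above can be sketched with a toy gradient-descent loop. This is an illustrative assumption, not how ChatGPT is actually implemented; real platforms run a structurally similar loop across billions of parameters on GPU clusters.

```python
# Toy sketch of the "training" loop: repeatedly nudge model parameters
# so predictions better fit the data. Hypothetical 1-D linear model.

def train_step(w, b, xs, ys, lr=0.01):
    """One gradient-descent step for the model y = w*x + b (squared loss)."""
    n = len(xs)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return w - lr * grad_w, b - lr * grad_b

def train(xs, ys, epochs=5000):
    w, b = 0.0, 0.0
    for _ in range(epochs):  # the iterative "refining its understanding"
        w, b = train_step(w, b, xs, ys)
    return w, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # generated by y = 2x + 1
w, b = train(xs, ys)
print(w, b)                  # converges toward w = 2, b = 1
```

Each pass over the data adjusts the parameters slightly; the compute cost scales with data volume and parameter count, which is why the training lobe is the power-hungry one.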

The training lobe requires massive computing power and state-of-the-art GPU semiconductors, but it needs little of the connectivity currently required in the data center clusters that support cloud computing services and 5G networks.

Building the "training" infrastructure for each AI platform will create huge demands for power, requiring data centers to be sited near gigawatts of renewable energy, fitted with new liquid-cooling systems, and equipped with redesigned backup power and generator systems, among other new design features.
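A back-of-envelope calculation shows why training campuses chase that much power. Every figure below is an illustrative assumption (accelerator count, per-GPU draw, cooling overhead, run length), not a measurement from any real deployment.

```python
# Rough sketch of a large training run's power footprint.
# All inputs are assumed, round numbers for illustration only.

gpus = 10_000            # assumed accelerator count for one large training run
watts_per_gpu = 700      # assumed draw per high-end GPU, in watts
pue = 1.4                # assumed power usage effectiveness (cooling/overhead)
days = 30                # assumed run length

facility_mw = gpus * watts_per_gpu * pue / 1_000_000   # sustained megawatts
energy_mwh = facility_mw * 24 * days                   # megawatt-hours used

print(f"{facility_mw:.1f} MW sustained, {energy_mwh:,.0f} MWh over the run")
# -> 9.8 MW sustained, 7,056 MWh over the run
```

Even this modest hypothetical run draws roughly 10 MW continuously; scale the GPU count up and gigawatt-class siting near renewable generation stops sounding like hyperbole.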

The other hemisphere of the AI platform's brain, the higher-functioning digital infrastructure known as "inference" mode, supports interactive "generative" platforms: within seconds of your entering a question or instruction, the query is processed against the modeled database and a response comes back in convincing human syntax.
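The split between the two lobes can be made concrete with a toy next-word predictor: "training" happens once and the parameters are frozen; "inference" is then just fast lookups against the stored model. This is a hypothetical miniature, not the architecture of any real chatbot.

```python
# Toy illustration of training vs. inference: a bigram next-word model.
from collections import defaultdict

def build_model(corpus):
    """'Training' lobe: count word bigrams once, then freeze the counts."""
    model = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def infer(model, prompt_word, length=3):
    """'Inference' lobe: no learning happens here, only fast lookups."""
    out = [prompt_word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(max(followers, key=followers.get))  # most frequent next word
    return " ".join(out)

corpus = "data centers power ai and ai needs data centers near power"
model = build_model(corpus)
print(infer(model, "data"))  # -> data centers power ai
```

Training is the expensive batch job; inference is the cheap, latency-sensitive lookup, which is why the inference lobe cares about connectivity while the training lobe cares about raw compute and power.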

Today's hyperconnected data center networks, such as Northern Virginia's "Data Center Alley," North America's largest data center cluster and home to the nation's most extensive fiber-optic network, can accommodate the next-level connectivity needs of the AI brain's "inference" lobe. But these facilities will also need upgrades to meet the massive processing capacity required, and they will need to be closer to substations.

The largest cloud computing providers are offering data processing power to artificial intelligence startups hungry for it, because those startups could become long-term customers. One venture capitalist investing in AI likened the arrangement to a "proxy war" among superpowers competing for AI supremacy.

There is a proxy war going on among the big cloud computing companies. They are really the only ones capable of building truly large AI platforms with enormous numbers of parameters.

Emerging artificial intelligence chatbots are "terribly good," but they are not sentient beings, and they cannot match the millions of years of evolution that produced the billions of precisely sequenced synapses firing within the same millisecond in the human frontal lobe.

The above is the detailed content of Will AI require new data center infrastructure?. For more information, please follow other related articles on the PHP Chinese website!


