
The black box has been opened! A hands-on Transformer visual explanation tool that runs GPT-2 locally and performs real-time inference

Aug 11, 2024, 04:03 PM

It’s 2024. Is there anyone who still doesn’t understand how the Transformer works? Come and try this interactive tool.


In 2017, Google proposed the Transformer in the paper "Attention Is All You Need", a major breakthrough in the field of deep learning. The paper has now been cited nearly 130,000 times, and every subsequent model in the GPT family is built on the Transformer architecture, which shows how broad its influence has been.

As a neural network architecture, the Transformer is widely used in tasks ranging from text to vision, and especially in the currently hot field of AI chatbots.


However, for many non-experts the inner workings of the Transformer remain opaque, which hinders both their understanding and their participation. It is therefore especially worthwhile to demystify the architecture. Yet many blogs, video tutorials, and 3D visualizations emphasize mathematical complexity and implementation details, which can confuse beginners, while visualization efforts designed for AI practitioners focus on neuron- and layer-level interpretability and remain challenging for non-experts.

Researchers from the Georgia Institute of Technology and IBM Research therefore developed Transformer Explainer, a web-based, open-source interactive visualization tool that helps non-experts understand both the high-level structure of the Transformer and its low-level mathematical operations, as shown in Figure 1.


Transformer Explainer explains the inner workings of the Transformer through text generation, using a Sankey-diagram visualization design inspired by recent work that treats the Transformer as a dynamical system, emphasizing how input data flows through the model's components. The Sankey diagram effectively illustrates how information is passed through the model and how the input is processed and transformed by Transformer operations.

In terms of content, Transformer Explainer tightly integrates a model overview that summarizes the Transformer's structure and lets users move smoothly between multiple levels of abstraction, visualizing the interplay between low-level mathematical operations and the high-level model structure so that they can fully grasp the complex concepts inside the Transformer.

Functionally, Transformer Explainer not only provides a web-based implementation but also supports real-time inference. Unlike many existing tools that require custom software installation or lack inference capabilities, it integrates a live GPT-2 model that runs locally in the browser on top of a modern front-end framework. Users can experiment interactively with their own input text and observe in real time how the Transformer's internal components and parameters work together to predict the next token.

Transformer Explainer broadens access to modern generative AI techniques without requiring advanced computing resources, installation, or programming skills. GPT-2 was chosen because the model is well known, runs inference quickly, and is architecturally similar to more advanced models such as GPT-3 and GPT-4.


  • Paper address: https://arxiv.org/pdf/2408.04619
  • GitHub address: http://poloclub.github.io/transformer-explainer/
  • Online experience address: https://t.co/jyBlJTMa7m
Since the tool supports your own input, this site also tried the prompt "what a beautiful day".


Transformer Explainer has drawn high praise from many netizens. Some say it is a very cool interactive tool.


Others say they have been waiting for an intuitive tool that explains self-attention and positional encoding, and Transformer Explainer is exactly that; they expect it to be a game changer.


Someone also made a Chinese translation.


Display address: http://llm-viz-cn.iiiai.com/llm

One can’t help but think of Karpathy, another great figure in the popular science world, who has previously produced several in-depth GPT-2 tutorials, including "Hand-writing GPT-2 in pure C: the former OpenAI and Tesla executive's new project goes viral" and "Karpathy's latest four-hour video tutorial: reproduce GPT-2 from scratch, leave it running overnight and it's done". Now that there is also a visualization tool for the Transformer's internal principles, using the two together should make learning even more effective.

Transformer Explainer system design and implementation

Transformer Explainer shows visually how the Transformer-based GPT-2 model, trained for next-token prediction, processes text input and predicts the next token. The front end uses Svelte and D3 for the interactive visualization, and the back end runs the GPT-2 model in the browser using ONNX Runtime and Hugging Face's Transformers library.
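To make the in-browser setup concrete, below is a minimal sketch, not the authors' code, of how a GPT-2 graph exported to ONNX could be queried from the browser with onnxruntime-web; the model file name and the tensor names "input_ids" and "logits" are assumptions made for illustration.

```typescript
// Hypothetical sketch: run a GPT-2 ONNX export in the browser with onnxruntime-web.
// The file name "gpt2.onnx" and the tensor names are illustrative assumptions.
import * as ort from "onnxruntime-web";

async function nextTokenLogits(tokenIds: number[]): Promise<Float32Array> {
  // Load the exported graph (a real app would create the session once and cache it).
  const session = await ort.InferenceSession.create("gpt2.onnx");

  // GPT-2 expects int64 token ids with shape [batch, sequence_length].
  const inputIds = new ort.Tensor(
    "int64",
    BigInt64Array.from(tokenIds.map(BigInt)),
    [1, tokenIds.length]
  );

  const outputs = await session.run({ input_ids: inputIds });

  // Logits come back with shape [batch, sequence_length, vocab_size];
  // only the last position scores the next token.
  const logits = outputs["logits"];
  const [, seqLen, vocabSize] = logits.dims;
  const data = logits.data as Float32Array;
  return data.slice((seqLen - 1) * vocabSize, seqLen * vocabSize);
}
```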

A major challenge in designing Transformer Explainer was managing the complexity of the underlying architecture: showing every detail at once would only distract from the point. To address this, the researchers paid close attention to two key design principles.

First, the researchers reduce complexity through multi-level abstraction. The tool is structured to present information at different levels of abstraction, which avoids information overload: users start from a high-level overview and drill down into details as needed. At the highest level, the tool shows the complete processing flow: user-provided text is received as input (Figure 1A), embedded, processed through multiple Transformer blocks, and the processed data is then used to rank the most likely next-token predictions.

Intermediate operations, such as the computation of the attention matrix (Figure 1C), are collapsed by default so that only the importance of the results is shown; users can choose to expand them and watch the derivation unfold through an animation sequence. The researchers also adopted a consistent visual language, such as stacking attention heads and collapsing repeated Transformer blocks, to help users recognize repeating patterns in the architecture while maintaining an end-to-end view of the data flow.
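For reference, the expandable attention computation boils down to the standard scaled dot-product form. The sketch below is a generic single-head causal version written for teaching purposes, not code taken from Transformer Explainer, and it assumes the Q/K/V projections have already been applied.

```typescript
// Generic single-head causal scaled dot-product attention (teaching sketch).
// q, k, v are [seqLen][headDim] matrices already produced by the Q/K/V projections.
function attention(q: number[][], k: number[][], v: number[][]): number[][] {
  const headDim = q[0].length;

  // Attention scores: q · kᵀ, scaled by sqrt(headDim).
  const scores = q.map((qi) =>
    k.map((kj) => qi.reduce((s, x, d) => s + x * kj[d], 0) / Math.sqrt(headDim))
  );

  // Causal mask plus row-wise softmax: each token attends only to itself and earlier positions.
  const weights = scores.map((row, i) => {
    const masked = row.map((s, j) => (j <= i ? s : -Infinity));
    const max = Math.max(...masked);
    const exps = masked.map((s) => Math.exp(s - max));
    const sum = exps.reduce((a, b) => a + b, 0);
    return exps.map((e) => e / sum);
  });

  // Output: attention-weighted sum of the value vectors.
  return weights.map((row) =>
    v[0].map((_, d) => row.reduce((s, w, j) => s + w * v[j][d], 0))
  );
}
```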

Second, the researchers enhance understanding and engagement through interactivity. The temperature parameter is crucial for controlling the Transformer's output probability distribution: it determines whether the next-token prediction is more certain (at low temperatures) or more random (at high temperatures), yet existing educational resources on Transformers tend to ignore it. With this tool, users can adjust the temperature in real time (Figure 1B) and visualize its critical role in controlling prediction certainty (Figure 2).
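Conceptually, the temperature slider does nothing more than divide the logits by the temperature before the softmax. The sketch below is illustrative only (not the tool's source) and shows how the same logits become nearly deterministic at a low temperature and nearly uniform at a high one.

```typescript
// Illustrative sketch: temperature reshapes the next-token distribution by
// scaling the logits before the softmax. Low T sharpens it, high T flattens it.
function nextTokenProbabilities(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled);              // subtract the max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// The same logits at two temperatures:
const logits = [2.0, 1.0, 0.5];
console.log(nextTokenProbabilities(logits, 0.2)); // ≈ [0.993, 0.007, 0.001]  (near-deterministic)
console.log(nextTokenProbabilities(logits, 5.0)); // ≈ [0.39, 0.32, 0.29]     (near-uniform)
```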


In addition, users can choose from the provided examples or enter their own text (Figure 1A). Supporting custom input text increases engagement, because users can analyze the model's behavior under different conditions and interactively test their own hypotheses against different text inputs.

So what do real-world usage scenarios look like?

Professor Rousseau is modernizing the content of her natural language processing course to highlight recent advances in generative AI. She has noticed that some students regard Transformer-based models as inscrutable "magic", while others want to understand how the models work but are unsure where to start.

To address this, she directed her students to Transformer Explainer, which gives them an interactive overview of the Transformer (Figure 1) and encourages active experimentation and learning. With more than 300 students in her class, the fact that Transformer Explainer runs entirely inside each student's browser, with no software or special hardware to install, is an important advantage that frees students from worrying about managing software or hardware setups.

The tool introduces students to complex mathematical operations, such as attention computation, through animations and interactive, reversible abstractions (Figure 1C). This approach helps students gain both a high-level understanding of the operations and a deeper grasp of the underlying details that produce their results.

Professor Rousseau is also aware that the technical capabilities and limitations of Transformers are sometimes anthropomorphized, for example by treating the temperature parameter as a "creativity" control. By encouraging students to experiment with the temperature slider (Figure 1B), she shows them how temperature actually modifies the probability distribution of the next token (Figure 2), controlling the randomness of predictions and balancing deterministic output against more creative output.

Moreover, when the system visualizes the token-processing pipeline, students can see that there is no "magic" here: whatever the input text (Figure 1A), the model follows a well-defined sequence of operations, using the Transformer architecture to sample just one token at a time and then repeating the process, as sketched below.
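That loop can be written in a few lines. The sketch below is purely illustrative, with the GPT-2 forward pass and the sampling step passed in as stand-in functions rather than implemented here.

```typescript
// Illustrative autoregressive loop: one well-defined step, repeated.
// The forward pass and the sampler are supplied by the caller (stand-ins here).
function generate(
  promptTokens: number[],
  maxNewTokens: number,
  nextTokenProbabilities: (tokens: number[]) => number[], // full GPT-2 forward pass (not shown)
  sampleFrom: (probs: number[]) => number                 // e.g. temperature sampling
): number[] {
  const tokens = [...promptTokens];
  for (let i = 0; i < maxNewTokens; i++) {
    const probs = nextTokenProbabilities(tokens); // score every candidate next token given the prefix
    tokens.push(sampleFrom(probs));               // sample exactly one token
  }                                               // then repeat with the extended prefix
  return tokens;
}
```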

Future Work

The researchers are enhancing the tool's interactive explanations to improve the learning experience. They are also using WebGPU to speed up inference and applying compression techniques to reduce the model size. In addition, they plan to run user studies to evaluate the efficiency and usability of Transformer Explainer, observing how AI novices, students, educators, and practitioners use the tool and collecting feedback on additional features they would like to see supported.

What are you waiting for? Try the app, break the illusion that the Transformer is "magic", and truly understand the principles behind it.

