Table of Contents
What does Yi-VL look like?
Surpassing a series of multi-modal large models

01.AI, founded by Kai-Fu Lee, releases a world-class open-source multi-modal large model

Jan 25, 2024 am 11:09 AM

Topping two authoritative leaderboards in Chinese and English, Kai-Fu Lee's 01.AI has handed in its multi-modal large model answer sheet!

Less than three months after releasing its first open-source large models, Yi-34B and Yi-6B, 01.AI has released a world-class open-source multi-modal large model.


The model is called Yi Vision Language (Yi-VL), and it is now officially open source to the world.

It belongs to the Yi series and likewise comes in two versions:

Yi-VL-34B and Yi-VL-6B.

Let's first look at two examples to get a feel for Yi-VL's performance in diverse scenarios such as image-text dialogue:


Yi-VL analyzes each picture in detail, not only explaining the content on the sign but even picking up on the "ceiling".

In Chinese, Yi-VL can likewise express itself clearly, methodically, and accurately:


In addition, official benchmark results were provided.

On the English dataset MMMU, Yi-VL-34B achieves 41.6% accuracy, second only to GPT-4V (55.7%) and surpassing a series of multi-modal large models.

On the Chinese dataset CMMMU, Yi-VL-34B achieves 36.5% accuracy, ahead of the current cutting-edge open-source multi-modal models.


What does Yi-VL look like?

Yi-VL is developed on top of the Yi language model, inheriting its powerful text understanding capabilities: only the images need to be aligned to obtain a good multi-modal vision-language model. This is one of the core highlights of the Yi-VL model.

In terms of architecture design, the Yi-VL model is based on the open-source LLaVA architecture and contains three main modules (a minimal sketch of how they compose follows the figure caption below):

  • Vision Transformer (ViT), used for image encoding. The open-source OpenCLIP ViT-H/14 model initializes the trainable parameters, and by learning to extract features from large-scale "image-text" pairs, the model gains the ability to process and understand images.
  • The Projection module aligns image features with text features in a shared space. It consists of a Multilayer Perceptron (MLP) with layer normalization. This design allows the model to fuse and process visual and textual information more effectively, improving the accuracy of multi-modal understanding and generation.
  • The Yi-34B-Chat and Yi-6B-Chat large language models give Yi-VL powerful language understanding and generation capabilities. This part of the model uses advanced natural language processing technology to help Yi-VL deeply understand complex language structures and generate coherent, relevant text output.
△Caption: Yi-VL model architecture design and training method process overview
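
To make the composition concrete, here is a minimal PyTorch sketch of this LLaVA-style layout: a ViT image encoder, an MLP projection with layer normalization, and a decoder-only language model. Module names, dimensions, and the fusion step are illustrative assumptions, not the official Yi-VL implementation.

```python
import torch
import torch.nn as nn


class ProjectionMLP(nn.Module):
    """Maps ViT image features into the language model's embedding space."""
    def __init__(self, vision_dim: int, text_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(vision_dim),
            nn.Linear(vision_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
            nn.LayerNorm(text_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        return self.net(image_features)


class VisionLanguageModel(nn.Module):
    """Illustrative composition: vision encoder -> projection -> language model."""
    def __init__(self, vision_encoder: nn.Module, language_model: nn.Module,
                 vision_dim: int = 1280, text_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g. an OpenCLIP ViT-H/14 backbone
        self.projection = ProjectionMLP(vision_dim, text_dim)
        self.language_model = language_model  # e.g. a Yi-34B-Chat / Yi-6B-Chat decoder

    def forward(self, images: torch.Tensor, text_embeddings: torch.Tensor):
        # Encode images into patch features, project them into the text embedding
        # space, then prepend the visual tokens to the text token embeddings.
        visual_tokens = self.projection(self.vision_encoder(images))
        fused = torch.cat([visual_tokens, text_embeddings], dim=1)
        return self.language_model(fused)
```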

In terms of training method, the Yi-VL training process is divided into three stages, aiming to comprehensively improve the model's visual and language processing capabilities.

In the first stage, the ViT and Projection modules are trained on a dataset of 100 million "image-text" pairs.

At this stage, the image resolution is set to 224x224 to strengthen ViT's knowledge acquisition within its architecture while achieving efficient alignment with the large language model.

In the second stage, the image resolution of ViT is increased to 448x448, making the model better at recognizing complex visual details. About 25 million "image-text" pairs are used in this stage.

In the third stage, the parameters of the entire model are opened for training, with the goal of improving the model's performance in multi-modal chat interaction. The training data covers diverse data sources, with a total of approximately 1 million "image-text" pairs, ensuring the breadth and balance of the data.
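
The schedule can be summarized as plain configuration data. The sketch below restates the three stages described above; field names, and details the article does not state (for example, exactly which modules remain trainable in stage two, or the stage-three image resolution), are assumptions.

```python
# Illustrative summary of the three-stage schedule; values not given in the
# article are marked as assumptions.
TRAINING_STAGES = [
    {
        "stage": 1,
        "trainable_modules": ["vit", "projection"],          # LLM kept frozen
        "image_resolution": 224,
        "num_image_text_pairs": 100_000_000,
        "goal": "align the vision encoder with the language model",
    },
    {
        "stage": 2,
        "trainable_modules": ["vit", "projection"],          # assumption: LLM still frozen
        "image_resolution": 448,
        "num_image_text_pairs": 25_000_000,
        "goal": "recognize complex visual details",
    },
    {
        "stage": 3,
        "trainable_modules": ["vit", "projection", "language_model"],  # full model
        "image_resolution": 448,                              # assumption: unchanged from stage 2
        "num_image_text_pairs": 1_000_000,
        "goal": "multi-modal chat interaction on diverse, balanced data",
    },
]
```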

The 01.AI technical team also verified that, building on the Yi language model's strong language understanding and generation capabilities, other multi-modal training approaches such as BLIP, Flamingo, and EVA can likewise be used to quickly train multi-modal models that understand images efficiently and hold smooth image-text dialogue.

Yi series models can serve as base language models for multi-modal models, providing a new option for the open-source community. At the same time, the 01.AI multi-modal team is exploring multi-modal pre-training from scratch, aiming to approach and surpass GPT-4V faster and reach the world's first echelon.

Currently, the Yi-VL model is publicly available on platforms such as Hugging Face and ModelScope, and users can experience its performance first-hand in diverse scenarios such as image-text dialogue.
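
For example, the weights can be fetched from the 01-ai organization on Hugging Face with huggingface_hub. The repository id below is an assumption based on the project addresses listed at the end of this article, and the exact inference pipeline (a LLaVA-style setup) should be taken from the official model card.

```python
from huggingface_hub import snapshot_download

# Download the Yi-VL-6B weights locally; swap in "01-ai/Yi-VL-34B" for the larger model.
local_dir = snapshot_download(repo_id="01-ai/Yi-VL-6B")
print(f"Model files downloaded to: {local_dir}")
```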

Surpassing a series of multi-modal large models

In the new multi-modal benchmark MMMU, both versions, Yi-VL-34B and Yi-VL-6B, performed well.

MMMU (Massive Multi-discipline Multi-modal Understanding & Reasoning) contains 11,500 questions from six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Sciences, and Technology & Engineering). The questions involve highly heterogeneous image types and interleaved text-image information, placing extremely high demands on a model's advanced perception and reasoning capabilities.


Yi-VL-34B successfully surpassed a series of multi-modal large models with an accuracy of 41.6% on this test set, second only to GPT-4V (55.7%), showing strong ability to understand and apply interdisciplinary knowledge.


Similarly, on the CMMMU dataset built for Chinese scenarios, the Yi-VL model shows the unique advantage of "understanding Chinese users better".

CMMMU contains about 12,000 Chinese multi-modal questions derived from university exams, tests and textbooks.


Among them, GPT-4V achieves 43.7% accuracy on this test set, followed by Yi-VL-34B at 36.5%, leading the current cutting-edge open-source multi-modal models.


Project address:
[1] https://huggingface.co/01-ai

[2] https://www.modelscope.cn/organization/01ai
