ECCV 2024 | To Improve GPT-4V and Gemini's Detection Performance, You Need This Prompting Paradigm

Jul 22, 2024, 05:28 PM

The AIxiv column is where this site publishes academic and technical content. Over the past few years, it has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

The authors of this article are from Zhejiang University, the Shanghai Artificial Intelligence Laboratory, The Chinese University of Hong Kong, the University of Sydney, and the University of Oxford. Author list: Wu Yixuan, Wang Yizhou, Tang Shixiang, Wu Wenhao, He Tong, Wanli Ouyang, Philip Torr, and Jian Wu. Co-first author Wu Yixuan is a doctoral student at Zhejiang University, and co-first author Wang Yizhou is a research assistant at the Shanghai Artificial Intelligence Laboratory. Corresponding author Tang Shixiang is a postdoctoral researcher at The Chinese University of Hong Kong.

Multimodal large language models (MLLMs) have shown impressive capabilities across many tasks, yet their potential on detection tasks remains underestimated. When complex object detection tasks require precise coordinates, MLLM hallucinations often cause them to miss target objects or produce inaccurate bounding boxes. Existing work on enabling MLLMs for detection requires collecting large, high-quality instruction datasets and fine-tuning open-source models; this is time-consuming and labor-intensive, and it also fails to exploit the stronger visual understanding of closed-source models. To this end, Zhejiang University, together with the Shanghai Artificial Intelligence Laboratory and the University of Oxford, proposed DetToolChain, a new prompting paradigm that unleashes the detection capability of multimodal large language models: large multimodal models can learn to detect accurately without any training. The work has been accepted at ECCV 2024.

To address MLLMs' problems on detection tasks, DetToolChain starts from three insights: (1) design visual prompts for detection, which convey positional information to MLLMs more directly and effectively than traditional textual prompts; (2) break detailed detection tasks down into small, simple subtasks; (3) use chain-of-thought to progressively refine detection results and avoid the hallucinations of large multimodal models as much as possible.

Corresponding to these insights, DetToolChain contains two key designs: (1) a comprehensive set of visual processing prompts, drawn directly on the image, which significantly narrow the gap between visual and textual information; (2) a comprehensive set of detection reasoning prompts, which enhance spatial understanding of the detection target and progressively determine the final precise object location through a sample-adaptive detection tool chain.

By combining DetToolChain with an MLLM such as GPT-4V or Gemini, various detection tasks can be supported without instruction tuning, including open-vocabulary detection, described object detection, referring expression comprehension, and oriented object detection.


  • Paper title: DetToolChain: A New Prompting Paradigm to Unleash Detection Ability of MLLM
  • Paper link: https://arxiv.org/abs/2403.12488

What is DetToolChain?


DetToolChain proceeds in four steps:

I. Formatting: convert the task's original input into an appropriate instruction template as input to the MLLM;
II. Think: break the specific complex detection task into simpler subtasks and select effective prompts from the detection prompt toolkit;
III. Execute: iteratively execute the selected prompts in sequence;
IV. Respond: use the MLLM's own reasoning ability to supervise the whole detection process and return the final answer.
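The four steps above can be sketched as a simple control loop. This is only an illustrative sketch: `call_mllm` is a stand-in stub for a GPT-4V/Gemini API call, and the helper names and prompt-selection rule are our assumptions, not the paper's actual interface.

```python
# Illustrative sketch of the DetToolChain four-step loop.
# All function names and the prompt-selection heuristic are hypothetical.

def call_mllm(prompt: str) -> str:
    """Stand-in for a call to GPT-4V / Gemini; echoes the prompt for demo purposes."""
    return f"response to: {prompt}"

def run_dettoolchain(task: str, image_path: str) -> str:
    # I. Formatting: wrap the raw task in an instruction template.
    instruction = f"Task: {task}\nImage: {image_path}\nReturn boxes as [x1, y1, x2, y2]."

    # II. Think: break the task into subtasks and pick prompts from the toolkit.
    toolkit = ["regional amplifier", "spatial measurement standard", "scene image parser"]
    selected = [p for p in toolkit
                if p != "regional amplifier" or "small" in task]  # amplify only small objects

    # III. Execute: apply the selected prompts iteratively, in sequence.
    response = instruction
    for prompt in selected:
        response = call_mllm(f"Apply {prompt}.\n{response}")

    # IV. Respond: let the model verify the chain and return the final answer.
    return call_mllm(f"Verify and finalize:\n{response}")
```

The point of the loop is that each prompt conditions on the previous response, so the chain refines the detection step by step rather than asking for coordinates in one shot.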
Detection Prompt Toolkit: Visual Processing Prompts

Figure 2: Schematic of the visual processing prompts. We design (1) a Regional Amplifier, (2) a Spatial Measurement Standard, and (3) a Scene Image Parser to improve MLLMs' detection capability from different perspectives.

As shown in Figure 2, (1) the Regional Amplifier aims to enhance MLLMs' visibility of the region of interest (ROI). It crops the original image into sub-regions and focuses on the sub-region containing the target object; in addition, a zoom function enables fine-grained observation of specific sub-regions of the image.
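The bookkeeping this implies is mapping a box predicted inside a zoomed crop back to original-image coordinates. A minimal sketch of that coordinate transform, with illustrative function names (not the paper's code):

```python
def crop_and_zoom(roi, zoom):
    """Return a function mapping boxes from zoomed-crop space back to image space.

    roi:  (x0, y0, x1, y1) crop window in the original image.
    zoom: magnification factor applied after cropping.
    """
    x0, y0, _, _ = roi

    def to_original(box):
        # Undo the zoom, then translate by the crop origin.
        bx0, by0, bx1, by1 = box
        return (x0 + bx0 / zoom, y0 + by0 / zoom,
                x0 + bx1 / zoom, y0 + by1 / zoom)

    return to_original

# A box predicted at (20, 40, 60, 80) inside a 2x-zoomed crop whose
# window starts at (100, 50) in the original image:
mapper = crop_and_zoom((100, 50, 300, 250), zoom=2)
```

With this mapping, the MLLM can reason on the magnified sub-region while the final answer is still reported in original-image coordinates.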

(2) The Spatial Measurement Standard provides a clearer reference for object detection by superimposing a ruler and a compass with linear scales on the original image, as shown in Figure 2 (2). These auxiliary rulers and compasses give MLLMs translational and rotational references for outputting accurate coordinates and angles. Essentially, the auxiliary lines simplify the detection task, allowing MLLMs to read off object coordinates instead of predicting them directly.
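The idea of "reading instead of predicting" amounts to converting a tick label the model reads off the overlaid ruler back into a pixel coordinate. A small sketch, where the evenly spaced tick layout is our assumption:

```python
def ruler_ticks(image_width, n_ticks):
    """Pixel x-positions of evenly spaced ruler ticks across the image width."""
    step = image_width / (n_ticks - 1)
    return [round(i * step) for i in range(n_ticks)]

def tick_to_pixel(tick_index, image_width, n_ticks):
    """Convert a tick label read by the MLLM into a pixel x-coordinate."""
    return ruler_ticks(image_width, n_ticks)[tick_index]
```

Reading "tick 3 of 11" on a 100-pixel-wide image thus resolves to x = 30, a far easier task for the model than regressing the raw pixel value.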

(3) The Scene Image Parser marks predicted object positions or relationships, using spatial and contextual information to support spatial-relation understanding of the image. Its markers fall into two categories. First, for a single target object, we mark the predicted object with its centroid, convex hull, and bounding box, together with the label name and box index. These markers represent object position in different formats, enabling the MLLM to detect diverse objects of different shapes against different backgrounds, especially irregularly shaped or heavily occluded objects. For example, the convex hull marker marks an object's boundary points and connects them into a convex hull, improving detection of highly irregular objects. Second, for multiple objects, we connect the centers of different objects with scene graph markers to highlight the relationships between objects in the image. Based on the scene graph, the MLLM can use its contextual reasoning to refine predicted bounding boxes and avoid hallucination. For example, as shown in Figure 2 (3), Jerry wants to eat the cheese, so their bounding boxes should be very close.
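The two single-object markers mentioned here, the centroid and the convex hull, are standard geometric constructions; a self-contained sketch of how they could be computed from an object's boundary points (using Andrew's monotone chain, one common hull algorithm, as an assumption for the implementation):

```python
def centroid(points):
    """Mean of a set of (x, y) points; used as a centroid marker."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints
```

Drawing the hull vertices and centroid onto the image gives the model a shape-aware anchor even when an axis-aligned bounding box fits the object poorly.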

Detection Prompt Toolkit: Detection Reasoning Prompts


To improve the reliability of predicted boxes, we design detection reasoning prompts (shown in Table 1) that check prediction results and diagnose potential problems. First, we propose the Problem Insight Guider, which identifies difficult cases and provides effective detection suggestions and similar examples for the query image. For Figure 3, for instance, the Problem Insight Guider frames the query as a small-object-detection problem and suggests solving it by zooming in on the surfboard region. Second, to leverage MLLMs' inherent spatial and contextual capabilities, we design the Spatial Relationship Explorer and the Contextual Object Predictor to ensure detection results are consistent with common sense. As shown in Figure 3, a surfboard may co-occur with the ocean (contextual knowledge), and there should be a surfboard near the surfer's feet (spatial knowledge). Furthermore, we apply the Self-Verification Promoter to enhance the consistency of responses across multiple rounds. To further improve MLLMs' reasoning capabilities, we adopt widely used prompting methods such as debating and self-debugging. See the paper for a detailed description.
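As a rough illustration of how three of these reasoning prompts might be composed as text, the templates below are our own hedged approximations; the paper's exact prompt wording may differ.

```python
# Hypothetical templates for three detection reasoning prompts.
# The wording is illustrative, not the paper's actual prompts.

def problem_insight_guider(query: str, difficulty: str, advice: str) -> str:
    """Highlight why the query is hard and suggest a detection strategy."""
    return (f"This looks like a {difficulty} problem. {advice}\n"
            f"Now answer the query: {query}")

def contextual_object_predictor(target: str, context: str) -> str:
    """Remind the model of common-sense co-occurrence before it predicts."""
    return (f"A {target} often co-occurs with {context}. "
            f"Check such regions before finalizing the box for the {target}.")

def self_verification_promoter(previous_answer: str) -> str:
    """Ask the model to re-check its previous answer for consistency."""
    return (f"Your previous answer was: {previous_answer}\n"
            f"Verify the box against the image and correct it if inconsistent.")
```

For the surfboard example, one would chain these: frame the query as small-object detection with zoom advice, inject the surfboard/ocean co-occurrence hint, then run a self-verification round on the returned box.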


Detection reasoning prompts can help MLLMs solve small-object detection problems, for example, using common sense to locate a surfboard under a person's feet and encouraging the model to detect surfboards in the ocean.


Experiments: Surpassing Fine-Tuned Methods Without Training

As shown in Table 2, we evaluate our method on open-vocabulary detection (OVD), reporting AP50 on the 17 novel classes, 48 base classes, and all classes of the COCO OVD benchmark. The results show that the performance of both GPT-4V and Gemini improves significantly with DetToolChain.
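For reference, AP50 counts a predicted box as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal version of that check:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def correct_at_50(pred, gt):
    """True positive under the AP50 criterion (IoU >= 0.5)."""
    return iou(pred, gt) >= 0.5
```

AP50 is then the area under the precision-recall curve built from these per-box matches, which is why reducing hallucinated or badly localized boxes translates directly into higher scores.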


To demonstrate the effectiveness of our method on referring expression comprehension, we compare it with other zero-shot methods on the RefCOCO, RefCOCO+, and RefCOCOg datasets (Table 5). On RefCOCO, DetToolChain improves the GPT-4V baseline by 44.53%, 46.11%, and 24.85% on val, test-A, and test-B respectively, demonstrating DetToolChain's superior referring expression comprehension and grounding performance under the zero-shot setting.
