
Amazon Cloud Innovation "Neural Sparse Retrieval": Only text matching is needed to achieve semantic search

Jul 02, 2024 am 02:55 AM

The AIxiv column is a column where academic and technical content is published on this site. In the past few years, the AIxiv column of this site has received more than 2,000 reports, covering top laboratories from major universities and companies around the world, effectively promoting academic exchanges and dissemination. If you have excellent work that you want to share, please feel free to contribute or contact us for reporting. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

The authors of this article are from the OpenSearch China R&D team: Dr. Yang Yang, machine learning leader, and machine learning engineers Geng Zhichao and Guan Cong. OpenSearch is a purely open-source search and real-time analytics engine project initiated by Amazon Web Services. The software currently has over 500 million downloads, and the community has more than 70 corporate partners around the world.

Since the explosion of large models, semantic retrieval has gradually become a popular technology. Especially in RAG (retrieval augmented generation) applications, the relevance of the retrieval results directly determines the final effect of AI generation.

Most semantic retrieval solutions currently on the market use a language model to encode a piece of text into a high-dimensional vector and retrieve with approximate k-nearest-neighbor (k-NN) search. Many people, however, are deterred by the high cost of deploying a vector database and a language model (which requires GPUs).

Recently, Amazon OpenSearch, together with the Amazon Shanghai Artificial Intelligence Research Institute, launched the Neural Sparse feature in the OpenSearch NeuralSearch plugin, which addresses three challenges currently facing semantic retrieval:

  • Stable relevance across different queries: Zero-shot semantic retrieval requires the semantic encoding model to deliver good relevance on datasets from different domains; that is, the language model must work out of the box, without the user having to fine-tune it on their own dataset. Taking advantage of the fact that sparse encodings and term vectors are homologous, Neural Sparse can degrade to text matching when it encounters unfamiliar expressions (industry-specific terms, abbreviations, and so on), thereby avoiding wildly irrelevant results.
  • Online search latency: The importance of low latency for real-time retrieval applications is obvious. Currently popular semantic retrieval methods generally involve two processes, semantic encoding and indexing, and the speed of these two determines the end-to-end retrieval efficiency of an application. Neural Sparse's unique doc-only mode delivers semantic retrieval accuracy comparable to first-class language models at latency similar to text matching, with no online encoding.
  • Index storage consumption: Commercial retrieval applications are very sensitive to storage consumption. When indexing massive amounts of data, a search engine's running cost is strongly tied to its storage consumption. In our experiments, Neural Sparse required only 1/10 of the k-NN index size to index the same amount of data, and its memory consumption was also much smaller than a k-NN index's.


  • Documentation homepage: https://opensearch.org/docs/latest/search-plugins/neural-sparse-search/
  • Project GitHub address: https://github.com/opensearch-project/neural-search

Technical highlights

Sparse encoding combined with native Lucene index

The mainstream approach to semantic retrieval today is dense encoding: both the documents to be retrieved and the query text are converted by a language encoding model into vectors in a high-dimensional space. For example, the TASB model in Sentence-BERT generates 768-dimensional vectors, and All-MiniLM-L6 converts text into 384-dimensional vectors. Indexing such high-dimensional vectors requires a dedicated k-NN search engine, such as the earliest tree-based FLANN, hash-based LSH, the later HNSW based on neighbor graphs and skip lists, and the most recent quantization-based FAISS engine.
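As a concrete illustration of the dense-retrieval idea (this is not code from OpenSearch; toy 4-dimensional vectors stand in for a real encoder's 384- or 768-dimensional output, and exact brute-force cosine similarity stands in for the approximate k-NN engines listed above):

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented 4-d "embeddings"; a real model would produce 384/768 dimensions.
docs = {
    "doc1": [0.1, 0.8, 0.3, 0.2],
    "doc2": [0.7, 0.1, 0.1, 0.6],
}
query = [0.1, 0.9, 0.2, 0.1]

# Exact k-NN: rank every document by similarity to the query vector.
# FLANN, LSH, HNSW, and FAISS exist to approximate this step at scale.
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # doc1
```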

Sparse encoding, in contrast, converts text into a set of tokens and weights. A token here is the text unit produced when the language encoding model's tokenizer splits the text. For example, with the WordPiece tokenizer, tokens can to some extent be understood as "words", though a long word may also be split into multiple tokens.
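As a toy illustration of the token-weight representation (the tokens and weights below are invented, not actual model output), relevance between a sparse-encoded query and document can be computed as a dot product over the tokens they share:

```python
# Hypothetical sparse encodings: each text becomes a token -> weight map.
# Real weights would come from a model such as
# opensearch-neural-sparse-encoding-v1; these numbers are made up.
doc_encoding = {"amazon": 1.4, "open": 0.9, "##search": 1.1, "engine": 0.7}
query_encoding = {"open": 1.0, "##search": 1.2, "software": 0.4}

def sparse_score(query: dict, doc: dict) -> float:
    """Relevance as a dot product over the tokens both sides share."""
    return sum(w * doc[t] for t, w in query.items() if t in doc)

# Only "open" and "##search" overlap: 1.0*0.9 + 1.2*1.1 = 2.22
print(round(sparse_score(query_encoding, doc_encoding), 2))  # 2.22
```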

Comparison between sparse encoding and dense encoding

Since the token-weight combinations generated by sparse encoding are very similar to the term vectors used in traditional text-matching methods, the native Lucene index can be used in OpenSearch to store sparse-encoded documents. Compared with a k-NN search engine, the native Lucene engine is lighter and consumes fewer resources.

The following table compares the disk consumption and runtime memory (RAM) consumption of three approaches: using Lucene for text matching, using a k-NN engine to store dense encodings, and using Lucene to store sparse encodings.

Disk and runtime RAM consumption of the three indexing approaches

According to the BEIR paper, since most current dense encoding models are fine-tuned on the MS MARCO dataset, they perform very well on that dataset. However, in zero-shot tests on the other BEIR datasets, the relevance of dense encoding models fails to beat BM25 on roughly 60% to 70% of the datasets. This can also be seen in our own replicated comparison experiments (see the table below).

Comparison of the relevance performance of several methods on selected datasets

In our experiments, we found that sparse encoding performs better than dense encoding on unfamiliar datasets. Although we do not yet have more detailed quantitative data to confirm this, analysis of selected samples suggests its advantage lies mainly in two points: 1) sparse encoding is stronger at associating synonyms; 2) when it encounters entirely unfamiliar expressions, such as professional terms, sparse encoding tends to increase the weights of those term tokens and decrease the weights of associated tokens, causing retrieval to degenerate to keyword matching in pursuit of stable relevance.

In experiments on the BEIR benchmark, we can see that both Neural Sparse methods achieve higher relevance scores than the dense encoding model and BM25.

Extreme speed: document-only encoding mode

Neural Sparse also provides a mode that delivers the fastest possible online retrieval. In this mode, only the documents to be retrieved are sparse-encoded. During online retrieval, the query text does not invoke the language encoding model at all; only the tokenizer is used to split the query. Because the deep learning model invocation is skipped, this not only greatly reduces online retrieval latency but also saves the large amount of compute, such as GPU capacity, that model inference requires.
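A minimal sketch of the doc-only idea, under the assumption that a document's score is the sum of its stored weights for the query's tokens; the whitespace tokenizer and all numbers here are stand-ins for the plugin's real tokenizer and model-produced weights:

```python
# Documents were sparse-encoded offline; these encodings are invented.
doc_index = {
    "doc1": {"semantic": 1.3, "search": 1.1, "retrieval": 0.8},
    "doc2": {"keyword": 1.0, "search": 0.9},
}

def tokenize(text: str) -> list:
    # Stand-in for the model's WordPiece tokenizer: no model inference,
    # just string splitting, which is why query-time latency is so low.
    return text.lower().split()

def doc_only_score(query: str, doc: dict) -> float:
    """Each query token contributes the document's stored weight."""
    return sum(doc.get(tok, 0.0) for tok in tokenize(query))

scores = {d: doc_only_score("semantic search", enc) for d, enc in doc_index.items()}
print(max(scores, key=scores.get))  # doc1
```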

The following table compares the speed of four methods on the MS MARCO v2 one-million-document dataset: BM25 text matching, dense-encoding retrieval with the BERT-TASB model, sparse-encoding retrieval with query encoding (bi-encoder), and sparse-encoding retrieval with document-only encoding (doc-only). We can clearly see that the doc-only mode has speed comparable to BM25, and from the table in the previous section, its relevance is not much worse than the query-encoding method's. The doc-only mode is therefore a very cost-effective choice.

Retrieval speed comparison of BM25, BERT-TASB, bi-encoder, and doc-only modes

Even faster: two-phase search for acceleration

As mentioned in the previous article, during the sparse encoding process, the text is converted into a set of tokens and weights. This transformation produces a large number of tokens with low weights. Although these tokens take up most of the time in the search process, their contribution to the final search results is not significant.

Therefore, we propose a new search strategy that first filters out these low-weight tokens in the first search and relies only on high-weight tokens to locate higher-ranking documents. Then on these selected documents, the previously filtered low-weight tokens are reintroduced for a second detailed scoring to obtain the final score.

This method reduces latency in two ways. First, in the first phase of the search, only high-weight tokens are matched in the inverted index, greatly reducing unnecessary computation time. Second, when rescoring within the precise, small set of result documents, we compute the low-weight-token scores only for potentially relevant documents, further cutting processing time.
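The two-phase strategy can be sketched as follows; the 0.4 weight cutoff and top-k of 2 are illustrative choices, not the plugin's actual defaults (the real pipeline exposes such parameters on the two-phase processor):

```python
def split_by_weight(encoding: dict, ratio: float = 0.4):
    """Split a token->weight map into high- and low-weight parts."""
    cutoff = ratio * max(encoding.values())
    high = {t: w for t, w in encoding.items() if w >= cutoff}
    low = {t: w for t, w in encoding.items() if w < cutoff}
    return high, low

def dot(q: dict, d: dict) -> float:
    return sum(w * d[t] for t, w in q.items() if t in d)

def two_phase_search(query_enc: dict, docs: dict, k: int = 2) -> list:
    high, low = split_by_weight(query_enc)
    # Phase 1: rank all documents using only the high-weight query tokens.
    candidates = sorted(docs, key=lambda d: dot(high, docs[d]), reverse=True)[:k]
    # Phase 2: rescore just the top-k candidates with the full encoding.
    return sorted(candidates,
                  key=lambda d: dot(high, docs[d]) + dot(low, docs[d]),
                  reverse=True)

# Invented encodings for illustration.
query_enc = {"cancer": 2.0, "treatment": 1.5, "the": 0.2, "of": 0.1}
docs = {
    "docA": {"cancer": 1.8, "treatment": 1.2, "the": 0.3},
    "docB": {"the": 0.5, "of": 0.4, "weather": 1.0},
    "docC": {"cancer": 1.5, "the": 0.6},
}
print(two_phase_search(query_enc, docs))  # ['docA', 'docC']
```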

In the end, this improved method achieved latency close to BM25 search in document-only encoding mode (doc-only), and a 5x to 8x speedup in query encoding mode (bi-encoder), greatly improving the latency and throughput of Neural Sparse. The following is a latency comparison of standard Neural Sparse, two-phase Neural Sparse, and BM25 on four typical BEIR datasets:

Two-phase search speed comparison

Build a Neural Sparse semantic retrieval application in OpenSearch in 5 steps

1. Set up and enable Neural Search

First set the cluster configuration so that the model can run on the local cluster.
PUT /_cluster/settings
{
  "transient": {
    "plugins.ml_commons.allow_registering_model_via_url": true,
    "plugins.ml_commons.only_run_on_ml_node": false,
    "plugins.ml_commons.native_memory_threshold": 99
  }
}
2. Deploy the encoder

OpenSearch currently provides 3 open-source models; the relevant registration information can be found in the official documentation. Let's take amazon/neural-sparse/opensearch-neural-sparse-encoding-v1 as an example. First, register it with the register API:

POST /_plugins/_ml/models/_register?deploy=true
{
  "name": "amazon/neural-sparse/opensearch-neural-sparse-encoding-v1",
  "version": "1.0.1",
  "model_format": "TORCH_SCRIPT"
}

In the cluster's response, you can see the task_id:
{
  "task_id": "<task_id>",
  "status": "CREATED"
}
Use the task_id to get the detailed registration information:

GET /_plugins/_ml/tasks/<task_id>

In the API return, we can get the specific model_id:

{
  "model_id": "<model_id>",
  "task_type": "REGISTER_MODEL",
  "function_name": "SPARSE_TOKENIZE",
  "state": "COMPLETED",
  "worker_node": ["wubXZX7xTIC7RW2z8nzhzw"],
  "create_time": 1701390988405,
  "last_update_time": 1701390993724,
  "is_async": true
}


3. Set up the preprocessing pipeline
Before indexing, each document's text fields to be encoded need to be converted into sparse vectors. In OpenSearch, this process is automated through a preprocessor. You can create a processor pipeline for offline indexing with the following API:

PUT /_ingest/pipeline/neural-sparse-pipeline
{
  "description": "An example neural sparse encoding pipeline",
  "processors": [
    {
      "sparse_encoding": {
        "model_id": "<model_id>",
        "field_map": {
          "passage_text": "passage_embedding"
        }
      }
    }
  ]
}

If you want to enable the two-phase acceleration feature (optional), you need to create a two-phase search pipeline and set it as the default search pipeline after the index is created.

The following creates a two-phase accelerated search pipeline with default parameters. For more detailed parameter settings and their meanings, please refer to the official OpenSearch documentation for version 2.15 and later.

PUT /_search/pipeline/two_phase_search_pipeline
{
  "request_processors": [
    {
      "neural_sparse_two_phase_processor": {
        "tag": "neural-sparse",
        "description": "This processor is making two-phase processor."
      }
    }
  ]
}

4. Set up the index

Neural sparse search uses the rank_features field type to store the encoded tokens and their corresponding weights. The index will use the preprocessor above to encode text. We can create an index that carries the two-phase search acceleration pipeline as follows (if you do not want to enable this feature, replace `two_phase_search_pipeline` with `_none` or remove the `settings.search` configuration block).

PUT /my-neural-sparse-index
{
  "settings": {
    "ingest": {
      "default_pipeline": "neural-sparse-pipeline"
    },
    "search": {
      "default_pipeline": "two_phase_search_pipeline"
    }
  },
  "mappings": {
    "properties": {
      "passage_embedding": {
        "type": "rank_features"
      },
      "passage_text": {
        "type": "text"
      }
    }
  }
}

5. Import documents with the preprocessor and search

After the index is set up, users can submit documents. The user provides a text field, and the ingest process automatically converts the text content into a sparse vector and places it into the rank_features field according to the field_map in the preprocessor:
PUT /my-neural-sparse-index/_doc/
{
  "passage_text": "Hello world"
}

The interface for sparse semantic search in the index is as follows; replace <model_id> with the model_id registered in step 2:

GET my-neural-sparse-index/_search
{
  "query": {
    "neural_sparse": {
      "passage_embedding": {
        "query_text": "Hi world",
        "model_id": <model_id>
      }
    }
  }
}

About OpenSearch

OpenSearch is a distributed, community-driven, Apache 2.0-licensed, 100% open-source search and analytics suite for a broad set of use cases such as real-time application monitoring, log analytics, and website search. OpenSearch provides a highly scalable system that offers fast access and response to large volumes of data, with an integrated visualization tool, OpenSearch Dashboards, that makes it easy for users to explore their data.

OpenSearch is powered by the Apache Lucene search library and supports a range of search and analytics capabilities, such as k-nearest neighbors (k-NN) search, SQL, anomaly detection, Machine Learning Commons, Trace Analytics, and full-text search.

