


Amazon Cloud Innovation 'Neural Sparse Retrieval': Only text matching is needed to achieve semantic search

The AIxiv column is a column where academic and technical content is published on this site. In the past few years, the AIxiv column of this site has received more than 2,000 reports, covering top laboratories from major universities and companies around the world, effectively promoting academic exchanges and dissemination. If you have excellent work that you want to share, please feel free to contribute or contact us for reporting. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
The authors of this article are Dr. Yang Yang, machine learning lead, and machine learning engineers Geng Zhichao and Guan Cong, all from the OpenSearch China R&D team. OpenSearch is a purely open-source search and real-time analytics engine project initiated by Amazon Cloud Technology. The software currently has over 500 million downloads, and the community has more than 70 corporate partners around the world.
Neural Sparse is designed around three practical concerns:

- Stability of relevance across queries: zero-shot semantic retrieval requires the semantic encoding model to deliver good relevance on datasets with different backgrounds; that is, the language model must work out of the box, without the user fine-tuning it on their own dataset. Because sparse encodings and term vectors are homologous, Neural Sparse can degrade to text matching when it encounters unfamiliar expressions (industry-specific terms, abbreviations, etc.), thereby avoiding wildly wrong search results.
- Online search latency: low latency is obviously essential for real-time search applications. Currently popular semantic retrieval methods generally involve two steps, semantic encoding and indexing, and the speed of both determines the end-to-end efficiency of a retrieval application. Neural Sparse's unique doc-only mode requires no online encoding, achieving semantic retrieval relevance comparable to first-class language models at a latency close to plain text matching.
- Index storage consumption: commercial retrieval applications are very sensitive to storage consumption. When indexing massive amounts of data, a search engine's running cost is strongly tied to storage use. In the related experiments, Neural Sparse needed only about 1/10 of the storage of a k-NN index for the same amount of data, and its memory consumption is also much smaller than a k-NN index's.
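To illustrate why sparse encodings can fall back to text matching: both an encoder's output and an ordinary term vector are just (token, weight) maps, scored with the same inverted-index dot product. The following is a minimal sketch with made-up documents and weights, not OpenSearch's actual implementation:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each token to the list of (doc_id, weight) pairs that contain it."""
    index = defaultdict(list)
    for doc_id, token_weights in docs.items():
        for token, weight in token_weights.items():
            index[token].append((doc_id, weight))
    return index

def sparse_search(index, query):
    """Score documents by the dot product of query and document token weights."""
    scores = defaultdict(float)
    for token, q_weight in query.items():
        for doc_id, d_weight in index.get(token, []):
            scores[doc_id] += q_weight * d_weight
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy sparse encodings: token -> weight maps, as a sparse encoder might emit.
docs = {
    "d1": {"semantic": 1.2, "search": 0.9, "engine": 0.4},
    "d2": {"keyword": 1.0, "search": 0.7},
}
query = {"semantic": 1.1, "search": 0.5}
print(sparse_search(build_inverted_index(docs), query))
```

If the encoder emits only surface tokens for an unfamiliar query, this scoring degenerates to exact term matching, which is the fallback behavior described above.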
Two-phase search speed comparison

OpenSearch currently has three open-source models; their registration information is available in the official documentation. We take amazon/neural-sparse/opensearch-neural-sparse-encoding-v1 as an example and register it with the register API as shown below.
Comparison between sparse encoding and dense encoding
According to the BEIR paper, since most current dense encoding models are fine-tuned on the MS MARCO dataset, they perform very well on that dataset. However, in zero-shot tests on the other BEIR datasets, dense encoding models fail to beat BM25 on roughly 60% to 70% of the datasets. This can also be seen in our own replicated comparison experiments (see table below).
In experiments on the BEIR benchmark, both Neural Sparse methods achieve higher relevance scores than the dense encoding model and BM25.
As mentioned above, sparse encoding converts text into a set of tokens and weights, and this transformation produces a large number of low-weight tokens. Although these tokens consume most of the time in the search process, their contribution to the final search results is small.
Therefore, we propose a new search strategy: the first pass filters out the low-weight tokens and relies only on the high-weight tokens to locate the top-ranking documents. The previously filtered low-weight tokens are then reintroduced on those selected documents for a second, detailed scoring pass that produces the final score.
This approach reduces latency in two places. First, in the first-pass search, only high-weight tokens are matched in the inverted index, eliminating a large amount of unnecessary computation. Second, when rescoring within the small, precise set of candidate documents, low-weight token scores are computed only for potentially relevant documents, further cutting processing time.
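The two-phase strategy above can be sketched as follows. This is a simplified illustration with a hypothetical weight-ratio cutoff and toy data, not the plugin's actual code:

```python
def score(doc, query_tokens):
    """Dot product over a chosen subset of query tokens."""
    return sum(w * doc.get(t, 0.0) for t, w in query_tokens.items())

def two_phase_search(docs, query, ratio=0.4, rescore_size=2):
    """Phase 1: rank all docs using only high-weight query tokens.
    Phase 2: rescore the phase-1 top candidates with the full query."""
    max_w = max(query.values())
    high = {t: w for t, w in query.items() if w >= ratio * max_w}
    phase1 = sorted(docs, key=lambda d: -score(docs[d], high))[:rescore_size]
    return sorted(phase1, key=lambda d: -score(docs[d], query))

docs = {
    "d1": {"neural": 1.0, "sparse": 0.8, "the": 0.05},
    "d2": {"neural": 0.9, "retrieval": 0.6, "the": 0.2},
    "d3": {"the": 0.3, "a": 0.1},
}
query = {"neural": 1.2, "sparse": 0.9, "the": 0.1}  # "the" is a low-weight token
print(two_phase_search(docs, query))
```

Phase 1 never touches the postings of the low-weight token "the", so documents like d3 that match only low-weight tokens are skipped entirely; the full dot product is computed only for the small candidate set.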
First set the cluster configuration so that the model can run on the local cluster.

PUT /_cluster/settings
{
  "transient": {
    "plugins.ml_commons.allow_registering_model_via_url": true,
    "plugins.ml_commons.only_run_on_ml_node": false,
    "plugins.ml_commons.native_memory_threshold": 99
  }
}
POST /_plugins/_ml/models/_register?deploy=true
{
  "name": "amazon/neural-sparse/opensearch-neural-sparse-encoding-v1",
  "version": "1.0.1",
  "model_format": "TORCH_SCRIPT"
}
{
  "task_id": "<task_id>",
  "status": "CREATED"
}
GET /_plugins/_ml/tasks/<task_id>
{
  "model_id": "<model_id>",
  "task_type": "REGISTER_MODEL",
  "function_name": "SPARSE_TOKENIZE",
  "state": "COMPLETED",
  "worker_node": ["wubXZX7xTIC7RW2z8nzhzw"],
  "create_time": 1701390988405,
  "last_update_time": 1701390993724,
  "is_async": true
}
PUT /_ingest/pipeline/neural-sparse-pipeline
{
  "description": "An example neural sparse encoding pipeline",
  "processors": [
    {
      "sparse_encoding": {
        "model_id": "<model_id>",
        "field_map": {
          "passage_text": "passage_embedding"
        }
      }
    }
  ]
}
A two-phase accelerated search pipeline with default parameters is created as follows. For more detailed parameter settings and their meanings, refer to the official OpenSearch documentation for versions 2.15 and later.
PUT /_search/pipeline/two_phase_search_pipeline
{
  "request_processors": [
    {
      "neural_sparse_two_phase_processor": {
        "tag": "neural-sparse",
        "description": "Creates the two-phase processor."
      }
    }
  ]
}
Neural sparse search uses the rank_features field type to store the encoded tokens and their corresponding weights. The index will use the ingest pipeline defined above to encode text. We can create an index that includes the two-phase search acceleration pipeline as follows (to leave this feature off, replace `two_phase_search_pipeline` with `_none` or remove the `settings.search` block):

PUT /my-neural-sparse-index
{
  "settings": {
    "ingest": {
      "default_pipeline": "neural-sparse-pipeline"
    },
    "search": {
      "default_pipeline": "two_phase_search_pipeline"
    }
  },
  "mappings": {
    "properties": {
      "passage_embedding": {
        "type": "rank_features"
      },
      "passage_text": {
        "type": "text"
      }
    }
  }
}
PUT /my-neural-sparse-index/_doc/1
{
  "passage_text": "Hello world"
}
The API for sparse semantic search on the index is as follows:

GET /my-neural-sparse-index/_search
{
  "query": {
    "neural_sparse": {
      "passage_embedding": {
        "query_text": "Hi world",
        "model_id": "<model_id>"
      }
    }
  }
}