
How to Build a RAG System Using DeepSeek R1?

Mar 07, 2025, 09:39 AM

I have been reading a lot about RAG and AI agents, and with the release of new models like DeepSeek V3 and DeepSeek R1, the prospects for building efficient RAG systems have improved significantly: better retrieval accuracy, enhanced reasoning capabilities, and more scalable architectures for real-world applications. The integration of more sophisticated retrieval mechanisms, richer fine-tuning options, and multi-modal capabilities is changing how AI agents interact with data. This raises the question of whether traditional RAG approaches are still the best way forward, or whether newer architectures can provide more efficient and contextually aware solutions.

Retrieval-augmented generation (RAG) systems have revolutionized the way AI models interact with data by combining retrieval-based and generative approaches to produce more accurate, context-aware responses. With the advent of DeepSeek R1, an open-source model known for its efficiency and cost-effectiveness, building an effective RAG system has become more accessible and practical. In this article, we build a RAG system using DeepSeek R1.

Table of contents

  • What is DeepSeek R1?
  • Benefits of Using DeepSeek R1 for RAG System
  • Steps to Build a RAG System Using DeepSeek R1
  • Code to Build a RAG System Using DeepSeek R1
  • Conclusion

What is DeepSeek R1?

DeepSeek R1 is an open-source AI model developed to provide high-quality reasoning and retrieval capabilities at a fraction of the cost of proprietary models such as OpenAI's offerings. It is released under an MIT license, making it commercially viable and suitable for a wide range of applications. Moreover, this powerful model lets you see its chain of thought (CoT), whereas OpenAI's o1 and o1-mini do not expose any reasoning tokens.

To see how DeepSeek R1 is challenging the OpenAI o1 model, read: DeepSeek R1 vs OpenAI o1: Which One is Faster, Cheaper and Smarter?

Benefits of Using DeepSeek R1 for RAG System

Building a Retrieval-Augmented Generation (RAG) system using DeepSeek-R1 offers several notable advantages:

1. Advanced Reasoning Capabilities: DeepSeek-R1 emulates human-like reasoning by analyzing and processing information step by step before reaching conclusions. This approach enhances the system’s ability to handle complex queries, particularly in areas requiring logical inference, mathematical reasoning, and coding tasks.

2. Open-Source Accessibility: Released under the MIT license, DeepSeek-R1 is fully open-source, allowing developers unrestricted access to its model. This openness facilitates customization, fine-tuning, and integration into various applications without the constraints often associated with proprietary models.

3. Competitive Performance: Benchmark tests indicate that DeepSeek-R1 performs on par with, or even surpasses, leading models like OpenAI's o1 in tasks involving reasoning, mathematics, and coding. This level of performance ensures that a RAG system built with DeepSeek-R1 can deliver high-quality, accurate responses across diverse and challenging queries.

4. Transparency in Thought Process: DeepSeek-R1 employs a “chain-of-thought” methodology, making its reasoning steps visible during inference. This transparency helps debug and refine the system while building user trust by providing clear insights into the decision-making process.

5. Cost-Effectiveness: The open-source nature of DeepSeek-R1 eliminates licensing fees, and its efficient architecture reduces computational resource requirements. These factors contribute to a more cost-effective solution for organizations looking to implement sophisticated RAG systems without incurring significant expenses.

Integrating DeepSeek-R1 into an RAG system provides a potent combination of advanced reasoning abilities, transparency, performance, and cost efficiency, making it a compelling choice for developers and organizations aiming to enhance their AI capabilities.

Steps to Build a RAG System Using DeepSeek R1

The script is a Retrieval-Augmented Generation (RAG) pipeline that:

  • Loads and processes a PDF document by splitting it into pages and extracting text.
  • Stores vectorized representations of the text in a database (ChromaDB).
  • Retrieves relevant content using similarity search when a query is asked.
  • Uses an LLM (DeepSeek model) to generate responses based on the retrieved text.

Install Prerequisites

  • Download Ollama: Click here to download
  • For Linux users: Run the following command in your terminal:
curl -fsSL https://ollama.com/install.sh | sh

After this, pull the DeepSeek R1 1.5B model using:

ollama pull deepseek-r1:1.5b

This will take a moment to download:

ollama pull deepseek-r1:1.5b

pulling manifest
pulling aabd4debf0c8... 100% ▕████████████████▏ 1.1 GB                         
pulling 369ca498f347... 100% ▕████████████████▏  387 B                         
pulling 6e4c38e1172f... 100% ▕████████████████▏ 1.1 KB                         
pulling f4d24e9138dd... 100% ▕████████████████▏  148 B                         
pulling a85fe2a2e58e... 100% ▕████████████████▏  487 B                         
verifying sha256 digest 
writing manifest 
success 

After doing this, open your Jupyter Notebook and start with the coding part:

1. Install Dependencies

Before running, the script installs the required Python libraries:

  • langchain → A framework for building applications using Large Language Models (LLMs).
  • langchain-openai → Provides integration with OpenAI services.
  • langchain-community → Adds support for various document loaders and utilities.
  • langchain-chroma → Enables integration with ChromaDB, a vector database.

2. Enter OpenAI API Key

To access OpenAI’s embedding model, the script prompts the user to securely enter their API key using getpass(). This prevents exposing credentials in plain text.

3. Set Up Environment Variables

The script stores the API key as an environment variable. This allows other parts of the code to access OpenAI services without hardcoding credentials, which improves security.

4. Initialize OpenAI Embeddings

The script initializes an OpenAI embedding model called "text-embedding-3-small". This model converts text into vector embeddings, which are high-dimensional numerical representations of the text’s meaning. These embeddings are later used to compare and retrieve similar content.

5. Load and Split a PDF Document

A PDF file (AgenticAI.pdf) is loaded and split into pages. Each page's text is extracted, which allows for smaller, more manageable text chunks instead of processing the entire document as a single unit.

6. Create and Store a Vector Database

  • The extracted text from the PDF is converted into vector embeddings.
  • These embeddings are stored in ChromaDB, a high-performance vector database.
  • The database uses cosine similarity, ensuring efficient retrieval of text with a high degree of semantic similarity (see the short illustration after this list).
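As a quick illustration of what cosine-similarity retrieval relies on (this snippet is not part of the original script, and the example sentences are made up), you can embed two strings with the same model and compute their cosine similarity directly:

import numpy as np
from langchain_openai import OpenAIEmbeddings

embed_model = OpenAIEmbeddings(model='text-embedding-3-small')

a = np.array(embed_model.embed_query("What is Agentic AI?"))
b = np.array(embed_model.embed_query("Agentic AI systems plan and act autonomously toward goals."))

# Cosine similarity = dot product divided by the product of the vector norms (closer to 1.0 = more similar)
cosine_similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine_similarity)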

7. Retrieve Similar Texts Using a Similarity Threshold

A retriever is created using ChromaDB, which:

  • Searches for the top 3 most similar documents based on a given query.
  • Filters results based on a similarity threshold of 0.3, meaning documents must have at least 30% similarity to qualify as relevant.

8. Query for Similar Documents

Two test queries are used:

  1. "What is the old capital of India?"
    • No results were found, which indicates that the stored documents do not contain relevant information.
  2. "What is Agentic AI?"
    • Successfully retrieves relevant text, demonstrating that the system can fetch meaningful context.

9. Build a RAG (Retrieval-Augmented Generation) Chain

The script sets up a RAG pipeline, which ensures that:

  • Text retrieval happens before generating an answer.
  • The model’s response is based strictly on retrieved content, preventing hallucinations.
  • A prompt template is used to instruct the model to generate structured responses.

10. Load a Connection to an LLM (DeepSeek Model)

Instead of OpenAI's GPT models, the script loads DeepSeek-R1 (1.5B parameters), a compact open-source LLM with strong reasoning capabilities, served locally through Ollama.

11. Create a RAG-Based Chain

LangChain’s Retrieval module is used to:

  • Fetch relevant content from the vector database.
  • Format a structured response using a prompt template.
  • Generate a concise answer with the DeepSeek model.

12. Test the RAG Chain

The script runs a test query:
"Tell the Leaders’ Perspectives on Agentic AI"

The system retrieves the relevant information from the database, and the LLM generates a fact-based response strictly grounded in that retrieved context.

Code to Build a RAG System Using DeepSeek R1

Here’s the code:

Install OpenAI and LangChain dependencies

!pip install langchain==0.3.11
!pip install langchain-openai==0.2.12
!pip install langchain-community==0.3.11
!pip install langchain-chroma==0.1.4

Enter Open AI API Key

from getpass import getpass
OPENAI_KEY = getpass('Enter Open AI API Key: ')

Setup Environment Variables

import os
os.environ['OPENAI_API_KEY'] = OPENAI_KEY

Open AI Embedding Models

from langchain_openai import OpenAIEmbeddings
openai_embed_model = OpenAIEmbeddings(model='text-embedding-3-small')

Create a Vector DB and persist on the disk

from langchain_community.document_loaders import PyPDFLoader
loader = PyPDFLoader('AgenticAI.pdf')
pages = loader.load_and_split()
texts = [doc.page_content for doc in pages]

from langchain_chroma import Chroma
chroma_db = Chroma.from_texts(
    texts=texts,
    collection_name='db_docs',
    collection_metadata={"hnsw:space": "cosine"},  # Set distance function to cosine
    embedding=openai_embed_model
)

Similarity with Threshold Retrieval

similarity_threshold_retriever = chroma_db.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 3, "score_threshold": 0.3}
)

query = "what is the old capital of India?"
top3_docs = similarity_threshold_retriever.invoke(query)
top3_docs

Output: the retriever returns an empty list, because nothing in the stored document clears the 0.3 threshold for this query:

[]

query = "What is Agentic AI?"
top3_docs = similarity_threshold_retriever.invoke(query)
top3_docs

This query successfully retrieves relevant chunks from the PDF.


Build a RAG Chain

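The prompt-construction code for this step did not survive in this listing, so below is a minimal sketch, assuming a ChatPromptTemplate that forces the model to answer only from the retrieved context; the exact wording of the author's prompt may differ:

from langchain_core.prompts import ChatPromptTemplate

# Prompt that restricts the model to the retrieved context (illustrative wording)
rag_prompt = """You are an assistant for question-answering tasks.
Use only the following pieces of retrieved context to answer the question.
If no context is present or if you don't know the answer, just say that you don't know.
Do not make up an answer. Keep the answer concise and to the point.

Question:
{question}

Context:
{context}

Answer:
"""

prompt_template = ChatPromptTemplate.from_template(rag_prompt)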

Load Connection to LLM

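The LLM-connection code is also missing from this listing. Since deepseek-r1:1.5b was pulled through Ollama earlier, a minimal sketch is to point LangChain's ChatOllama wrapper at the locally served model (the wrapper choice and variable name are assumptions):

from langchain_community.chat_models import ChatOllama

# Connect to the locally running Ollama server and use the pulled DeepSeek R1 1.5B model
deepseek = ChatOllama(model="deepseek-r1:1.5b")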

LangChain Syntax for RAG Chain

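The chain definition itself is likewise missing here, so the following is a minimal LCEL sketch that wires the retriever, the prompt template, and the DeepSeek model together; the format_docs helper and variable names are illustrative:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

def format_docs(docs):
    # Merge the retrieved document chunks into a single context string
    return "\n\n".join(doc.page_content for doc in docs)

qa_rag_chain = (
    {
        "context": similarity_threshold_retriever | format_docs,
        "question": RunnablePassthrough(),
    }
    | prompt_template
    | deepseek
    | StrOutputParser()
)

query = "Tell the Leaders' Perspectives on Agentic AI"
print(qa_rag_chain.invoke(query))

Because DeepSeek R1 exposes its chain of thought, the output from the Ollama-served model may begin with a <think>...</think> block before the final, context-grounded answer.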


Check out our detailed articles on how DeepSeek works and how it compares with similar models:

  • DeepSeek R1- OpenAI’s o1 Biggest Competitor is HERE!
  • Building AI Application with DeepSeek-V3
  • DeepSeek-V3 vs GPT-4o vs Llama 3.3 70B
  • DeepSeek V3 vs GPT-4o: Which is Better?
  • DeepSeek R1 vs OpenAI o1: Which One is Better?
  • How to Access DeepSeek Janus Pro 7B?

Conclusion

Building a RAG system using DeepSeek R1 provides a cost-effective and powerful way to enhance document retrieval and response generation. With its open-source nature and strong reasoning capabilities, it is a great alternative to proprietary solutions. Businesses and developers can leverage its flexibility to create AI-driven applications tailored to their needs.

Want to build applications using DeepSeek? Check out our Free DeepSeek Course today!
