
Guide to Agentic RAG Using LlamaIndex TypeScript

Apr 23, 2025 am 10:21 AM

Imagine having a personal research assistant who not only understands your question but also intelligently decides how to find the answer: diving into your document library for some queries, while generating creative responses for others. This is what an Agentic RAG system built with LlamaIndex TypeScript makes possible.

Whether you are looking to create a literature analysis system, a technical documentation assistant, or any knowledge-intensive application, the approaches outlined in this blog post provide a practical foundation you can build on. This post takes you on a hands-on journey through building such a system with LlamaIndex TypeScript, from setting up local models to implementing specialized tools that work together to deliver remarkably helpful responses.

Learning Objectives

  • Understand the fundamentals of Agentic RAG Using LlamaIndex TypeScript for building intelligent agents.
  • Learn how to set up the development environment and install necessary dependencies.
  • Explore tool creation in LlamaIndex, including addition and division operations.
  • Implement a math agent using LlamaIndex TypeScript for executing queries.
  • Execute and test the agent to process mathematical operations efficiently.

This article was published as a part of the Data Science Blogathon.

Table of contents

  • Why Use TypeScript?
  • Benefits of LlamaIndex
  • Why LlamaIndex TypeScript?
  • What is Agentic RAG?
  • Setting Development Environment
  • A Simple Math Agent
  • Start to Build the RAG Application
  • Implementing Load and Indexing Module
  • Implementing Query Module
  • Implementing App.ts
  • Running the Application
  • Conclusion
  • Frequently Asked Questions

Why Use TypeScript?

TypeScript offers significant advantages for building LLM-based AI applications:

  • Type Safety: TypeScript’s static typing catches errors during development rather than at runtime.
  • Better IDE Support: Auto-completion and intelligent suggestions make development faster.
  • Improved Maintainability: Type definitions make code more readable and self-documenting.
  • Seamless JavaScript Integration: TypeScript works with existing JavaScript libraries.
  • Scalability: TypeScript’s structure helps manage complexity as your RAG application grows.
  • Frameworks: Well-designed, robust web frameworks such as Vite and Next.js connect seamlessly with TypeScript, which makes building AI-based web applications easy and scalable.
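
To illustrate the type-safety point, here is a small sketch (the names are illustrative, not from the article's code) of how a typed parameter object catches a malformed tool argument at compile time rather than at runtime:

```typescript
// Illustrative only: a typed parameter object for a hypothetical tool.
interface AddParams {
  a: number;
  b: number;
}

function addNumbers({ a, b }: AddParams): string {
  return `${a + b}`;
}

// OK: the argument shape matches AddParams.
const ok = addNumbers({ a: 5, b: 7 }); // "12"

// addNumbers({ a: "5", b: 7 });
// ^ TypeScript rejects this at compile time:
//   Type 'string' is not assignable to type 'number'.
```

In plain JavaScript, the commented-out call would silently produce `"57"` via string concatenation; TypeScript refuses to compile it.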

Benefits of LlamaIndex

LlamaIndex provides a powerful framework for building LLM-based AI applications.

  • Simplified Data Ingestion: Easy methods to load and process documents on-device or in the cloud using LlamaParse.
  • Vector Storage: Built-in support for embedding and retrieving semantic information, with integrations for industry-standard databases such as ChromaDB, Milvus, Weaviate, and pgvector.
  • Tool Integration: A framework for creating and managing multiple specialized tools.
  • Agent Plugging: You can build or plug in third-party agents easily with LlamaIndex.
  • Query Engine Flexibility: Customizable query processing for different use cases.
  • Persistence Support: The ability to save and load indexes for efficient reuse.

Why LlamaIndex TypeScript?

LlamaIndex is a popular AI framework for connecting custom data sources to large language models. While originally implemented in Python, LlamaIndex now offers a TypeScript version that brings its powerful capabilities to the JavaScript ecosystem. This is particularly valuable for:

  • Web applications and Node.js services.
  • JavaScript/TypeScript developers who want to stay within their preferred language.
  • Projects that need to run in browser environments.

What is Agentic RAG?

Before diving into implementation, let’s clarify what is meant by Agentic RAG.

  • RAG(Retrieval-Augmented Generation) is a technique that enhances language model outputs by first retrieving relevant information from a knowledge base, and then using that information to generate more accurate, factual responses.
  • Agentic systems involve AI that can decide which actions to take based on user queries, effectively functioning as an intelligent assistant that chooses appropriate tools to fulfill requests.


An Agentic RAG system combines these approaches, creating an AI assistant that can retrieve information from a knowledge base and use other tools when appropriate. Based on the nature of the user’s question, it decides whether to use its built-in knowledge, query the vector database, or call external tools.
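
The retrieve-then-generate half of this idea can be sketched in plain TypeScript, independent of any framework. The toy knowledge base and word-overlap scoring below are stand-ins for the embeddings and vector search a real system uses:

```typescript
// A toy knowledge base; a real RAG system would use embeddings in a vector store.
const knowledgeBase = [
  "Paul Graham co-founded Y Combinator.",
  "RAG retrieves documents before generating an answer.",
];

// Score each document by how many query words appear in it, return the top match.
function retrieve(query: string, topK = 1): string[] {
  const words = query.toLowerCase().split(/\s+/);
  return knowledgeBase
    .map((doc) => ({
      doc,
      score: words.filter((w) => doc.toLowerCase().includes(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((x) => x.doc);
}

// Augment the prompt with retrieved context before it is sent to the LLM.
function buildPrompt(query: string): string {
  const context = retrieve(query).join("\n");
  return `Context:\n${context}\n\nQuestion: ${query}`;
}
```

An agentic system adds one more step in front of this: deciding whether to call `retrieve` at all, or to use a different tool instead.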

Setting Development Environment

Install Node.js on Windows

To install Node.js on Windows, follow these steps:

# Download and install fnm:
winget install Schniz.fnm

# Download and install Node.js:
fnm install 22

# Verify the Node.js version:
node -v # Should print "v22.14.0".

# Verify npm version:
npm -v # Should print "10.9.2".

For other operating systems, follow the official Node.js installation instructions.

A Simple Math Agent

Let’s create a simple math agent to understand the LlamaIndex TypeScript API.

Step 1: Set Up Work Environment

Create a new directory, navigate into it, initialize a Node.js project, and install the dependencies:

$ md simple-agent
$ cd simple-agent
$ npm init
$ npm install llamaindex @llamaindex/ollama 

We will create two tools for the math agent.

  • An addition tool that adds two numbers
  • A division tool that divides one number by another

Step 2: Import Required Modules

Add the following imports to your script:

import { agent, Settings, tool } from "llamaindex";
import { z } from "zod";
import { Ollama, OllamaEmbedding } from "@llamaindex/ollama";

Step 3: Create an Ollama Model Instance

Instantiate the Llama model:

const llama3 = new Ollama({
  model: "llama3.2:1b",
});

Now, using Settings, you can set the Ollama model as the system’s default model, or use a different model directly on the agent.

Settings.llm = llama3;

Step 4: Create Tools for the Math Agent

Add and divide tools

const addNumbers = tool({
  name: "sumNumbers",
  description: "use this function to sum two numbers",
  parameters: z.object({
    a: z.number().describe("The first number"),
    b: z.number().describe("The second number"),
  }),
  execute: ({ a, b }: { a: number; b: number }) => `${a + b}`,
});

Here we create a tool named addNumbers using the LlamaIndex tool API. The tool’s parameter object contains four main fields:

  • name: The name of the tool.
  • description: A description of the tool, which the LLM uses to understand the tool’s capability.
  • parameters: The parameters of the tool, validated here with the Zod library.
  • execute: The function the tool runs when invoked.
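
To see what the Zod schema buys you at runtime, here is a hand-rolled equivalent of the check it performs before `execute` runs (illustrative only; in the real tool, Zod does this validation for you):

```typescript
// A minimal stand-in for what z.object({ a: z.number(), b: z.number() })
// validates before the tool's execute() is called.
function validateParams(input: unknown): { a: number; b: number } {
  const obj = input as Record<string, unknown> | null;
  if (typeof obj?.a !== "number" || typeof obj?.b !== "number") {
    throw new Error("Both 'a' and 'b' must be numbers");
  }
  return { a: obj.a, b: obj.b };
}

const params = validateParams({ a: 5, b: 7 }); // passes
// validateParams({ a: "5", b: 7 });           // would throw at runtime
```

This matters for agents because the LLM produces the tool arguments: validation catches a malformed call before it reaches your logic.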

In the same way, we will create the divideNumbers tool.

const divideNumbers = tool({
  name: "divideNumbers",
  description: "use this function to divide two numbers",
  parameters: z.object({
    a: z.number().describe("The dividend a to divide"),
    b: z.number().describe("The divisor b to divide by"),
  }),
  execute: ({ a, b }: { a: number; b: number }) => `${a / b}`,
});
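
One edge case worth noting: with b = 0, the execute function above returns the string "Infinity", because JavaScript division by zero yields Infinity rather than throwing. A guarded variant (a sketch, not part of the original tool) returns a meaningful message the LLM can relay:

```typescript
// Guarded divide: returns an error message instead of "Infinity"
// so the agent gets a useful tool result for b = 0.
const safeDivide = ({ a, b }: { a: number; b: number }): string =>
  b === 0 ? "Error: division by zero" : `${a / b}`;

safeDivide({ a: 12, b: 4 }); // "3"
safeDivide({ a: 12, b: 0 }); // "Error: division by zero"
```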

Step 5: Create the Math Agent

Now in the main function, we will create a math agent that will use the tools for calculation.

async function main(query: string) {
  const mathAgent = agent({
    tools: [addNumbers, divideNumbers],
    llm: llama3,
    verbose: false,
  });

  const response = await mathAgent.run(query);
  console.log(response.data);
}

// driver code for running the application

const query = "Add two number 5 and 7 and divide by 2";

void main(query).then(() => {
  console.log("Done");
});

If you set your LLM globally through Settings, you don’t have to pass the llm parameter to the agent. If you want to use different models for different agents, you must pass the llm parameter explicitly.

The mathAgent.run call is awaited; it runs the query through the LLM, invoking tools as needed, and returns the response data.

Output

[Screenshot: agent output for the first query]

Second query: “If the total number of boys in a class is 50 and girls is 30, what is the total number of students in the class?”

const query =
  "If the total number of boys in a class is 50 and girls is 30, what is the total number of students in the class?";
void main(query).then(() => {
  console.log("Done");
});

Output

[Screenshot: agent output for the second query]

Wow, our little Llama 3.2 1B model handles agents well and calculates accurately. Now, let’s dive into the main part of the project.

Start to Build the RAG Application

To set up the development environment, follow the instructions below.

Create a folder named agentic-rag-app:

$ md agentic-rag-app
$ cd agentic-rag-app
$ npm init
$ npm install llamaindex @llamaindex/ollama 

Also pull the necessary models from Ollama: llama3.2:1b and nomic-embed-text.

$ ollama pull llama3.2:1b
$ ollama pull nomic-embed-text

In our application, we will have four modules:

  • A load-index module for loading and indexing the text file
  • A query-paul module for querying the Paul Graham essay
  • A constant module for storing reusable constants
  • An app module for running the application

First, create the constants file and the data folder.

Create a constant.ts file in the project root:

const constant = {
  STORAGE_DIR: "./storage",
  DATA_FILE: "data/paul-essay.txt",
};

export default constant;

This object contains constants that are used throughout the application; keeping them in one place like this is a best practice. After that, create a data folder and put the text file in it.
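
A slightly hardened variant (a sketch, not from the original code) freezes the object so the shared constants cannot be mutated anywhere else in the app:

```typescript
// Object.freeze makes the constants read-only at runtime; in strict mode
// (which ES modules use), any attempted reassignment throws a TypeError.
const constant = Object.freeze({
  STORAGE_DIR: "./storage",
  DATA_FILE: "data/paul-essay.txt",
});
```

You can still `export default constant;` exactly as before.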

Data source Link.

Implementing Load and Indexing Module

Let’s see the below diagram to understand the code implementation.

[Diagram: load and indexing module flow]

Now, create a file named load-index.ts in the project root:

Importing Packages

import { Settings, storageContextFromDefaults } from "llamaindex";
import { Ollama, OllamaEmbedding } from "@llamaindex/ollama";
import { Document, VectorStoreIndex } from "llamaindex";
import fs from "fs/promises";
import constant from "./constant";

Creating Ollama Model Instances

const llama3 = new Ollama({
  model: "llama3.2:1b",
});

const nomic = new OllamaEmbedding({
  model: "nomic-embed-text",
});

Setting the System Models

Settings.llm = llama3;
Settings.embedModel = nomic;

Implementing indexAndStorage Function

async function indexAndStorage() {
  try {
    // set up persistent storage
    const storageContext = await storageContextFromDefaults({
      persistDir: constant.STORAGE_DIR,
    });

    // load docs
    const essay = await fs.readFile(constant.DATA_FILE, "utf-8");
    const document = new Document({
      text: essay,
      id_: "essay",
    });

    // create and persist index
    await VectorStoreIndex.fromDocuments([document], {
      storageContext,
    });

    console.log("index and embeddings stored successfully!");
  } catch (error) {
    console.log("Error during indexing: ", error);
  }
}

The above code creates a persistent storage space for the index and embeddings, reads the text data from the project’s data directory, wraps it in a Document using LlamaIndex’s Document class, and finally builds a vector index from that document with the VectorStoreIndex.fromDocuments method.
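
Since indexing is the expensive step, a small helper (an assumption on my part, not part of the original code) can detect whether the persisted index already exists so reruns can skip it:

```typescript
import { access } from "fs/promises";

// Returns true when persistDir already exists on disk, i.e. an index was
// persisted on a previous run and indexAndStorage() can be skipped.
async function hasPersistedIndex(persistDir: string): Promise<boolean> {
  try {
    await access(persistDir);
    return true;
  } catch {
    return false;
  }
}
```

For example, the app could call `hasPersistedIndex(constant.STORAGE_DIR)` and only run `indexAndStorage()` when it returns false.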

Export the function for use in the other file:

export default indexAndStorage;

Implementing Query Module

A diagram for visual understanding

[Diagram: query module flow]

Now, create a file named query-paul.ts in the project root.

Importing Packages

import {
  Settings,
  storageContextFromDefaults,
  VectorStoreIndex,
} from "llamaindex";
import constant from "./constant";
import { Ollama, OllamaEmbedding } from "@llamaindex/ollama";
import { agent } from "llamaindex";

Creating and setting the models is done the same way as above.

Implementing Load and Query

Now implement the loadAndQuery function:

async function loadAndQuery(query: string) {
  try {
    // load the stored index from persistent storage
    const storageContext = await storageContextFromDefaults({
      persistDir: constant.STORAGE_DIR,
    });

    // load the existing index
    const index = await VectorStoreIndex.init({ storageContext });

    // create a retriever and query engine
    const retriever = index.asRetriever();
    const queryEngine = index.asQueryEngine({ retriever });

    const tools = [
      index.queryTool({
        metadata: {
          name: "paul_graham_essay_tool",
          description: `This tool can answer detailed questions about the essay by Paul Graham.`,
        },
      }),
    ];
    const ragAgent = agent({ tools });

    // query the stored embeddings
    const response = await queryEngine.query({ query });
    let toolResponse = await ragAgent.run(query);

    console.log("Response: ", response.message);
    console.log("Tool Response: ", toolResponse);
  } catch (error) {
    console.log("Error during retrieval: ", error);
  }
}

In the above code, we set the storage context from STORAGE_DIR, then use the VectorStoreIndex.init() method to load the already-indexed files from STORAGE_DIR.

After loading, we create a retriever and a query engine from it. Then, as we learned previously, we create a tool that answers questions from the indexed files and add that tool to an agent named ragAgent.

Finally, we query the indexed essay in two ways, once through the query engine and once through the agent, and log both responses to the terminal.
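
The agent’s tool-selection behavior can be illustrated, very roughly, with a keyword router. The real agent uses the LLM and each tool’s description to make this decision; the sketch below (all names are illustrative) only mimics the outcome:

```typescript
// Toy router: picks a tool name the way an agent's LLM would, but with
// simple keyword matching instead of reasoning over tool descriptions.
function pickTool(query: string): string {
  const q = query.toLowerCase();
  if (/\d/.test(q) && /(add|divide|sum|total)/.test(q)) {
    return "math_tool";
  }
  if (q.includes("essay") || q.includes("paul")) {
    return "paul_graham_essay_tool";
  }
  return "llm_direct"; // answer from the model's built-in knowledge
}

pickTool("Add two numbers 5 and 7");              // "math_tool"
pickTool("What does Paul Graham say about work?"); // "paul_graham_essay_tool"
pickTool("What is life?");                         // "llm_direct"
```

This is the essence of the agentic part: routing first, then retrieval or generation.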

Exporting the function:

export default loadAndQuery;

It is time to put all the modules together in a single app file for easy execution.

Implementing App.ts

Create an app.ts file

import indexAndStorage from "./load-index";
import loadAndQuery from "./query-paul";

async function main(query: string) {
  console.log("======================================");
  console.log("Data Indexing....");
  await indexAndStorage();
  console.log("Data Indexing Completed!");
  console.log("Please wait for your response or SUBSCRIBE!");
  await loadAndQuery(query);
}
const query = "What is Life?";
void main(query);

Here, we import both modules and execute them serially; note that both functions are async, so we await each one to make sure indexing finishes before querying starts.

Running the Application

$ npx tsx ./app.ts

When it runs for the first time, three things will happen:

  • It will ask to install tsx; please install it.
  • Embedding the document will take some time, depending on your system (this happens only once).
  • Then it will return the response.

First-time run (your output will look similar):

[Screenshot: first-time run output]

Without the agent, the response will look similar to this (not exact):

[Screenshot: response without the agent]

With agents:

[Screenshot: response with the agent]

That’s all for today. I hope this article helps you learn and understand the Agentic RAG workflow with TypeScript.

Project code repository here.

Conclusion

This is a simple yet functional Agentic RAG system built with LlamaIndex TypeScript. With this article, I wanted to give you a taste of a language besides Python for building Agentic RAG or other LLM-based AI applications. Agentic RAG represents a powerful evolution beyond basic RAG implementations, allowing more intelligent, flexible responses to user queries. Using LlamaIndex with TypeScript, you can build such a system in a type-safe, maintainable way that integrates well with the web application ecosystem.

Key Takeaways

  • LlamaIndex TypeScript provides a robust foundation for building RAG systems.
  • Persistent storage of embeddings improves efficiency for repeated queries.
  • Agentic approaches enable more intelligent tool selection based on query content.
  • Local model execution with Ollama offers privacy and cost advantages.
  • Specialized tools can address different aspects of domain knowledge.
  • Agentic RAG with LlamaIndex TypeScript enhances retrieval-augmented generation by enabling intelligent, dynamic responses.

Frequently Asked Questions

Q1. How can I extend this system to handle multiple document sources?

A. You can modify the indexing function to load documents from multiple files or data sources and pass an array of document objects to the VectorStoreIndex method.
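
A sketch of that extension (the directory name and helper are my assumptions): read every .txt file in a folder into text-plus-id records, each of which would then be wrapped in a llamaindex Document before being passed to VectorStoreIndex.fromDocuments.

```typescript
import { readdir, readFile } from "fs/promises";
import path from "path";

// Read every .txt file in dir into { text, id } records; each record would
// then become `new Document({ text, id_: id })` for indexing.
async function loadTextFiles(
  dir: string
): Promise<{ text: string; id: string }[]> {
  const files = (await readdir(dir)).filter((f) => f.endsWith(".txt"));
  return Promise.all(
    files.map(async (f) => ({
      text: await readFile(path.join(dir, f), "utf-8"),
      id: f,
    }))
  );
}
```

For example, `loadTextFiles("./data")` would return one record per essay file placed in the data folder.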

Q2. Does this approach work with other LLM providers besides Ollama?

A. Yes! LlamaIndex supports various LLM providers including OpenAI, Anthropic, and others. You can replace the Ollama setup with any supported provider.

Q3. How can I improve the quality of responses for domain-specific questions?

A. Consider fine-tuning your embedding model on domain-specific data or implementing custom retrieval strategies that prioritize certain document sections based on your specific use case.

Q4. What is the difference between direct querying and using the agent approach?

A. Direct querying simply retrieves relevant content and generates a response, while the agent approach first decides which tool is most appropriate for the query, potentially combining information from multiple sources or applying specialized processing to different query types.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
