
Concurrent Query Resolution System Using CrewAI

Mar 03, 2025, 07:00 PM

In the era of artificial intelligence, businesses are constantly seeking innovative ways to enhance customer support services. One such approach is leveraging AI agents that work collaboratively to resolve customer queries efficiently. This article explores the implementation of a Concurrent Query Resolution System using CrewAI, OpenAI’s GPT models, and Google Gemini. This system employs multiple specialized agents that operate in parallel to handle customer queries seamlessly, reducing response time and improving accuracy.

Learning Objectives

  • Understand how AI agents can efficiently handle customer queries by automating responses and summarizing key information.
  • Learn how CrewAI enables multi-agent collaboration to improve customer support workflows.
  • Explore different types of AI agents, such as query resolvers and summarizers, and their roles in customer service automation.
  • Implement concurrent query processing using Python’s asyncio to enhance response efficiency.
  • Optimize customer support systems by integrating AI-driven automation for improved accuracy and scalability.

This article was published as part of the Data Science Blogathon.

Table of Contents

  • How Do AI Agents Work Together?
  • Implementation of Concurrent Query Resolution System
    • Step 1: Setting the API Key
    • Step 2: Importing Required Libraries
    • Step 3: Initializing LLMs
    • Step 4: Defining AI Agents
    • Step 5: Defining Tasks
    • Step 6: Executing a Query with AI Agents
    • Step 7: Handling Multiple Queries Concurrently
    • Step 8: Defining Example Queries
    • Step 9: Setting Up the Event Loop
    • Step 10: Handling Event Loops in Jupyter Notebook/Google Colab
    • Step 11: Executing Queries and Printing Results
  • Advantages of Concurrent Query Resolution System
  • Applications of Concurrent Query Resolution System
  • Conclusion
  • Key Takeaways
  • Frequently Asked Questions

How Do AI Agents Work Together?

The Concurrent Query Resolution System uses a multi-agent framework, assigning each agent a specific role. The system utilizes CrewAI, a framework that enables AI agents to collaborate effectively.

The primary components of the system include:

  • Query Resolution Agent: Responsible for understanding customer queries and providing accurate responses.
  • Summary Agent: Summarizes the resolution process for quick review and future reference.
  • LLMs (Large Language Models): Includes models like GPT-4o and Gemini, each with different configurations to balance speed and accuracy.
  • Task Management: Assigning tasks dynamically to agents to ensure concurrent query processing.

Implementation of Concurrent Query Resolution System

To transform the AI agent framework from concept to reality, a structured implementation approach is essential. Below, we outline the key steps involved in setting up and integrating AI agents for effective query resolution.

Step 1: Setting the API Key

The OpenAI API key is stored as an environment variable using the os module. This allows the system to authenticate API requests securely without hardcoding sensitive credentials.

import os 

# Set the API key as an environment variable
os.environ["OPENAI_API_KEY"] = ""

The system uses the os module to interact with the operating system.

The system sets the OPENAI_API_KEY as an environment variable, allowing it to authenticate requests to OpenAI’s API.
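To see the pattern in isolation, here is a minimal, self-contained sketch (not from the original article; require_api_key is a hypothetical helper) that reads the key back and fails fast when it is missing, rather than letting the first API call error out later:

```python
import os

# Hypothetical helper: verify the key is present before running any agents.
def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it before running the agents.")
    return key

os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # placeholder value for illustration
print(require_api_key())
```

In a real deployment the key would be exported in the shell or a secrets manager, never hardcoded as above.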

Step 2: Importing Required Libraries

Necessary libraries are imported, including asyncio for handling asynchronous operations and crewai components like Agent, Crew, Task, and LLM. These are essential for defining and managing AI agents.

import asyncio
from crewai import Agent, Crew, Task, LLM, Process
import google.generativeai as genai
  • asyncio: Python’s built-in module for asynchronous programming, enabling concurrent execution.
  • Agent: Represents an AI worker with specific responsibilities.
  • Crew: Manages multiple agents and their interactions.
  • Task: Defines what each agent is supposed to do.
  • LLM: Specifies the large language model used.
  • Process: It defines how tasks execute, whether sequentially or in parallel.
  • google.generativeai: Library for working with Google’s generative AI models (not used in this snippet, but likely included for future expansion).

Step 3: Initializing LLMs

Three LLM instances are initialized (two using GPT-4o and one using GPT-4), each with a different temperature setting. The temperature controls response creativity, ensuring a balance between accuracy and flexibility in AI-generated answers.

# Initialize the LLMs with different temperature settings
llm_1 = LLM(
    model="gpt-4o",
    temperature=0.7)
llm_2 = LLM(
    model="gpt-4",
    temperature=0.2)
llm_3 = LLM(
    model="gpt-4o",
    temperature=0.3)

The system creates three LLM instances, each with a different configuration.

Parameters:

  • model: Specifies which OpenAI model to use (gpt-4o or gpt-4).
  • temperature: Controls randomness in responses (0 = deterministic, 1 = more creative).

These different models and temperatures help balance accuracy and creativity.

Step 4: Defining AI Agents

Each agent has a specific role and predefined goals. Two AI agents are created:

  • Query Resolver: Handles customer inquiries and provides detailed responses.
  • Summary Generator: Summarizes the resolutions for quick reference.

Each agent has a defined role, goal, and backstory to guide its interactions.

Query Resolution Agent

query_resolution_agent = Agent(
    role="Query Resolver",
    goal="Provide accurate and helpful solutions to customer queries",
    backstory="You are a professional customer service assistant who "
              "resolves user issues efficiently and courteously.",
    llm=llm_1,
    verbose=True
)

Let’s see what’s happening in this code block:

  • Agent Creation: The query_resolution_agent is an AI-powered assistant responsible for resolving customer queries.
  • Model Selection: It uses llm_1, configured as GPT-4o with a temperature of 0.7. This balance allows for creative yet accurate responses.
  • Role: The system designates the agent as a Query Resolver.
  • Backstory: The developers program the agent to act as a professional customer service assistant, ensuring efficient and professional responses.
  • Goal: To provide accurate solutions to user queries.
  • Verbose Mode: verbose=True ensures detailed logs, helping developers debug and track its performance.

Summary Agent

summary_agent = Agent(
    role="Summary Generator",
    goal="Provide a clear and concise summary of how each customer query was resolved",
    backstory="You condense query resolutions into short summaries for quick reference.",
    llm=llm_2,
    verbose=True
)

What Happens Here?

  • Agent Creation: The summary_agent is designed to summarize query resolutions.
  • Model Selection: Uses llm_2 (GPT-4) with a temperature of 0.2, making its responses more deterministic and precise.
  • Role: This agent acts as a Summary Generator.
  • Backstory: It summarizes query resolutions concisely for quick reference.
  • Goal: It provides a clear and concise summary of how customer queries were resolved.
  • Verbose Mode: verbose=True ensures that debugging information is available if needed.

Step 5: Defining Tasks

This section defines the tasks assigned to the AI agents. The system assigns them dynamically, so queries can be processed in parallel.

resolution_task = Task(
    description="Analyze and resolve the following customer query: {query}",
    expected_output="A detailed response that addresses the customer's issue.",
    agent=query_resolution_agent
)

summary_task = Task(
    description="Summarize how the customer query was resolved: {query}",
    expected_output="A concise summary of the resolution process.",
    agent=summary_agent
)

What Happens Here?

Defining Tasks:

  • resolution_task: This task instructs the Query Resolver Agent to analyze and resolve customer queries.
  • summary_task: This task instructs the Summary Agent to generate a brief summary of the resolution process.

Dynamic Query Handling:

  • The system replaces {query} with an actual customer query when executing the task.
  • This allows the system to handle any customer query dynamically.

Expected Output:

  • The resolution_task expects a detailed response to the query.
  • The summary_task generates a concise summary of the query resolution.

Agent Assignment:

  • The query_resolution_agent is assigned to handle resolution tasks.
  • The summary_agent is assigned to handle summarization tasks.

Why This Matters

  • Task Specialization: Each AI agent has a specific job, ensuring efficiency and clarity.
  • Scalability: You can add more tasks and agents to handle different types of customer support interactions.
  • Parallel Processing: Tasks can be executed concurrently, reducing customer wait times.

Step 6: Executing a Query with AI Agents

An asynchronous function is created to process a query. The Crew class organizes agents and tasks, executing them sequentially to ensure proper query resolution and summarization.

async def execute_query(query: str):
    crew = Crew(
        agents=[query_resolution_agent, summary_agent],
        tasks=[resolution_task, summary_task],
        process=Process.sequential,
        verbose=True
    )
    # kickoff_async runs the crew without blocking the event loop;
    # inputs fills the {query} placeholder in the task descriptions
    result = await crew.kickoff_async(inputs={"query": query})
    return result

This function defines an asynchronous process to execute a query. It creates a Crew instance, which includes:

  • agents: The AI agents involved in the process (Query Resolver and Summary Generator).
  • tasks: Tasks assigned to the agents (query resolution and summarization).
  • process=Process.sequential: Ensures tasks are executed in sequence.
  • verbose=True: Enables detailed logging for better tracking.

The function uses await to execute the AI agents asynchronously and returns the result.

Step 7: Handling Multiple Queries Concurrently

Using asyncio.gather(), multiple queries can be processed simultaneously. This reduces response time by allowing AI agents to handle different customer issues in parallel.

async def handle_two_queries(query_1: str, query_2: str):
    # Launch both crews at once; gather returns results in call order
    results = await asyncio.gather(
        execute_query(query_1),
        execute_query(query_2)
    )
    return results

This function executes two queries concurrently. asyncio.gather() processes both queries simultaneously, significantly reducing response time, and the function returns the results of both queries once execution is complete.
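The concurrency pattern can be seen in isolation with plain asyncio. In this sketch, fake_crew and handle_two are hypothetical stand-ins for illustration; the real system awaits Crew executions instead of asyncio.sleep:

```python
import asyncio

# Stand-in coroutine simulating one crew resolving one query
async def fake_crew(query: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return f"Resolved: {query}"

async def handle_two(q1: str, q2: str):
    # Both coroutines start immediately; total wall time is roughly
    # the slower of the two, not the sum
    return await asyncio.gather(fake_crew(q1, 0.2), fake_crew(q2, 0.1))

results = asyncio.run(handle_two("login issue", "payment error"))
print(results)  # → ['Resolved: login issue', 'Resolved: payment error']
```

Note that gather() preserves the order of its arguments in the returned list, even when the second task finishes first.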

Step 8: Defining Example Queries

Developers define sample queries to test the system, covering common customer support issues like login failures and payment processing errors.

query_1 = "I am unable to log in to my account. I keep getting an 'invalid credentials' error."
query_2 = "My payment failed during checkout, but the amount was still deducted from my card."

These are sample queries to test the system.

Query 1 deals with login issues, while Query 2 relates to payment gateway errors.

Step 9: Setting Up the Event Loop

The system initializes an event loop to handle asynchronous operations. If it doesn’t find an existing loop, it creates a new one to manage AI task execution.

try:
    loop = asyncio.get_event_loop()
except RuntimeError:
    # No event loop exists in this thread yet; create and register one
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

This section ensures that an event loop is available for running asynchronous tasks.

If the system detects no event loop (RuntimeError occurs), it creates a new one and sets it as the active loop.
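The fallback described above can be packaged as a small, self-contained helper (get_or_create_loop is a hypothetical name, not part of CrewAI):

```python
import asyncio

def get_or_create_loop() -> asyncio.AbstractEventLoop:
    # get_event_loop() raises RuntimeError in threads without a loop
    # (and is deprecated for this use on newer Python); fall back to a new one
    try:
        return asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop

loop = get_or_create_loop()
print(loop.run_until_complete(asyncio.sleep(0, result="loop works")))
```

On recent Python versions, asyncio.run() is usually preferred in scripts; the manual loop management above matters mainly in environments that already own a loop, such as notebooks.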

Step 10: Handling Event Loops in Jupyter Notebook/Google Colab

Since Jupyter and Colab have pre-existing event loops, nest_asyncio.apply() is used to prevent conflicts, ensuring smooth execution of asynchronous queries.

import nest_asyncio

# Allow run_until_complete() inside the notebook's already-running loop
nest_asyncio.apply()

Jupyter Notebooks and Google Colab have pre-existing event loops, which can cause errors when running async functions.

nest_asyncio.apply() allows nested event loops, resolving compatibility issues.
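The underlying conflict is easy to reproduce with plain asyncio: calling asyncio.run() while a loop is already running raises a RuntimeError, which is exactly what nest_asyncio patches around. A minimal demonstration (outer is a hypothetical coroutine for illustration):

```python
import asyncio

async def outer() -> str:
    # Inside a running loop, asyncio.run() refuses to start a second loop
    try:
        asyncio.run(asyncio.sleep(0))
    except RuntimeError as exc:
        return str(exc)
    return "no error"

msg = asyncio.run(outer())
print(msg)  # reports that asyncio.run() cannot be called from a running event loop
```

Jupyter and Colab cells effectively execute inside such a running loop, which is why nest_asyncio.apply() (or awaiting the coroutine directly in the cell) is needed there.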

Step 11: Executing Queries and Printing Results

The event loop runs handle_two_queries() to process queries concurrently. The system prints the final AI-generated responses, displaying query resolutions and summaries.

results = loop.run_until_complete(handle_two_queries(query_1, query_2))

for i, result in enumerate(results, start=1):
    print(f"Query {i} Resolution:\n{result}\n")

loop.run_until_complete() starts the execution of handle_two_queries(), which processes both queries concurrently.

The system prints the results, displaying the AI-generated resolutions for each query.


Advantages of Concurrent Query Resolution System

Below, we will see how the Concurrent Query Resolution System enhances efficiency by processing multiple queries simultaneously, leading to faster response times and improved user experience.

  • Faster Response Time: Parallel execution resolves multiple queries simultaneously.
  • Improved Accuracy: Leveraging multiple LLMs ensures a balance between creativity and factual correctness.
  • Scalability: The system can handle a high volume of queries without human intervention.
  • Better Customer Experience: Automated summaries provide a quick overview of query resolutions.

Applications of Concurrent Query Resolution System

We will now explore the various applications of the Concurrent Query Resolution System, including customer support automation, real-time query handling in chatbots, and efficient processing of large-scale service requests.

  • Customer Support Automation: Enables AI-driven chatbots to resolve multiple customer queries simultaneously, reducing response time.
  • Real-Time Query Processing: Enhances live support systems by handling numerous queries in parallel, improving efficiency.
  • E-commerce Assistance: Streamlines product inquiries, order tracking, and payment issue resolutions in online shopping platforms.
  • IT Helpdesk Management: Supports IT service desks by diagnosing and resolving multiple technical issues concurrently.
  • Healthcare & Telemedicine: Assists in managing patient inquiries, appointment scheduling, and medical advice simultaneously.

Conclusion

The Concurrent Query Resolution System demonstrates how AI-driven multi-agent collaboration can revolutionize customer support. By leveraging CrewAI, OpenAI’s GPT models, and Google Gemini, businesses can automate query handling, improving efficiency and user satisfaction. This approach paves the way for more advanced AI-driven service solutions in the future.

Key Takeaways

  • AI agents streamline customer support, reducing response times.
  • CrewAI enables specialized agents to work together effectively.
  • Using asyncio, multiple queries are handled concurrently.
  • Different LLM configurations balance accuracy and creativity.
  • The system can manage high query volumes without human intervention.
  • Automated summaries provide quick, clear query resolutions.

Frequently Asked Questions

Q1. What is CrewAI?

A. CrewAI is a framework that allows multiple AI agents to work collaboratively on complex tasks. It enables task management, role specialization, and seamless coordination among agents.

Q2. How does CrewAI work?

A. CrewAI defines agents with specific roles, assigns tasks dynamically, and processes them either sequentially or concurrently. It leverages AI models like OpenAI’s GPT and Google Gemini to execute tasks efficiently.

Q3. How does CrewAI handle multiple queries simultaneously?

A. CrewAI uses Python’s asyncio.gather() to run multiple tasks concurrently, ensuring faster query resolution without performance bottlenecks.

Q4. Can CrewAI integrate with different LLMs?

A. Yes, CrewAI supports various large language models (LLMs), including OpenAI’s GPT-4, GPT-4o, and Google’s Gemini, allowing users to choose based on speed and accuracy requirements.

Q5. How does CrewAI ensure task accuracy?

A. By using different AI models with varied temperature settings, CrewAI balances creativity and factual correctness, ensuring reliable responses.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
