Table of contents
- From Single-Step to Multi-Tool: Addressing Complex Queries
- Introducing Multi-Tool Agentic RAG: Orchestrated Information Retrieval
- Is "Multi-Step" the Right Term?
- Architecture of a Multi-Tool Agentic RAG System
- Hands-on with Multi-Tool Agentic RAG for Healthcare using Azure OpenAI
- Bing Search API vs. Bing Grounding Tool: A Key Improvement
- Prerequisites
- Step 1: Setup and Imports
- Step 2: Configure Azure Services
- Step 3: Define Tool Functions (SQL DB, Vector Search, Web Search)
- 3.1 Unstructured Data Tool: ACC Guidelines Search via Azure AI Search
- 3.2 Web Data Tool: Bing Web Grounding via Azure AI Agent Service
- 3.3 Structured Data Tool: Patient Data Query via Azure SQL
- Step 4: Define Tools for OpenAI Function Calling
- Step 5: Define System Prompt for the Agent
- Step 6: Implement the Multi-Step Agent
- Step 7: Testing the Multi-Tool Agentic RAG
- Example 1: ACC Guidelines for Hypertension Therapy
- Example 2: Recent Updates on Anticoagulant Therapies
- Example 3: Requesting Information from Patient Database
- Example 4: A Real-World Medical Doctor Task
- Conclusion: Orchestrating Knowledge for Smarter Agents
Welcome back to my "Mastering Agentic RAG" series! In Part 1: Single-Step Routing, we explored how to build AI agents that intelligently route queries to a single best-suited knowledge source using Azure OpenAI and Azure AI Search. We built a foundational Agentic RAG Router (Single-Step), enabling agents to choose between private knowledge (Azure AI Search) and public web data (Bing Search).
But what happens when a single tool isn't enough? Real-world questions are often complex, requiring information from multiple sources and different types of knowledge. This is where Multi-Tool Agentic RAG architectures become essential.
In Part 2, we're leveling up! We'll dive into how to build AI agents that can orchestrate multiple tools to answer complex queries requiring diverse information. We'll move beyond the single-step approach and explore how to design agents that can intelligently leverage a combination of tools, like databases, private knowledge bases, and the vast expanse of the web, all within a single query interaction.
We'll use a compelling real-world example in healthcare, showcasing how a real medical doctor can leverage a Multi-Tool Agentic RAG system to access personalized patient data, medical guidelines, and up-to-date medical facts to provide more informed and effective patient care. And, of course, we'll continue to leverage the power of Azure OpenAI function calling and Azure AI services to bring this architecture to life, with a notable upgrade from the Bing Search API to the more robust Bing Grounding Tool.
💡 If you want to learn more about the initial proof-of-concept of this agent use case, check out the original partnership blog, Building AI Agents for Healthcare Decisions.
From Single-Step to Multi-Tool: Addressing Complex Queries
In Part 1, our Single-Step Agentic RAG Router excelled at directing queries to the single most relevant tool. However, this architecture has limitations when faced with questions that require information from diverse sources.
Consider the following scenario:
"What are the recommended treatment guidelines for type 2 diabetes patients over 65 with a history of heart disease, considering recent studies on new medications?" This query requires:
Patient-specific data: Age, medical history (diabetes, heart disease) - likely stored in a database (SQL DB).
Established medical guidelines: Treatment protocols for type 2 diabetes - found in a private knowledge base (Vector Store of Medical Guidelines).
Recent medical research: Information on new medications for diabetes and heart disease - best accessed via web search (Bing Search).
These queries demand an agent capable of orchestrating multiple information retrieval steps and synthesizing data from different tools. A Single-Step Router, limited to choosing just one tool, falls short in these scenarios.
Introducing Multi-Tool Agentic RAG: Orchestrated Information Retrieval
To overcome these limitations, we introduce the Multi-Tool Agentic RAG architecture. Instead of routing a query to a single tool, this architecture empowers the AI agent to:
Analyze the User Query: Understand the information needs and identify the different types of knowledge required to answer the query comprehensively.
Select and Invoke Multiple Tools: Intelligently choose and execute multiple tools appropriate for each aspect of the query. This might involve:
Querying a SQL database to retrieve structured patient data.
Searching a Vector Store for relevant medical guidelines.
Performing a web search for the latest research or general information.
Aggregate and Synthesize Results: Collect the information retrieved from each tool and synthesize it into a unified, coherent, and comprehensive response for the user.
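To make that flow concrete before we dive into the healthcare build, here is a minimal Python sketch of the analyze/invoke/synthesize cycle. The run_llm and tool_registry names are illustrative placeholders rather than a real API; Step 6 below shows the actual Azure OpenAI function-calling implementation.
# Minimal sketch of the multi-tool loop; run_llm and tool_registry are
# placeholders for illustration only (see Step 6 for the real code).
def multi_tool_rag(user_query, run_llm, tool_registry, max_steps=5):
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        reply = run_llm(messages)  # the LLM analyzes the query and may request tool calls
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]  # no tools requested: this is the synthesized final answer
        for call in reply["tool_calls"]:  # otherwise, invoke every tool the LLM selected this step
            output = tool_registry[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"], "content": str(output)})
    return "Max steps reached without a final answer."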
Is "Multi-Step" the Right Term?
While "Multi-Step" can describe this process, "Multi-Tool Agentic RAG" or "Orchestrated Agentic RAG" are often more precise and widely used terms. They emphasize the core capability: orchestrating multiple tools for a single user query. We'll primarily use "Multi-Tool Agentic RAG" in this blog post for clarity.
Architecture of a Multi-Tool Agentic RAG System
Let's visualize the architecture of a Multi-Tool Agentic RAG system, tailored for our healthcare use case:
A sequence diagram depicting the agent interactions:
A user flow diagram for simplicity:
Key Architectural Differences from Single-Step:
Advanced Retrieval Agent (Planner): The Retrieval Agent is now more sophisticated. It acts as a planner, analyzing the query to determine which tools to use and how to orchestrate them.
Multiple Routers & Tools: Instead of a single Router, we have individual Routers for each Tool (SQL DB Router, Vector Search Router, Web Search Router). This modular design allows the agent to manage and invoke each tool independently.
Parallel or Sequential Tool Execution: The agent can be designed to execute tools in parallel (e.g., start all searches simultaneously) or sequentially (e.g., use web search only after consulting the database and vector store). A minimal parallel-execution sketch follows this list.
LLM Synthesis is Critical: The LLM's role in synthesis becomes even more important. It needs to intelligently combine and reconcile information from diverse sources to generate a cohesive and accurate answer.
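As a hedged illustration of the parallel option, the three tool functions we define later in Step 3 could be fanned out with a small thread pool. This is a sketch of one possible design, not part of the notebook's agent loop, and it assumes the Step 3 functions are already defined in scope.
# Sketch: running independent tool calls in parallel with a thread pool
from concurrent.futures import ThreadPoolExecutor

def gather_context_in_parallel(patient_sql: str, guideline_query: str, web_query: str) -> dict:
    """Run the three Step 3 tools concurrently and collect their outputs."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        sql_future = pool.submit(lookup_patient_data, patient_sql)
        guidelines_future = pool.submit(search_acc_guidelines, guideline_query)
        web_future = pool.submit(search_bing_grounding, web_query)
        return {
            "patient_data": sql_future.result(),
            "guidelines": guidelines_future.result(),
            "web_results": web_future.result(),
        }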
Hands-on with Multi-Tool Agentic RAG for Healthcare using Azure OpenAI
Let's build a Multi-Tool Agentic RAG system for our healthcare scenario. Our AI agent will assist doctors by answering complex questions about patient care, leveraging:
SQL Database (Patient History): This time, we'll use a real SQL database to store patient demographics, medical history, and lab results. You can use Azure SQL Database or any SQL database you prefer. To help you get started and simulate a real-world patient database, I've created a Jupyter Notebook (fake_patient_data.ipynb) that you can use to generate mock patient data and populate your SQL database. You can find it here: azure-ai-agents-playground/samples/05-AGENTIC-RAG-QUERY-PLANNING/fake_patient_data.ipynb at main · farzad528/azure-ai-agents-playground.
Azure AI Search (Medical Guidelines): Representing a private knowledge base of medical guidelines and protocols indexed in Azure AI Search.
Azure AI Agent Service with Bing Grounding Tool (Up-to-date Medical Facts): Accessing the web for recent medical research, drug information, or emerging health trends.
We will again use Azure OpenAI function calling to orchestrate these tools.
👩‍💻 Follow along with the full code here: azure-ai-agents-playground/samples/05-AGENTIC-RAG-QUERY-PLANNING/multi-step-agentic-rag.ipynb at main · farzad528/azure-ai-agents-playground
Bing Search API vs. Bing Grounding Tool: A Key Improvement
In previous posts, I used the Bing Search API directly for web searches. In this updated version, we've switched to the Bing Grounding Tool via Azure AI Agent Service. This represents a significant improvement for several reasons:
Better Context and Grounding: The Azure AI Agent Service provides a more sophisticated interaction with Bing, ensuring better context-awareness and grounding of search results.
Compliance and Terms of Use: Using the Bing Grounding Tool through Azure AI Agent Service ensures compliance with Bing's Terms of Use, making it a more sustainable and officially supported approach.
Integration with Azure Ecosystem: This approach leverages the full Azure AI stack, providing better integration with other Azure services and a more unified development experience.
Enhanced Search Quality: The Bing Grounding Tool provides more refined and relevant search results, tailored specifically for AI agent scenarios.
The switch to Bing Grounding Tool is part of Microsoft's broader strategy to provide more specialized, AI-friendly interfaces to their search capabilities, offering developers a more robust solution than direct API calls.
Prerequisites
Ensure you have the same prerequisites as Part 1 (Azure OpenAI, Azure AI Search), but with these additional requirements:
Azure SQL Database (or your preferred SQL DB): Set up your SQL database and use the fake_patient_data.ipynb notebook to populate it with sample patient data. Note your SQL Database connection string and credentials.
Azure AI Search Index with Medical Guidelines: You'll need an Azure AI Search index populated with medical guidelines.
Azure AI Agent Service: Access to Azure AI Agent Service for the Bing Grounding Tool instead of the Bing Search API we used in Part 1.
Step 1: Setup and Imports
First, we set up our Python environment by importing necessary libraries and configuring access to Azure services, Azure AI Agent Service with Bing Grounding Tool, and our SQL Database.
import os
import json
import pandas as pd
import pyodbc
import sqlalchemy
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import BingGroundingTool
from azure.core.credentials import AzureKeyCredential
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery
from dotenv import load_dotenv
from openai import AzureOpenAI
from rich.console import Console
from rich.panel import Panel
load_dotenv()
console = Console()
Make sure the required packages are installed before running the notebook:
pip install -r requirements.txt
Step 2: Configure Azure Services
Next, we set up all the necessary configurations for the Azure services we'll be using.
# Azure OpenAI configuration
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY", "your-azure-openai-api-key")
AZURE_OPENAI_API_VERSION = os.getenv("AZURE_OPENAI_API_VERSION", "2024-10-21")
AZURE_OPENAI_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT", "https://your-azure-openai-endpoint.openai.azure.com/")
AZURE_OPENAI_CHAT_COMPLETION_DEPLOYED_MODEL_NAME = os.getenv("AZURE_OPENAI_CHAT_COMPLETION_DEPLOYED_MODEL_NAME", "gpt-4o")
# Azure AI Search configuration
AZURE_SEARCH_ENDPOINT = os.getenv("AZURE_SEARCH_SERVICE_ENDPOINT", "https://your-search-service.search.windows.net")
AZURE_SEARCH_KEY = os.getenv("AZURE_SEARCH_ADMIN_KEY", "your-azure-search-key")
SEARCH_INDEX_NAME = "acc-guidelines-index"
# Azure AI Project configuration for Bing Grounding
AZURE_CONNECTION_STRING = os.getenv("AZURE_CONNECTION_STRING", "your-azure-connection-string")
BING_CONNECTION_NAME = os.getenv("BING_CONNECTION_NAME", "fsunavalabinggrounding")
# Azure SQL connection details (for patient data)
server = os.getenv("AZURE_SQL_SERVER_NAME")
database = os.getenv("AZURE_SQL_DATABASE_NAME")
username = os.getenv("AZURE_SQL_USER_NAME")
password = os.getenv("AZURE_SQL_PASSWORD")
driver = '{ODBC Driver 17 for SQL Server}'
# Create an Azure SQL connection string
AZURE_SQL_CONNECTION_STRING = f"DRIVER={driver};SERVER={server};DATABASE={database};UID={username};PWD={password}"
# Initialize the Azure OpenAI client
openai_client = AzureOpenAI(
api_key=AZURE_OPENAI_API_KEY,
api_version=AZURE_OPENAI_API_VERSION,
azure_endpoint=AZURE_OPENAI_ENDPOINT,
)
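For reference, all of the values above are read from environment variables, so a .env file in the project root might look like the following. The values are placeholders only; the variable names match the os.getenv calls above.
AZURE_OPENAI_API_KEY=your-azure-openai-api-key
AZURE_OPENAI_API_VERSION=2024-10-21
AZURE_OPENAI_ENDPOINT=https://your-azure-openai-endpoint.openai.azure.com/
AZURE_OPENAI_CHAT_COMPLETION_DEPLOYED_MODEL_NAME=gpt-4o
AZURE_SEARCH_SERVICE_ENDPOINT=https://your-search-service.search.windows.net
AZURE_SEARCH_ADMIN_KEY=your-azure-search-key
AZURE_CONNECTION_STRING=your-azure-ai-project-connection-string
BING_CONNECTION_NAME=your-bing-connection-name
AZURE_SQL_SERVER_NAME=your-sql-server.database.windows.net
AZURE_SQL_DATABASE_NAME=your-database-name
AZURE_SQL_USER_NAME=your-sql-username
AZURE_SQL_PASSWORD=your-sql-password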
Step 3: Define Tool Functions (SQL DB, Vector Search, Web Search)
Now we'll implement the three tool functions that our agent will use to access different data sources.
3.1 Unstructured Data Tool: ACC Guidelines Search via Azure AI Search
def search_acc_guidelines(query: str) -> str:
"""
Searches the Azure AI Search index 'acc-guidelines-index'
for relevant American College of Cardiology (ACC) guidelines.
"""
credential = AzureKeyCredential(AZURE_SEARCH_KEY)
client = SearchClient(
endpoint=AZURE_SEARCH_ENDPOINT,
index_name=SEARCH_INDEX_NAME,
credential=credential,
)
results = client.search(
search_text=query,
vector_queries=[
VectorizableTextQuery(
text=query,
k_nearest_neighbors=10, # Adjust as needed
fields="embedding" # Adjust based on your index schema
)
],
query_type="semantic",
semantic_configuration_name="default",
search_fields=["chunk"],
top=10,
include_total_count=True
)
retrieved_texts = []
for result in results:
content_chunk = result.get("chunk", "")
retrieved_texts.append(content_chunk)
context_str = "\n".join(retrieved_texts) if retrieved_texts else "No relevant guidelines found."
console.print(
Panel(
f"Tool Invoked: ACC Guidelines Search\nQuery: {query}",
style="bold yellow"
)
)
return context_str
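As a quick standalone sanity check (outside the agent loop), you can call the tool directly; the query string below is just an example.
# Manual test of the guidelines tool with an example query
preview = search_acc_guidelines("first-line therapy for hypertension in elderly patients")
print(preview[:500])  # preview the first 500 characters of the retrieved guideline chunks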
3.2 Web Data Tool: Bing Web Grounding via Azure AI Agent Service
This is a significant improvement over the previous version's direct Bing Search API calls:
def search_bing_grounding(query: str) -> str:
"""
Searches the public web using the Bing Web Grounding Tool via Azure AI Agent Service.
Returns information about recent updates from the web.
"""
# Create an Azure AI Client
project_client = AIProjectClient.from_connection_string(
credential=DefaultAzureCredential(),
conn_str=AZURE_CONNECTION_STRING,
)
try:
with project_client:
# Get the Bing connection
bing_connection = project_client.connections.get(
connection_name=BING_CONNECTION_NAME
)
conn_id = bing_connection.id
# Initialize agent bing tool
bing = BingGroundingTool(connection_id=conn_id)
# Create agent with the bing tool
agent = project_client.agents.create_agent(
model="gpt-4o", # gpt-4o-mini not supported at this time so we'll use GPT-4o
name="bing-search-agent",
instructions=f"Search the web for information about: {query}. Provide a concise but comprehensive summary.",
tools=bing.definitions,
headers={"x-ms-enable-preview": "true"}
)
# Create thread for communication
thread = project_client.agents.create_thread()
# Create message to thread
project_client.agents.create_message(
thread_id=thread.id,
role="user",
content=query,
)
# Create and process agent run
run = project_client.agents.create_and_process_run(
thread_id=thread.id,
assistant_id=agent.id
)
if run.status == "failed":
result_text = f"Bing search failed: {run.last_error}"
else:
# Fetch messages to get the response
messages = project_client.agents.list_messages(thread_id=thread.id)
# Get the last assistant message
assistant_messages = [m for m in messages.get('data', []) if m.get('role') == 'assistant']
if assistant_messages:
# Extract the text content from the last assistant message
content_list = assistant_messages[-1].get('content', [])
result_text = ""
for content_item in content_list:
if isinstance(content_item, dict) and 'text' in content_item:
result_text += content_item.get('text', "")
if not result_text:
result_text = "No results found."
else:
result_text = "No results found."
# Clean up resources
project_client.agents.delete_agent(agent.id)
except Exception as e:
result_text = f"Bing search failed with error: {str(e)}"
console.print(
Panel(
f"Tool Invoked: Bing Grounding Search\nQuery: {query}",
style="bold magenta"
)
)
return result_text
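You can exercise this tool on its own in the same way; the query below is only an example. Note that each call creates a temporary agent and thread in the Azure AI Agent Service and deletes the agent when done, which keeps the sample simple at the cost of some extra latency per query.
# Manual test of the web grounding tool with an example query
web_summary = search_bing_grounding("FDA anticoagulant therapy approvals 2025")
console.print(web_summary)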
3.3 Structured Data Tool: Patient Data Query via Azure SQL
def lookup_patient_data(query: str) -> str:
"""
Queries the 'PatientMedicalData' table in Azure SQL and returns the results as a string.
'query' should be a valid SQL statement.
This version uses SQLAlchemy to create an engine, which is fully supported by pandas.read_sql.
"""
try:
# Construct the connection URI for SQLAlchemy
connection_uri = (
f"mssql+pyodbc://{username}:{password}@{server}/{database}"
"?driver=ODBC+Driver+17+for+SQL+Server"
)
engine = sqlalchemy.create_engine(connection_uri)
# Use the engine in pandas.read_sql, which avoids the warning
df = pd.read_sql(query, engine)
if df.empty:
return "No rows found."
return df.to_string(index=False)
except Exception as e:
return f"Database error: {str(e)}"
Step 4: Define Tools for OpenAI Function Calling
Now we'll configure these tools to work with OpenAI's function calling feature, which allows the LLM to dynamically select and use these tools during conversation.
tools = [
{
"type": "function",
"function": {
"name": "search_acc_guidelines",
"description": "Query the ACC guidelines for official cardiology recommendations. Use keywords related to cardiology conditions, treatments, or guidelines.",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Keywords or specific question related to cardiology guidelines."
}
},
"required": ["query"]
}
}
},
{
"type": "function",
"function": {
"name": "search_bing_grounding",
"description": "Perform a public web search for real-time or external information using Bing Grounding. For example, 'FDA new hyperlipidemia drugs', 'recent hypertension medication approvals'.",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "General query to retrieve public data."
}
},
"required": ["query"]
}
}
},
{
"type": "function",
"function": {
"name": "lookup_patient_data",
"description": (
"Query Azure SQL PatientMedicalData table. Schema: "
"PatientID: INT (PK, Identity), FirstName: VARCHAR(100), LastName: VARCHAR(100), "
"DateOfBirth: DATE, Gender: VARCHAR(20), ContactNumber: VARCHAR(100), EmailAddress: VARCHAR(100), "
"Address: VARCHAR(255), City: VARCHAR(100), PostalCode: VARCHAR(20), Country: VARCHAR(100), "
"MedicalCondition: VARCHAR(255), Medications: VARCHAR(255), Allergies: VARCHAR(255), BloodType: VARCHAR(10), "
"LastVisitDate: DATE, SmokingStatus: VARCHAR(50), AlcoholConsumption: VARCHAR(50), ExerciseFrequency: VARCHAR(50), "
"Occupation: VARCHAR(100), Height_cm: DECIMAL(5,2), Weight_kg: DECIMAL(5,2), BloodPressure: VARCHAR(20), "
"HeartRate_bpm: INT, Temperature_C: DECIMAL(3,1), Notes: VARCHAR(MAX). Use SQL to retrieve patient data."
),
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Valid SQL query for PatientMedicalData table to get patient information."
}
},
"required": ["query"]
}
}
}
]
# Map the tool names to the actual function implementations
tool_implementations = {
"search_acc_guidelines": search_acc_guidelines,
"search_bing_grounding": search_bing_grounding,
"lookup_patient_data": lookup_patient_data,
}
Step 5: Define System Prompt for the Agent
The system prompt sets the context and instructions for the Agent, defining its role and capabilities.
SYSTEM_PROMPT = (
"You are a cardiology-focused AI assistant with access to three tools:\n"
"1) 'lookup_patient_data' for querying patient records from Azure SQL.\n"
"2) 'search_acc_guidelines' for official ACC guidelines.\n"
"3) 'search_bing_grounding' for real-time public information.\n\n"
"You can call these tools in any order, multiple times if needed, to gather all the context.\n"
"Stop calling tools only when you have enough information to provide a final, cohesive answer.\n"
"Then output your final answer to the user."
)
Step 6: Implement the Multi-Step Agent
Now we'll create the agent that orchestrates the entire process. This agent will:
Receive a user query
Decide which tools to call and in what order
Parse the results from each tool
Synthesize a comprehensive answer
The agent can make multiple calls to different tools before providing a final answer, allowing for complex, multi-step reasoning.
def run_multi_step_agent(user_query: str, max_steps: int = 10):
"""
A multi-step agent that orchestrates tool selection and execution.
Args:
user_query (str): The user's question or request
max_steps (int): Maximum number of reasoning steps before timeout
Returns:
None: Prints the final answer or an error message
"""
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": user_query}
]
for step in range(max_steps):
console.print(Panel(f"**Step {step+1}**: Starting step {step+1}", title="Step Start", style="bold cyan"))
response = openai_client.chat.completions.create(
model=AZURE_OPENAI_CHAT_COMPLETION_DEPLOYED_MODEL_NAME,
messages=messages,
tools=tools,
tool_choice="auto",
max_tokens=8000,
)
response_message = response.choices[0].message
# Add the assistant's message to the conversation
messages.append(response_message)
if response_message.tool_calls:
for tool_call in response_message.tool_calls:
function_name = tool_call.function.name
arguments_str = tool_call.function.arguments
if arguments_str.strip() == "":
function_args = {}
else:
try:
function_args = json.loads(arguments_str)
except json.JSONDecodeError as e:
function_args = {}
console.print(
Panel(
"Warning: Could not decode tool call arguments; defaulting to empty dict.",
title="JSONDecodeError",
style="bold red"
)
)
# Ensure the 'query' key is present
if "query" not in function_args or not function_args["query"]:
function_args["query"] = user_query
# Display which tool is being called
console.print(
Panel(
f"**Step {step+1}**: LLM calls tool [bold]{function_name}[/bold]\n\n"
f"**Arguments**:\n{json.dumps(function_args, indent=2)}",
title="Tool Call",
style="bold blue"
)
)
tool_fn = tool_implementations.get(function_name)
if tool_fn is None:
tool_output = f"[Error] No implementation for tool '{function_name}'."
else:
tool_output = tool_fn(**function_args)
# Add the tool response to the conversation
messages.append({
"tool_call_id": tool_call.id,
"role": "tool",
"name": function_name,
"content": str(tool_output), # Simple string for tool response
})
else:
# Display the final answer
final_answer = response_message.content
console.print(
Panel(
final_answer,
title="Final Answer",
style="bold green",
border_style="yellow"
)
)
return
# If we reach the maximum steps without a final answer, show a warning
console.print(
Panel(
"Max steps reached without a final answer. Stopping.",
title="Warning",
style="bold red"
)
)
return
Step 7: Testing the Multi-Tool Agentic RAG
Let's put our Multi-Tool Agentic RAG to the test with some real-world medical queries! These examples will showcase the agent's ability to orchestrate multiple tools (the SQL database, Azure AI Search for medical guidelines, and the Azure AI Agent Service with Bing Grounding Tool for up-to-date medical facts) to answer complex questions a doctor might ask.
Example 1: ACC Guidelines for Hypertension Therapy
user_question_1 = "What does the ACC recommend as first-line therapy for hypertension in elderly patients?"
run_multi_step_agent(user_question_1)
Output:
Final Answer:

According to the ACC guidelines for the management of hypertension in elderly patients, the first-line therapy options include:

1. **Thiazide Diuretics** - These are often recommended for their effectiveness in lowering blood pressure and providing benefits in reducing heart failure risk.
2. **Calcium Channel Blockers (CCBs)** - These can be utilized as first-line agents, especially in patients with specific comorbid conditions that warrant their use.
3. **Angiotensin-Converting Enzyme (ACE) Inhibitors** or **Angiotensin Receptor Blockers (ARBs)** - These are also considered for initial treatment, particularly in patients with conditions such as heart failure or chronic kidney disease that may benefit from them.

The guidelines emphasize that the choice of therapy should take into account the patient's overall health status, comorbidities, and the potential for side effects, especially in older adults who may be more vulnerable to hypotension and orthostatic changes.

In practice, many patients will require more than one medication to effectively manage their blood pressure, particularly those with stage 2 hypertension (BP >20/10 mm Hg above target). In such cases, initiating therapy with two agents is recommended.
Example 2: Recent Updates on Anticoagulant Therapies
user_question_2 = "Are there any recent updates in 2025 on new anticoagulant therapies from the FDA?"
run_multi_step_agent(user_question_2)
Example 3: Requesting Information from Patient Database
user_question_3 = "How many patients have Hypertension and are prescribed Lisinopril?"
# The agent should generate and run a valid SQL query to answer this.
# Note: the answer will depend on the actual data in your database (it should return 1071!)
run_multi_step_agent(user_question_3)
Output:
Final Answer:

There are 1,071 patients who have Hypertension and are prescribed Lisinopril.
Example 4: A Real-World Medical Doctor Task
run_multi_step_agent("I have a 79-year-old patient named Gloria Paul with hyperlipidemia. She's on Atorvastatin. Can you confirm her medical details from the database, check the ACC guidelines for hyperlipidemia, and see if there are any new medication updates from the FDA as of Feb 2025? Then give me a summary.")
Output:
Final Answer:

### Summary for Patient Gloria Paul

#### Patient Medical Details:
- **Name:** Gloria Paul
- **Date of Birth:** July 15, 1944 (79 years old)
- **Gender:** Other
- **Location:** Lewisberg, MN, India
- **Medical Condition:** Hyperlipidemia
- **Medications:** Atorvastatin
- **Allergies:** None
- **Blood Type:** O+
- **Last Visit Date:** March 26, 2024
- **Lifestyle Factors:**
  - **Smoking Status:** Never smoker
  - **Alcohol Consumption:** Light drinker (1-2 times a week)
  - **Exercise Frequency:** Regular exercise

#### ACC Guidelines for Hyperlipidemia:
1. **Class I Recommendations:**
   - Statin treatment is recommended for all patients with atherosclerosis (including carotid or vertebral atherosclerosis), aiming to reduce LDL cholesterol below 100 mg/dL.
2. **Class IIa Recommendations:**
   - Statins may also be considered for ischemic stroke patients to achieve LDL levels around or below 70 mg/dL.
   - For patients not achieving desired LDL levels with statins, additional medications such as bile acid sequestrants or niacin may be beneficial.
   - Alternatives to statins, like bile acid sequestrants, are reasonable for those intolerant to statins.
3. **Lifestyle Modifications:** Address lifestyle factors, including obesity or metabolic syndrome, alongside any underlying secondary conditions.

#### FDA Updates (as of February 2025):
- **Lerodalcibep**: A new biologics license application has been submitted to the FDA targeting low-density lipoprotein cholesterol (LDL-C) for the treatment of patients with atherosclerotic cardiovascular disease or those with primary hyperlipidemia.
- **Bempedoic Acid Update**: The FDA has updated the indication for bempedoic acid (Nexletol) to treat primary hyperlipidemia among eligible patients with genetic hyperlipidemia or atherosclerotic cardiovascular disease.

### Conclusion:
For Mrs. Gloria Paul, FDA updates indicate the potential introduction of new medications for managing hyperlipidemia that could complement or enhance her current treatment with atorvastatin. Based on the ACC guidelines, continuing atorvastatin is appropriate, with a focus on achieving optimal LDL cholesterol levels. Regular check-ups and monitoring of her lipid profile would be essential for optimizing her treatment plan.
Conclusion: Orchestrating Knowledge for Smarter Agents
The Multi-Tool Agentic RAG architecture represents a significant step forward in building truly intelligent AI agents. By moving beyond single-tool limitations and embracing the orchestration of diverse knowledge sources, we can create agents that are far more capable of handling complex, real-world queries and providing comprehensive, insightful responses.
This blog post has demonstrated a foundational Multi-Tool Agentic RAG system leveraging multiple Azure services:
Azure SQL Database for structured patient data
Azure AI Search for unstructured knowledge retrieval
Azure AI Agent Service with Bing Grounding Tool for real-time web information
Azure OpenAI Service with function calling for orchestration
Together, these services create a powerful ecosystem for building sophisticated AI applications that can reason across multiple data sources.
The transition from Bing Search API to the Bing Grounding Tool via Azure AI Agent Service represents the kind of continuous improvement we should be making in our AI architectures: constantly looking for tools that provide better context understanding, improved compliance, and more reliable results.
In the next installment of the "Mastering Agentic RAG" series (Part 3), we'll explore Query Planning Agentic RAG, which further enhances the agent's ability to break down complex queries into sub-tasks and strategically plan the execution of tools for even more sophisticated information retrieval. We'll also show advanced query transformation strategies for more effective knowledge retrieval.
Stay tuned, experiment with the code here, and let us know your thoughts and questions in the comments below!