Start with hybrid search and add a reranker for best results.
</Tip>

## Agentic vs Traditional RAG

**Traditional RAG** always searches with the exact user query and injects results into the prompt.

**Agentic RAG** lets the agent decide when to search, reformulate queries, and run follow-up searches if needed.

<Tabs>
<Tab title="Traditional RAG">
```python
from agno.agent import Agent
from agno.knowledge.knowledge import Knowledge
from agno.knowledge.embedder.openai import OpenAIEmbedder
from agno.models.openai import OpenAIChat
from agno.vectordb.lancedb import LanceDb, SearchType

knowledge = Knowledge(
    vector_db=LanceDb(
        table_name="recipes",
        uri="tmp/lancedb",
        search_type=SearchType.vector,
        embedder=OpenAIEmbedder(id="text-embedding-3-small"),
    ),
)

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    knowledge=knowledge,
    add_knowledge_to_context=True,  # Always inject knowledge into the prompt
    search_knowledge=False,         # Disable agentic search
)
```
</Tab>
<Tab title="Agentic RAG">
```python
from agno.agent import Agent
from agno.knowledge.knowledge import Knowledge
from agno.knowledge.embedder.openai import OpenAIEmbedder
from agno.models.openai import OpenAIChat
from agno.vectordb.lancedb import LanceDb, SearchType

knowledge = Knowledge(
    vector_db=LanceDb(
        table_name="recipes",
        uri="tmp/lancedb",
        search_type=SearchType.vector,
        embedder=OpenAIEmbedder(id="text-embedding-3-small"),
    ),
)

# The agent decides when to search
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    knowledge=knowledge,
    search_knowledge=True,  # Agent calls the search_knowledge_base tool when needed
)

agent.print_response("What's our return policy?")
```
</Tab>
</Tabs>

With Agentic RAG, the agent can:

- Skip searching when it already knows the answer
- Reformulate queries for better results
- Run multiple searches to gather complete information
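The control flow behind these behaviors can be sketched in a few lines of plain Python. This is not Agno's internal implementation, just an illustration of the loop: the model either answers directly or requests one or more (possibly reformulated) searches, and retrieved results accumulate as context. The `model_step` and `search` callables are hypothetical stand-ins.

```python
# Sketch of an agentic retrieval loop (illustrative, not Agno internals).
# model_step(question, context) returns either:
#   ("search", query)  - request a retrieval with a (possibly reformulated) query
#   ("answer", text)   - answer using the accumulated context
def agentic_answer(question, model_step, search, max_searches=5):
    context = []
    for _ in range(max_searches):  # cap follow-up searches
        action, payload = model_step(question, context)
        if action == "answer":
            return payload
        context.extend(search(payload))  # gather more results, then re-ask the model
    return "Could not answer within the search budget."
```

A model that already knows the answer returns `("answer", ...)` on the first step, skipping retrieval entirely; one that needs grounding issues as many searches as the budget allows.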

### Embedding Model

Your embedder converts text into vectors that capture meaning. The right choice depends on your content:

| Type | Use Case |
|------|----------|
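As a toy illustration of "vectors that capture meaning": retrieval compares the query's embedding to document embeddings, typically by cosine similarity, so semantically related texts score higher. The three-dimensional vectors below are hand-made stand-ins, not real embedder output.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made vectors standing in for real embeddings
recipe  = [0.9, 0.1, 0.0]   # "chocolate cake recipe"
cooking = [0.8, 0.2, 0.1]   # "how to bake a dessert"
invoice = [0.0, 0.1, 0.9]   # "Q3 invoice totals"

# The semantically related pair scores much higher
assert cosine(recipe, cooking) > cosine(recipe, invoice)
```

A real embedder produces vectors with hundreds or thousands of dimensions, but the comparison works the same way.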