
Understanding Agents: Simple, Sequential, and Supervisor Modes

This comprehensive guide explains how to build AI agents in PromptOwl, including the different agent types, how knowledge retrieval (RAG) works, version management, and citation systems.


Table of Contents

  1. Agent Overview
  2. Simple Agents
  3. Sequential Agents
  4. Supervisor Agents (Multi-Agent)
  5. Version Management
  6. RAG: Retrieval Augmented Generation
  7. Citations
  8. Best Practices

Agent Overview

In PromptOwl, an “agent” is an AI-powered prompt that can answer questions, perform tasks, and retrieve information from your documents. Agents come in three types:

Type       | Best For                          | Complexity
Simple     | Single-purpose tasks, Q&A         | Low
Sequential | Multi-step workflows              | Medium
Supervisor | Complex multi-agent orchestration | High

Choosing the Right Agent Type

  • Need to answer questions from documents? → Simple Agent with RAG
  • Need to process in stages (research → analyze → format)? → Sequential Agent
  • Need multiple specialists working together? → Supervisor Agent

Simple Agents

Simple agents are the foundation of PromptOwl. They consist of a single system context that defines the AI’s behavior.

When to Use Simple Agents

  • FAQ bots and knowledge bases
  • Single-purpose assistants (customer support, onboarding)
  • Document Q&A with citations
  • Basic content generation

Creating a Simple Agent

  1. Click + New on the Dashboard
  2. Keep the default Simple type selected
  3. Enter your agent details:
    • Name: Descriptive name (e.g., “Product Support Bot”)
    • Description: What this agent does
  4. Write your System Context

System Context Best Practices

The system context defines your agent’s personality, capabilities, and constraints:

You are a helpful customer support agent for [Company Name].

Your role:
- Answer questions about our products and services
- Help users troubleshoot common issues
- Escalate complex problems to human support

Guidelines:
- Be friendly and professional
- Only answer based on the provided knowledge base
- If you don't know something, say so honestly
- Never make up information

Adding Knowledge with RAG

To give your agent access to documents:

  1. Go to the Variables section
  2. Click Add Variable
  3. Name it (e.g., knowledge_base)
  4. Click Connect Data
  5. Select a folder from your Data Room
  6. Reference it in your system context: {knowledge_base}

Your system context becomes:

You are a support agent. Use this knowledge base to answer questions:
{knowledge_base}
Only answer based on the information provided above.

Simple Agent Architecture

User Query
   ↓
RAG Retrieval (if configured): search documents, return relevant chunks
   ↓
System Context: your instructions + retrieved content
   ↓
AI Model: generate response
   ↓
Response + Citations
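The flow above can be sketched in a few lines of Python. Note that `retrieve_chunks` and `build_prompt` are hypothetical stand-ins for PromptOwl's internal RAG search and variable injection, which are not exposed as a Python API:

```python
# Hypothetical sketch of the simple-agent flow, for illustration only.
def retrieve_chunks(query, documents):
    # Toy "search": keep documents mentioning a query word.
    return [d for d in documents if query.lower() in d.lower()]

def build_prompt(system_context, chunks, query):
    # Retrieved content replaces the {knowledge_base} variable before
    # the model ever sees the user's question.
    filled = system_context.replace("{knowledge_base}", "\n".join(chunks))
    return filled + "\nUser: " + query

kb = ["Returns are accepted within 30 days.", "Shipping takes 3-5 days."]
prompt = build_prompt(
    "You are a support agent. Use this knowledge base:\n{knowledge_base}",
    retrieve_chunks("returns", kb),
    "What is the return policy?",
)
```

The key point the sketch illustrates: by the time the model runs, the `{knowledge_base}` placeholder is gone and only the retrieved text remains.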

Sequential Agents

Sequential agents execute multiple steps in order, where each step can have its own AI model, prompt, and data connections.

When to Use Sequential Agents

  • Multi-stage content creation (research → draft → edit)
  • Data processing pipelines
  • Analysis workflows (extract → analyze → summarize)
  • Quality assurance chains (generate → review → refine)

Creating a Sequential Agent

  1. Click + New on the Dashboard
  2. Change the type to Sequential
  3. You’ll see the block-based interface

Understanding Blocks

Each block is a separate AI step. Blocks execute in order from top to bottom.

Block Configuration

Setting       | Description
Name          | Descriptive step name (e.g., "Research", "Analyze")
Prompt Source | "Inline" (write here) or "Use Existing" (reference another prompt)
AI Model      | Which model handles this step
Tools         | Tools available to this block
Dataset       | Documents for this block's RAG
Variables     | Values passed to this block

Example: Content Creation Pipeline

Block 1: Research

  • Model: GPT-4 (good at analysis)
  • Prompt: “Research the following topic thoroughly: {topic}”
  • Dataset: Research documents folder

Block 2: Draft

  • Model: Claude 3 (good at writing)
  • Prompt: “Write a draft article based on: {{research}}”
  • Variables: research mapped to Block 1 output

Block 3: Polish

  • Model: GPT-4
  • Prompt: “Edit for clarity and grammar: {{draft}}”
  • Variables: draft mapped to Block 2 output

Passing Data Between Blocks

Use double curly braces {{block-key}} to reference previous block outputs:

Block 1 (key: research)
Output: "Key findings about market trends..."

Block 2 prompt: "Analyze the following research: {{research}}"
Becomes: "Analyze the following research: Key findings about market trends..."
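A minimal sketch of that substitution (the regex-based `render` function is illustrative, not PromptOwl's actual implementation):

```python
import re

def render(prompt, block_outputs):
    # Swap each {{block-key}} for that block's saved output;
    # unknown keys are left untouched.
    pattern = re.compile(r"\{\{([\w-]+)\}\}")
    return pattern.sub(lambda m: block_outputs.get(m.group(1), m.group(0)), prompt)

outputs = {"research": "Key findings about market trends..."}
rendered = render("Analyze the following research: {{research}}", outputs)
# rendered == "Analyze the following research: Key findings about market trends..."
```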

Block Keys

Each block has a unique key used for referencing:

  • Auto-generated from block name (e.g., “Research” → research)
  • Used in {{block-key}} syntax
  • Can be customized in block settings

Using Existing Prompts in Blocks

Instead of writing inline, reference existing prompts:

  1. Set Prompt Source to “Use Existing”
  2. Click Select Prompt
  3. Choose from your prompt library
  4. Select the version to use
  5. Map any required variables

This enables:

  • Reusing tested prompts
  • Maintaining single source of truth
  • Version control within blocks

Sequential Agent Architecture

User Query
   ↓
Block 1: Research (Model: GPT-4 | Dataset: Research Docs)
   Output saved as {{research}}
   ↓
Block 2: Draft (Model: Claude 3 | Input: {{research}})
   Output saved as {{draft}}
   ↓
Block 3: Polish (Model: GPT-4 | Input: {{draft}})
   Final output returned to user
   ↓
Response + Citations

Human Messages Between Blocks

Add messages shown to users between steps:

  1. Expand block settings
  2. Find Human Message field
  3. Enter the message (e.g., “Analyzing your request…”)

This provides feedback during long workflows.


Supervisor Agents (Multi-Agent)

Supervisor agents use a coordinator that orchestrates multiple specialized agents. The supervisor decides which agent(s) to invoke based on the task.

When to Use Supervisor Agents

  • Tasks requiring different expertise (legal + financial + technical)
  • Dynamic routing based on query type
  • Complex decision-making workflows
  • Parallel agent execution

Creating a Supervisor Agent

  1. Click + New on the Dashboard
  2. Change the type to Supervisor
  3. Configure the supervisor block and agent blocks

![Screenshot: Supervisor Type Selection]

The Supervisor Block

The supervisor block is marked with a special indicator. It:

  • Receives all user queries first
  • Decides which agent(s) to call
  • Coordinates responses from multiple agents
  • Synthesizes final answers

Supervisor Prompt Example

You are a supervisor coordinating a team of specialized agents.

Available agents:
- Legal Agent: Handles legal questions, contracts, compliance
- Technical Agent: Handles technical questions, troubleshooting
- Sales Agent: Handles pricing, features, comparisons

Your job:
1. Analyze the user's question
2. Route to the appropriate agent(s)
3. Combine responses into a coherent answer

If a question spans multiple domains, call multiple agents.

Agent Blocks

Each non-supervisor block is a specialized agent:

Setting | Description
Name    | Agent specialty (e.g., "Legal Agent")
Prompt  | Agent-specific instructions
Model   | Can differ from other agents
Dataset | Agent-specific knowledge base
Tools   | Agent-specific tools

Example: Customer Support Supervisor

Supervisor Block:

Route customer queries to the appropriate specialist:
- Billing Agent: Payment issues, invoices, refunds
- Technical Agent: Product issues, bugs, how-to
- Account Agent: Login, settings, profile changes

Billing Agent Block:

  • Dataset: Billing policies folder
  • Prompt: “You are a billing specialist. Help with payment-related questions…”

Technical Agent Block:

  • Dataset: Product documentation folder
  • Prompt: “You are a technical support specialist. Troubleshoot issues…”

Account Agent Block:

  • Dataset: Account FAQ folder
  • Prompt: “You are an account specialist. Help with account management…”

Supervisor Agent Architecture

User Query: "How do I reset my password?"
   ↓
Supervisor Block
   Analyzes the query and decides routing → Account Agent
   ↓
Billing Agent (skipped)   Technical Agent (skipped)   Account Agent (invoked)
   ↓
Supervisor Block
   Receives the agent's response and formats the output
   ↓
Response + Citations

Multi-Agent Invocation

The supervisor can call multiple agents for complex queries:

User: "I was charged twice and the product doesn't work"

Supervisor routes to:
1. Billing Agent → Handles the duplicate charge
2. Technical Agent → Troubleshoots the product issue

The supervisor combines both responses into a unified answer.
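The routing decision itself is made by the supervisor's AI model from your prompt, not by keyword matching; the keyword table below is only a stand-in so the multi-agent behavior can be sketched concretely:

```python
# Illustrative keyword router; agent names come from the example above.
AGENTS = {
    "Billing Agent": ["charge", "invoice", "refund", "payment"],
    "Technical Agent": ["bug", "error", "doesn't work", "crash"],
    "Account Agent": ["login", "password", "profile"],
}

def route(query):
    q = query.lower()
    # A query spanning several domains invokes several agents.
    return [name for name, kws in AGENTS.items() if any(k in q for k in kws)]

route("I was charged twice and the product doesn't work")
# → ["Billing Agent", "Technical Agent"]
```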

Version Management

Every change to your agent is tracked through the version system.

Understanding Versions

Term            | Definition
Draft           | Work-in-progress, not visible to users
Production      | Active version users interact with
Version History | Complete record of all changes

Version Workflow

Create Agent → Save Draft (v1)
Make Changes → Save Draft (v2)
Test & Verify → Publish (v2 becomes Production)
Make More Changes → Save Draft (v3)
Problem Found → Rollback to v2 (v2 republished)
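Conceptually, this lifecycle is an append-only version list plus a production pointer. A minimal sketch (the class and method names are hypothetical, not PromptOwl's API):

```python
class VersionedAgent:
    def __init__(self):
        self.versions = []      # append-only history; versions are never deleted
        self.production = None  # version number currently live

    def save_draft(self, config):
        # Saving creates a new draft without touching production.
        self.versions.append(config)
        return len(self.versions)  # 1-based version number

    def publish(self, number):
        self.production = number

    def rollback(self, number):
        # Rolling back republishes an old version as a *new* version.
        new = self.save_draft(self.versions[number - 1])
        self.publish(new)
        return new

agent = VersionedAgent()
agent.save_draft("v1 config")          # version 1
agent.publish(agent.save_draft("v2"))  # version 2 goes to production
agent.save_draft("v3 draft")           # draft; production is still v2
```

Note how `rollback` mirrors the documented behavior: it copies the old configuration into a fresh version rather than deleting anything.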

Saving Drafts

Click Save to create a new draft version:

  • Preserves all current settings
  • Does not affect production
  • Allows testing before publishing

![Screenshot: Save Button]

Publishing a Version

Click Publish to make a version live:

  1. Click Publish in the editor
  2. Add change notes describing updates
  3. Confirm the publication

Viewing Version History

  1. Open the Versions panel (right sidebar)
  2. See all versions with:
    • Version number
    • Creation date
    • Creator name
    • Change notes (if added)
    • Production indicator

Comparing Versions

To understand what changed between versions:

  1. Click on a version to preview it
  2. Compare settings, prompts, and configurations
  3. Decide whether to restore or continue

Rolling Back

To revert to a previous version:

  1. Find the version in history
  2. Click Publish on that version
  3. Confirm the rollback

Note: This creates a new version based on the old one. No versions are deleted.

Version Best Practices

  • Add change notes for every publish
  • Test in preview before publishing
  • Keep production stable - only publish tested changes
  • Use drafts freely - they don’t affect users

RAG: Retrieval Augmented Generation

RAG enables your agents to answer questions using your documents. Understanding RAG is key to building effective knowledge-based agents.

How RAG Works

User asks: "What is the return policy?"
   ↓
1. SEARCH: query your document database and find relevant passages
   ↓
2. RETRIEVE: extract matching text chunks and rank them by relevance (similarity score)
   ↓
3. AUGMENT: inject the retrieved text into the prompt to give the AI context
   ↓
4. GENERATE: the AI answers using that context and cites its document sources
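The relevance ranking in step 2 is typically cosine similarity over embedding vectors. A toy sketch with two-dimensional vectors (real systems embed chunks with a model into hundreds of dimensions; the vectors here are made up):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two vectors: dot product over norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings standing in for real model vectors.
query_vec = [1.0, 0.0]
chunks = {
    "Our return policy allows returns within 30 days...": [0.9, 0.1],
    "Shipping takes 3-5 business days...": [0.1, 0.9],
}
ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
# ranked[0] is the return-policy chunk, the best match for the query vector
```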

Two Ways to Connect Documents

PromptOwl offers two methods to connect documents, each with different behaviors:

Method 1: Prompt-Level RAG (System Context)

Connect documents to the entire agent via variables.

How to set up:

  1. Add a variable (e.g., knowledge_base)
  2. Connect it to a folder
  3. Reference in system context: {knowledge_base}

Behavior:

  • Documents retrieved automatically on every message
  • Content injected into system context before AI sees the query
  • Available to all blocks in sequential/supervisor workflows
  • Best for: Core knowledge that applies to all queries

Method 2: Block-Level RAG (Dataset)

Connect documents to specific blocks.

How to set up:

  1. Expand block settings
  2. Find Dataset field
  3. Select a folder or document

Behavior:

  • Documents retrieved only when that block executes
  • AI decides when to search based on the query
  • Each block can have different documents
  • Best for: Specialized knowledge per step/agent

Comparing RAG Methods

Aspect    | Prompt-Level   | Block-Level
Timing    | Every query    | On-demand
Scope     | Entire agent   | Single block
Control   | Automatic      | AI-decided
Use Case  | Core knowledge | Specialized knowledge
Citations | Combined       | Per-block

When to Use Each Method

Use Prompt-Level RAG when:

  • Documents are always relevant
  • Building a simple Q&A bot
  • Need consistent knowledge access

Use Block-Level RAG when:

  • Different steps need different documents
  • Building specialized agents
  • Want AI to decide when to search
  • Optimizing for performance (not searching unnecessarily)

Combining Both Methods

For complex agents, combine both approaches:

Supervisor Agent
├── Prompt-Level: Company policies (always available)
├── Billing Agent Block
│   └── Dataset: Billing documentation
├── Technical Agent Block
│   └── Dataset: Product manuals
└── HR Agent Block
    └── Dataset: Employee handbook

Document Processing for RAG

When you sync documents, they’re processed for AI search:

  1. Chunking: Documents split into ~1000 character pieces
  2. Embedding: Each chunk converted to a vector representation
  3. Indexing: Vectors stored in searchable database
  4. Metadata: Title, author, date preserved for citations
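Step 1 (chunking) can be illustrated with a simple character-based splitter. The ~1000-character size comes from the list above; the overlap value is an assumption for illustration, since PromptOwl's exact parameters aren't documented here:

```python
def chunk_document(text, size=1000, overlap=100):
    # Split text into ~size-character pieces; consecutive chunks
    # overlap so sentences aren't cut off without context.
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

chunks = chunk_document("a" * 2500)
# 3 chunks covering [0:1000], [900:1900], [1800:2500]
```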

Sync Status and RAG

Documents must be synced before RAG works:

Status            | RAG Available? | Action Needed
Synced (Green)    | Yes            | None
Modified (Orange) | Partial        | Re-sync folder
Unsynced (Red)    | No             | Sync folder

Always check sync status when RAG isn’t returning expected results.


Citations

Citations show users where answers come from, building trust and enabling verification.

How Citations Work

When RAG retrieves documents, citation data is captured:

Retrieved chunk: "Our return policy allows returns within 30 days..."

Citation data:
- Title: "Return Policy Guide"
- Author: "Customer Service Team"
- Date: "January 2024"
- Source: "Policies/Returns.pdf"
- Score: 0.89 (89% relevance)

Citation Display Modes

PromptOwl offers two display modes:

Non-Aggregated (Default)

Shows individual citations:

[AI Response]

📄 Sources:
• Return Policy Guide - "Our return policy allows returns within 30 days..."

[View More Citations (3)]

Aggregated

Groups citations by document:

[AI Response]

📚 Sources:
├── Return Policy Guide (2 references)
├── FAQ Document (1 reference)
└── Terms of Service (1 reference)
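Aggregation amounts to counting references per document. A hedged sketch of that grouping over a flat citation list (the `title` field name is illustrative, not PromptOwl's actual schema):

```python
from collections import Counter

def aggregate_citations(citations):
    # Count how many retrieved passages each document contributed.
    return Counter(c["title"] for c in citations)

counts = aggregate_citations([
    {"title": "Return Policy Guide"},
    {"title": "Return Policy Guide"},
    {"title": "FAQ Document"},
    {"title": "Terms of Service"},
])
# counts["Return Policy Guide"] == 2, matching "(2 references)" above
```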

Configuring Citations

In your prompt settings, configure citation display:

Setting               | Description
Aggregate Citations   | Group by document vs. show individually
Show Similarity Score | Display relevance percentage
Show Author           | Display document author
Show Publish Date     | Display document date

Citation Data Fields

Each citation includes:

Field        | Description       | Example
Title        | Document name     | "Return Policy Guide"
Display Name | Friendly name     | "Returns FAQ"
Author       | Creator           | "Support Team"
Publish Date | Date created      | "Jan 15, 2024"
Text         | Retrieved passage | "Returns within 30 days…"
Score        | Relevance (0-1)   | 0.89
URL          | Link to source    | "https://…"

Improving Citation Quality

To get better citations:

  1. Set Display Names on documents
  2. Add Authors during upload
  3. Include Publish Dates for currency
  4. Write clear document titles
  5. Structure documents well (headers, sections)

Citations in Sequential/Supervisor Workflows

Citations accumulate across all blocks:

Block 1 (Research) → Citations A, B
Block 2 (Analyze)  → Citation C
Block 3 (Format)   → No new citations

Final response includes: Citations A, B, C

Duplicates are automatically removed.
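The accumulate-and-deduplicate behavior can be sketched as follows. Deduplicating on the (title, text) pair is an assumption for illustration; PromptOwl's actual duplicate criteria aren't documented here:

```python
def merge_citations(*block_citations):
    # Accumulate citations from each block in order, keeping the
    # first occurrence of each duplicate.
    seen, merged = set(), []
    for citations in block_citations:
        for c in citations:
            key = (c["title"], c["text"])  # assumed dedup key
            if key not in seen:
                seen.add(key)
                merged.append(c)
    return merged

a = {"title": "Doc A", "text": "passage 1"}
b = {"title": "Doc B", "text": "passage 2"}
c = {"title": "Doc C", "text": "passage 3"}
final = merge_citations([a, b], [c], [a])  # Block 3 re-retrieved citation A
# final == [a, b, c]; the duplicate is dropped
```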

Viewing Full Citations

Click on any citation to open the full citation modal:

  • Complete text passage
  • All metadata fields
  • Link to original document (if available)
  • Similarity score (if enabled)

Best Practices

Simple Agent Best Practices

  • Keep system context focused and clear
  • Connect only relevant document folders
  • Test with various query types
  • Enable citations for transparency

Sequential Agent Best Practices

  • Name blocks descriptively (action-oriented)
  • Use appropriate models per step (GPT-4 for analysis, Claude for writing)
  • Map variables explicitly between blocks
  • Keep chain length reasonable (3-5 blocks)
  • Add human messages for long workflows

Supervisor Agent Best Practices

  • Write clear agent descriptions for routing
  • Give agents distinct, non-overlapping roles
  • Test edge cases where routing is ambiguous
  • Consider fallback handling
  • Keep agent count manageable (2-5 agents)

RAG Best Practices

  • Organize documents into logical folders
  • Keep documents focused (don’t combine unrelated content)
  • Sync regularly after updates
  • Test retrieval with expected queries
  • Use block-level RAG for specialized knowledge

Citation Best Practices

  • Always enable for document-based agents
  • Fill in document metadata during upload
  • Use aggregated mode for many citations
  • Review citation quality in Monitor

Version Management Best Practices

  • Save drafts frequently while editing
  • Add meaningful change notes
  • Test thoroughly before publishing
  • Keep production versions stable
  • Use rollback when issues arise

Troubleshooting

Agent not using documents

  1. Check folder sync status (must be green)
  2. Verify variable is connected properly
  3. Confirm variable is referenced in prompt
  4. Test with simple, direct questions

Sequential blocks not passing data

  1. Check {{block-key}} syntax
  2. Verify block key matches exactly
  3. Ensure previous block generates output
  4. Check for typos in variable names

Supervisor not routing correctly

  1. Review supervisor prompt clarity
  2. Ensure agent roles don’t overlap
  3. Add explicit routing instructions
  4. Test with clear-cut queries first

Citations not appearing

  1. Verify RAG is configured
  2. Check citation settings are enabled
  3. Ensure documents are synced
  4. Look for similarity score threshold issues

Version publish not taking effect

  1. Refresh the page
  2. Check you published (not just saved)
  3. Verify you’re on the correct prompt
  4. Clear browser cache if needed
