Understanding Agents: Simple, Sequential, and Supervisor Modes
This comprehensive guide explains how to build AI agents in PromptOwl, including the different agent types, how knowledge retrieval (RAG) works, version management, and citation systems.
Table of Contents
- Agent Overview
- Simple Agents
- Sequential Agents
- Supervisor Agents (Multi-Agent)
- Version Management
- RAG: Retrieval Augmented Generation
- Citations
- Best Practices
Agent Overview
In PromptOwl, an “agent” is an AI-powered prompt that can answer questions, perform tasks, and retrieve information from your documents. Agents come in three types:
| Type | Best For | Complexity |
|---|---|---|
| Simple | Single-purpose tasks, Q&A | Low |
| Sequential | Multi-step workflows | Medium |
| Supervisor | Complex multi-agent orchestration | High |
Choosing the Right Agent Type
Need to answer questions from documents?
→ Simple Agent with RAG
Need to process in stages (research → analyze → format)?
→ Sequential Agent
Need multiple specialists working together?
→ Supervisor Agent
Simple Agents
Simple agents are the foundation of PromptOwl. They consist of a single system context that defines the AI’s behavior.
When to Use Simple Agents
- FAQ bots and knowledge bases
- Single-purpose assistants (customer support, onboarding)
- Document Q&A with citations
- Basic content generation
Creating a Simple Agent
- Click + New on the Dashboard
- Keep the default Simple type selected
- Enter your agent details:
- Name: Descriptive name (e.g., “Product Support Bot”)
- Description: What this agent does
- Write your System Context
System Context Best Practices
The system context defines your agent’s personality, capabilities, and constraints:
You are a helpful customer support agent for [Company Name].
Your role:
- Answer questions about our products and services
- Help users troubleshoot common issues
- Escalate complex problems to human support
Guidelines:
- Be friendly and professional
- Only answer based on the provided knowledge base
- If you don't know something, say so honestly
- Never make up information
Adding Knowledge with RAG
To give your agent access to documents:
- Go to the Variables section
- Click Add Variable
- Name it (e.g., `knowledge_base`)
- Click Connect Data
- Select a folder from your Data Room
- Reference it in your system context: `{knowledge_base}`
Your system context becomes:
You are a support agent. Use this knowledge base to answer questions:
`{knowledge_base}`
Only answer based on the information provided above.
Simple Agent Architecture
┌─────────────────────────────────────────────┐
│ User Query │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ RAG Retrieval (if configured) │
│ Search documents → Return relevant chunks │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ System Context │
│ Your instructions + Retrieved content │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ AI Model │
│ Generate response │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Response + Citations │
└─────────────────────────────────────────────┘
Sequential Agents
Sequential agents execute multiple steps in order, where each step can have its own AI model, prompt, and data connections.
When to Use Sequential Agents
- Multi-stage content creation (research → draft → edit)
- Data processing pipelines
- Analysis workflows (extract → analyze → summarize)
- Quality assurance chains (generate → review → refine)
Creating a Sequential Agent
- Click + New on the Dashboard
- Change the type to Sequential
- You’ll see the block-based interface
Understanding Blocks
Each block is a separate AI step. Blocks execute in order from top to bottom.
Block Configuration
| Setting | Description |
|---|---|
| Name | Descriptive step name (e.g., “Research”, “Analyze”) |
| Prompt Source | “Inline” (write here) or “Use Existing” (reference another prompt) |
| AI Model | Which model handles this step |
| Tools | Tools available to this block |
| Dataset | Documents for this block’s RAG |
| Variables | Values passed to this block |
Example: Content Creation Pipeline
Block 1: Research
- Model: GPT-4 (good at analysis)
- Prompt: “Research the following topic thoroughly: {topic}”
- Dataset: Research documents folder
Block 2: Draft
- Model: Claude 3 (good at writing)
- Prompt: “Write a draft article based on: {{research}}”
- Variables: `research` mapped to Block 1 output
Block 3: Polish
- Model: GPT-4
- Prompt: “Edit for clarity and grammar: {{draft}}”
- Variables: `draft` mapped to Block 2 output
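The three-block pipeline above can be sketched conceptually. This is not the PromptOwl API: `call_model` is a hypothetical stand-in for whatever model backs each block, and the substitution mirrors the `{topic}` variable and `{{block-key}}` syntax described in this guide.

```python
# Conceptual sketch of a sequential agent: each block's output is stored
# under its key and substituted into later prompts via {{key}}.
# `call_model` is a hypothetical placeholder, not a real API.

def call_model(model: str, prompt: str) -> str:
    # A real implementation would call the model provider here.
    return f"[{model} output for: {prompt[:40]}...]"

def run_pipeline(blocks, topic):
    outputs = {}
    for block in blocks:
        prompt = block["prompt"].replace("{topic}", topic)
        for key, value in outputs.items():
            prompt = prompt.replace("{{" + key + "}}", value)
        outputs[block["key"]] = call_model(block["model"], prompt)
    return outputs[blocks[-1]["key"]]  # the last block's output is the answer

blocks = [
    {"key": "research", "model": "GPT-4",
     "prompt": "Research the following topic thoroughly: {topic}"},
    {"key": "draft", "model": "Claude 3",
     "prompt": "Write a draft article based on: {{research}}"},
    {"key": "polish", "model": "GPT-4",
     "prompt": "Edit for clarity and grammar: {{draft}}"},
]

final = run_pipeline(blocks, "market trends")
```

Each block sees only what earlier blocks produced, which is why block order and key names matter.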
Passing Data Between Blocks
Use double curly braces `{{block-key}}` to reference previous block outputs:
Block 1 (key: research)
Output: "Key findings about market trends..."
Block 2 prompt:
"Analyze the following research: {{research}}"
Becomes: "Analyze the following research: Key findings about market trends..."
Block Keys
Each block has a unique key used for referencing:
- Auto-generated from block name (e.g., “Research” → `research`)
- Used in `{{block-key}}` syntax
- Can be customized in block settings
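Block keys behave like simple template slots. A toy illustration of how a name such as “Research” might normalize to a key and then resolve inside a prompt (the normalization rule here is an assumption, not PromptOwl's exact algorithm):

```python
import re

def to_block_key(name: str) -> str:
    # Assumed normalization: lowercase, spaces to hyphens, strip the rest.
    # PromptOwl's exact rule may differ.
    key = name.strip().lower().replace(" ", "-")
    return re.sub(r"[^a-z0-9-]", "", key)

def resolve(prompt: str, outputs: dict) -> str:
    # Replace every {{key}} with the matching block output;
    # unknown keys are left untouched.
    return re.sub(r"\{\{(.+?)\}\}",
                  lambda m: outputs.get(m.group(1), m.group(0)),
                  prompt)

outputs = {"research": "Key findings about market trends..."}
print(resolve("Analyze the following research: {{research}}", outputs))
# → Analyze the following research: Key findings about market trends...
```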
Using Existing Prompts in Blocks
Instead of writing inline, reference existing prompts:
- Set Prompt Source to “Use Existing”
- Click Select Prompt
- Choose from your prompt library
- Select the version to use
- Map any required variables
This enables:
- Reusing tested prompts
- Maintaining single source of truth
- Version control within blocks
Sequential Agent Architecture
┌─────────────────────────────────────────────┐
│ User Query │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Block 1: Research │
│ Model: GPT-4 | Dataset: Research Docs │
│ Output saved as {{research}} │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Block 2: Draft │
│ Model: Claude 3 | Input: {{research}} │
│ Output saved as {{draft}} │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Block 3: Polish │
│ Model: GPT-4 | Input: {{draft}} │
│ Final output returned to user │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Response + Citations │
└─────────────────────────────────────────────┘
Human Messages Between Blocks
Add messages shown to users between steps:
- Expand block settings
- Find Human Message field
- Enter the message (e.g., “Analyzing your request…”)
This provides feedback during long workflows.
Supervisor Agents (Multi-Agent)
Supervisor agents use a coordinator that orchestrates multiple specialized agents. The supervisor decides which agent(s) to invoke based on the task.
When to Use Supervisor Agents
- Tasks requiring different expertise (legal + financial + technical)
- Dynamic routing based on query type
- Complex decision-making workflows
- Parallel agent execution
Creating a Supervisor Agent
- Click + New on the Dashboard
- Change the type to Supervisor
- Configure the supervisor block and agent blocks
![Screenshot: Supervisor Type Selection]
The Supervisor Block
The supervisor block is marked with a special indicator. It:
- Receives all user queries first
- Decides which agent(s) to call
- Coordinates responses from multiple agents
- Synthesizes final answers
Supervisor Prompt Example
You are a supervisor coordinating a team of specialized agents.
Available agents:
- Legal Agent: Handles legal questions, contracts, compliance
- Technical Agent: Handles technical questions, troubleshooting
- Sales Agent: Handles pricing, features, comparisons
Your job:
1. Analyze the user's question
2. Route to the appropriate agent(s)
3. Combine responses into a coherent answer
If a question spans multiple domains, call multiple agents.
Agent Blocks
Each non-supervisor block is a specialized agent:
| Setting | Description |
|---|---|
| Name | Agent specialty (e.g., “Legal Agent”) |
| Prompt | Agent-specific instructions |
| Model | Can differ from other agents |
| Dataset | Agent-specific knowledge base |
| Tools | Agent-specific tools |
Example: Customer Support Supervisor
Supervisor Block:
Route customer queries to the appropriate specialist:
- Billing Agent: Payment issues, invoices, refunds
- Technical Agent: Product issues, bugs, how-to
- Account Agent: Login, settings, profile changes
Billing Agent Block:
- Dataset: Billing policies folder
- Prompt: “You are a billing specialist. Help with payment-related questions…”
Technical Agent Block:
- Dataset: Product documentation folder
- Prompt: “You are a technical support specialist. Troubleshoot issues…”
Account Agent Block:
- Dataset: Account FAQ folder
- Prompt: “You are an account specialist. Help with account management…”
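Stripped of the AI, the routing step resembles a dispatch table. In PromptOwl the supervisor's model makes this decision from the agent descriptions in its prompt; the keyword matching below is only a stand-in to show the shape (all names and keywords are hypothetical):

```python
# Toy routing sketch: keywords stand in for the supervisor model's
# judgment about which specialist agents fit a query.

AGENT_KEYWORDS = {
    "billing": ["charge", "invoice", "refund", "payment"],
    "technical": ["bug", "error", "crash", "doesn't work"],
    "account": ["password", "login", "profile", "settings"],
}

def route(query: str) -> list[str]:
    q = query.lower()
    chosen = [agent for agent, words in AGENT_KEYWORDS.items()
              if any(w in q for w in words)]
    return chosen or ["technical"]  # fallback when nothing matches

print(route("How do I reset my password?"))  # → ['account']
print(route("I was charged twice and the product doesn't work"))
# → ['billing', 'technical']
```

Note that a query spanning two domains yields two agents, matching the multi-agent invocation behavior described below in this guide.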
Supervisor Agent Architecture
┌─────────────────────────────────────────────┐
│ User Query │
│ "How do I reset my password?" │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Supervisor Block │
│ Analyzes query, decides routing │
│ Decision: Route to Account Agent │
└──────────────────────┬──────────────────────┘
↓
┌──────────────┼──────────────┐
↓ ↓ ↓
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ Billing Agent │ │Technical Agent│ │ Account Agent │
│ (skipped) │ │ (skipped) │ │ (invoked) │
└───────────────┘ └───────────────┘ └───────┬───────┘
↓
┌─────────────────────────────────────────────┐
│ Supervisor Block │
│ Receives agent response, formats output │
└──────────────────────┬──────────────────────┘
↓
┌─────────────────────────────────────────────┐
│ Response + Citations │
└─────────────────────────────────────────────┘
Multi-Agent Invocation
The supervisor can call multiple agents for complex queries:
User: "I was charged twice and the product doesn't work"
Supervisor routes to:
1. Billing Agent → Handles duplicate charge
2. Technical Agent → Troubleshoots product issue
Supervisor combines both responses into a unified answer.
Version Management
Every change to your agent is tracked through the version system.
Understanding Versions
| Term | Definition |
|---|---|
| Draft | Work-in-progress, not visible to users |
| Production | Active version users interact with |
| Version History | Complete record of all changes |
Version Workflow
Create Agent → Save Draft (v1)
↓
Make Changes → Save Draft (v2)
↓
Test & Verify → Publish (v2 becomes Production)
↓
Make More Changes → Save Draft (v3)
↓
Problem Found → Rollback to v2 (v2 republished)
Saving Drafts
Click Save to create a new draft version:
- Preserves all current settings
- Does not affect production
- Allows testing before publishing
![Screenshot: Save Button]
Publishing a Version
Click Publish to make a version live:
- Click Publish in the editor
- Add change notes describing updates
- Confirm the publication
Viewing Version History
- Open the Versions panel (right sidebar)
- See all versions with:
- Version number
- Creation date
- Creator name
- Change notes (if added)
- Production indicator
Comparing Versions
To understand what changed between versions:
- Click on a version to preview it
- Compare settings, prompts, and configurations
- Decide whether to restore or continue
Rolling Back
To revert to a previous version:
- Find the version in history
- Click Publish on that version
- Confirm the rollback
Note: This creates a new version based on the old one. No versions are deleted.
Version Best Practices
- Add change notes for every publish
- Test in preview before publishing
- Keep production stable - only publish tested changes
- Use drafts freely - they don’t affect users
RAG: Retrieval Augmented Generation
RAG enables your agents to answer questions using your documents. Understanding RAG is key to building effective knowledge-based agents.
How RAG Works
User asks: "What is the return policy?"
↓
┌───────────────────────────────────────┐
│ 1. SEARCH │
│ Query your document database │
│ Find relevant passages │
└────────────────────┬──────────────────┘
↓
┌───────────────────────────────────────┐
│ 2. RETRIEVE │
│ Extract matching text chunks │
│ Rank by relevance (similarity score) │
└────────────────────┬──────────────────┘
↓
┌───────────────────────────────────────┐
│ 3. AUGMENT │
│ Inject retrieved text into prompt │
│ Give AI context to answer │
└────────────────────┬──────────────────┘
↓
┌───────────────────────────────────────┐
│ 4. GENERATE │
│ AI generates answer using context │
│ Cites sources from documents │
└───────────────────────────────────────┘
Two Ways to Connect Documents
PromptOwl offers two methods to connect documents, each with different behaviors:
Method 1: Prompt-Level RAG (System Context)
Connect documents to the entire agent via variables.
How to set up:
- Add a variable (e.g., `knowledge_base`)
- Connect it to a folder
- Reference it in the system context: `{knowledge_base}`
Behavior:
- Documents retrieved automatically on every message
- Content injected into system context before AI sees the query
- Available to all blocks in sequential/supervisor workflows
- Best for: Core knowledge that applies to all queries
Method 2: Block-Level RAG (Dataset)
Connect documents to specific blocks.
How to set up:
- Expand block settings
- Find Dataset field
- Select a folder or document
Behavior:
- Documents retrieved only when that block executes
- AI decides when to search based on the query
- Each block can have different documents
- Best for: Specialized knowledge per step/agent
Comparing RAG Methods
| Aspect | Prompt-Level | Block-Level |
|---|---|---|
| Timing | Every query | On-demand |
| Scope | Entire agent | Single block |
| Control | Automatic | AI-decided |
| Use Case | Core knowledge | Specialized knowledge |
| Citations | Combined | Per-block |
When to Use Each Method
Use Prompt-Level RAG when:
- Documents are always relevant
- Building a simple Q&A bot
- Need consistent knowledge access
Use Block-Level RAG when:
- Different steps need different documents
- Building specialized agents
- Want AI to decide when to search
- Optimizing for performance (not searching unnecessarily)
Combining Both Methods
For complex agents, combine both approaches:
Supervisor Agent
├── Prompt-Level: Company policies (always available)
│
├── Billing Agent Block
│ └── Dataset: Billing documentation
│
├── Technical Agent Block
│ └── Dataset: Product manuals
│
└── HR Agent Block
└── Dataset: Employee handbook
Document Processing for RAG
When you sync documents, they’re processed for AI search:
- Chunking: Documents split into ~1000 character pieces
- Embedding: Each chunk converted to a vector representation
- Indexing: Vectors stored in searchable database
- Metadata: Title, author, date preserved for citations
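The chunk-and-search steps can be sketched end to end. Real systems compare learned embedding vectors; the word-overlap score below is a self-contained stand-in for that similarity, and the small chunk size is only for brevity (the list above describes ~1000-character chunks):

```python
def chunk(text: str, size: int = 1000) -> list[str]:
    # Split text into pieces of at most `size` characters,
    # breaking only on whitespace boundaries.
    words, chunks, current = text.split(), [], ""
    for w in words:
        if current and len(current) + len(w) + 1 > size:
            chunks.append(current)
            current = w
        else:
            current = (current + " " + w).strip()
    if current:
        chunks.append(current)
    return chunks

def score(query: str, passage: str) -> float:
    # Toy relevance: fraction of query words present in the passage.
    # Real RAG ranks by embedding similarity (e.g. cosine distance).
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

# Retrieve the best-scoring chunk for a query.
doc = "Our return policy allows returns within 30 days of purchase. " * 20
best = max(chunk(doc, 200), key=lambda c: score("what is the return policy", c))
```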
Sync Status and RAG
Documents must be synced before RAG works:
| Status | RAG Available? | Action Needed |
|---|---|---|
| Synced (Green) | Yes | None |
| Modified (Orange) | Partial | Re-sync folder |
| Unsynced (Red) | No | Sync folder |
Always check sync status when RAG isn’t returning expected results.
Citations
Citations show users where answers come from, building trust and enabling verification.
How Citations Work
When RAG retrieves documents, citation data is captured:
Retrieved chunk:
"Our return policy allows returns within 30 days..."
Citation data:
- Title: "Return Policy Guide"
- Author: "Customer Service Team"
- Date: "January 2024"
- Source: "Policies/Returns.pdf"
- Score: 0.89 (89% relevance)
Citation Display Modes
PromptOwl offers two display modes:
Non-Aggregated (Default)
Shows individual citations:
[AI Response]
📄 Sources:
• Return Policy Guide - "Our return policy allows returns within 30 days..."
[View More Citations (3)]
Aggregated
Groups citations by document:
[AI Response]
📚 Sources:
├── Return Policy Guide (2 references)
├── FAQ Document (1 reference)
└── Terms of Service (1 reference)
Configuring Citations
In your prompt settings, configure citation display:
| Setting | Description |
|---|---|
| Aggregate Citations | Group by document vs. show individually |
| Show Similarity Score | Display relevance percentage |
| Show Author | Display document author |
| Show Publish Date | Display document date |
Citation Data Fields
Each citation includes:
| Field | Description | Example |
|---|---|---|
| Title | Document name | “Return Policy Guide” |
| Display Name | Friendly name | “Returns FAQ” |
| Author | Creator | “Support Team” |
| Publish Date | Date created | “Jan 15, 2024” |
| Text | Retrieved passage | “Returns within 30 days…” |
| Score | Relevance (0-1) | 0.89 |
| URL | Link to source | “https://…” |
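Aggregated display groups these fields by document. A small sketch of that grouping step (the field names mirror the table above, but the structure is illustrative, not PromptOwl's internal schema):

```python
from collections import Counter

citations = [
    {"title": "Return Policy Guide", "text": "Returns within 30 days...", "score": 0.89},
    {"title": "Return Policy Guide", "text": "Refunds are issued to...", "score": 0.81},
    {"title": "FAQ Document", "text": "See the returns page...", "score": 0.74},
]

def aggregate(cites):
    # Count references per document title, keeping first-seen order.
    counts = Counter(c["title"] for c in cites)
    return [f"{title} ({n} reference{'s' if n > 1 else ''})"
            for title, n in counts.items()]

for line in aggregate(citations):
    print(line)
# → Return Policy Guide (2 references)
# → FAQ Document (1 reference)
```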
Improving Citation Quality
To get better citations:
- Set Display Names on documents
- Add Authors during upload
- Include Publish Dates for currency
- Write clear document titles
- Structure documents well (headers, sections)
Citations in Sequential/Supervisor Workflows
Citations accumulate across all blocks:
Block 1 (Research) → Citations A, B
Block 2 (Analyze) → Citations C
Block 3 (Format) → No new citations
Final response includes: Citations A, B, C
Duplicates are automatically removed.
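Order-preserving deduplication across blocks can be sketched as follows. Keying on title plus passage text is an assumption about what counts as a duplicate:

```python
def merge_citations(*per_block):
    # Concatenate each block's citations, dropping repeats while
    # keeping first-seen order. Identity = (title, text) — an assumption.
    seen, merged = set(), []
    for block in per_block:
        for cite in block:
            key = (cite["title"], cite["text"])
            if key not in seen:
                seen.add(key)
                merged.append(cite)
    return merged

a = [{"title": "A", "text": "x"}, {"title": "B", "text": "y"}]
b = [{"title": "C", "text": "z"}, {"title": "A", "text": "x"}]  # duplicate of A
print([c["title"] for c in merge_citations(a, b)])  # → ['A', 'B', 'C']
```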
Viewing Full Citations
Click on any citation to open the full citation modal:
- Complete text passage
- All metadata fields
- Link to original document (if available)
- Similarity score (if enabled)
Best Practices
Simple Agent Best Practices
- Keep system context focused and clear
- Connect only relevant document folders
- Test with various query types
- Enable citations for transparency
Sequential Agent Best Practices
- Name blocks descriptively (action-oriented)
- Use appropriate models per step (GPT-4 for analysis, Claude for writing)
- Map variables explicitly between blocks
- Keep chain length reasonable (3-5 blocks)
- Add human messages for long workflows
Supervisor Agent Best Practices
- Write clear agent descriptions for routing
- Give agents distinct, non-overlapping roles
- Test edge cases where routing is ambiguous
- Consider fallback handling
- Keep agent count manageable (2-5 agents)
RAG Best Practices
- Organize documents into logical folders
- Keep documents focused (don’t combine unrelated content)
- Sync regularly after updates
- Test retrieval with expected queries
- Use block-level RAG for specialized knowledge
Citation Best Practices
- Always enable for document-based agents
- Fill in document metadata during upload
- Use aggregated mode for many citations
- Review citation quality in Monitor
Version Management Best Practices
- Save drafts frequently while editing
- Add meaningful change notes
- Test thoroughly before publishing
- Keep production versions stable
- Use rollback when issues arise
Troubleshooting
Agent not using documents
- Check folder sync status (must be green)
- Verify variable is connected properly
- Confirm variable is referenced in prompt
- Test with simple, direct questions
Sequential blocks not passing data
- Check `{{block-key}}` syntax
- Verify the block key matches exactly
- Ensure previous block generates output
- Check for typos in variable names
Supervisor not routing correctly
- Review supervisor prompt clarity
- Ensure agent roles don’t overlap
- Add explicit routing instructions
- Test with clear-cut queries first
Citations not appearing
- Verify RAG is configured
- Check citation settings are enabled
- Ensure documents are synced
- Look for similarity score threshold issues
Version publish not taking effect
- Refresh the page
- Check you published (not just saved)
- Verify you’re on the correct prompt
- Clear browser cache if needed