
API Publishing and Embedding

This guide explains how to publish prompts as APIs, generate API keys, embed chatbots in external applications, and integrate PromptOwl with your systems.


Table of Contents

  1. Publishing Overview
  2. Making a Prompt Live
  3. Generating API Keys
  4. Using the API
  5. Embedding Chatbots
  6. Customizing Embedded Chat
  7. Variables and Parameters
  8. Streaming Responses
  9. Conversation Management
  10. Security Considerations
  11. Best Practices
  12. Troubleshooting

Publishing Overview

PromptOwl allows you to expose your prompts as APIs for external applications.

What You Can Do

| Feature | Description |
|---|---|
| API Access | Call prompts via HTTP POST |
| Embed Chatbot | Add chat widget to any website |
| Custom Variables | Pass runtime parameters |
| Conversation History | Maintain context across calls |
| Model Override | Change LLM settings per request |

Integration Options

| Method | Best For |
|---|---|
| REST API | Backend integrations, apps |
| iFrame Embed | Website chat widgets |
| JavaScript | Custom web implementations |

Making a Prompt Live

Before a prompt can be accessed via API, it must be set to “Live” status.

Publishing a Prompt

  1. Open your prompt
  2. Navigate to Publish tab
  3. Toggle status to Live
  4. Prompt is now accessible via API

Live vs Draft Status

| Status | API Access | Internal Use |
|---|---|---|
| Live | Enabled | Yes |
| Draft | Blocked | Yes |

Checking Publish Status

  • Live prompts show green indicator
  • Draft prompts show gray indicator
  • Status visible on prompt card and publish page

Note: Non-live prompts return a 400 error when called via the API.


Generating API Keys

API keys authenticate external requests to your prompts.

Creating an API Key

  1. Open your prompt
  2. Go to Publish tab
  3. Find API Key section
  4. Click Generate API Key
  5. Copy the key immediately

API Key Format

```
po_[64-character-hexadecimal-string]
```

Example:

```
po_a1b2c3d4e5f6...
```
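
Given the documented shape (`po_` plus a 64-character hexadecimal string), a quick client-side check can catch copy/paste mistakes before a request is ever sent. This is a sketch, not part of the PromptOwl API; the assumption of lowercase hex is ours.

```javascript
// Sanity-check that a key matches the documented shape: "po_" + 64 hex chars.
// Lowercase hex is assumed; this is a local guard, not authentication.
function looksLikePromptOwlKey(key) {
  return /^po_[0-9a-f]{64}$/.test(key);
}
```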

Important: Save Your Key

| Warning | Details |
|---|---|
| One-time display | Key shown only at generation |
| Cannot retrieve | No way to view key again |
| Regenerate if lost | Creates new key, invalidates old |

Regenerating Keys

If you lose or need to rotate your key:

  1. Go to Publish tab
  2. Click Regenerate API Key
  3. Old key immediately invalidated
  4. New key generated
  5. Update all integrations

Key Properties

| Property | Description |
|---|---|
| One per prompt | Single active key per prompt |
| User-bound | Tied to your account |
| Toggleable | Can enable/disable |
| Secure | Stored as hash, never plaintext |

Using the API

Endpoint

```
POST https://your-domain.com/api/prompt/{promptId}
```

Authentication

Include API key in header:

```
X-API-Key: po_your-api-key-here
```

Basic Request

```bash
curl -X POST https://promptowl.ai/api/prompt/YOUR_PROMPT_ID \
  -H "Content-Type: application/json" \
  -H "X-API-Key: po_your-api-key" \
  -d '{
    "sessionId": "user-123",
    "message": "Hello, how can you help me?"
  }'
```

Request Body

| Field | Type | Required | Description |
|---|---|---|---|
| sessionId | string | Yes | User/session identifier |
| message | string | Yes | User's input message |
| previousMessages | array | No | Conversation history |
| variables | object | No | Runtime variable values |
| llmType | string | No | Override model provider |
| llmSettings | object | No | Override model parameters |
| streaming | boolean | No | Enable streaming response |
| conversationId | string | No | Continue specific conversation |

Response Format

```json
{
  "id": "conv_abc123",
  "conversationId": "conv_abc123",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant..." },
    { "role": "user", "content": "Hello, how can you help me?" },
    { "role": "assistant", "content": "Hello! I'm here to help you with..." }
  ],
  "totalTokenUsed": 150,
  "citations": [],
  "previousMessages": []
}
```

Response Fields

| Field | Description |
|---|---|
| id | Conversation identifier |
| conversationId | Same as id, for reference |
| messages | Full message history |
| totalTokenUsed | Token count for billing |
| citations | Source references (if RAG) |
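
Because `messages` contains the full history (system, user, and assistant turns), a client usually wants just the newest assistant turn. A minimal helper, based on the response shape shown above (the function name is illustrative):

```javascript
// Return the content of the last assistant message in a PromptOwl response,
// or null if no assistant turn is present.
function latestAssistantReply(response) {
  const replies = response.messages.filter(m => m.role === "assistant");
  return replies.length > 0 ? replies[replies.length - 1].content : null;
}
```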

Embedding Chatbots

Embed a chat widget directly in any website.

Getting Embed Code

  1. Open your prompt
  2. Go to Publish tab
  3. Find Chatbot Embed Generator
  4. Copy the embed code

iFrame Embed

```html
<iframe
  width="480px"
  height="860px"
  src="https://promptowl.ai/chatPopup/SESSION_ID/PROMPT_ID"
  style="border: none;">
</iframe>
```

JavaScript Embed

```html
<script>
  window.PromptOwlConfig = {
    promptId: "YOUR_PROMPT_ID",
    sessionId: "user-" + Date.now(),
    position: "bottom-right"
  };
</script>
<script src="https://promptowl.ai/embed.js"></script>
```

Embed Parameters

| Parameter | Description | Example |
|---|---|---|
| SESSION_ID | User identifier | user-123 |
| PROMPT_ID | Your prompt ID | abc123def456 |

Customizing Embedded Chat

Color Customization

Configure chat appearance in the publish interface:

| Setting | Description |
|---|---|
| Header Background | Chat header color |
| Header Text | Header text color |
| User Bubble | User message background |
| User Text | User message text color |

Branding Options

| Option | Description |
|---|---|
| Hide Logo | Remove PromptOwl branding |
| Custom Colors | Match your brand |
| Size | Adjust width/height |

Size Recommendations

| Use Case | Width | Height |
|---|---|---|
| Sidebar widget | 350px | 500px |
| Full panel | 480px | 860px |
| Mobile | 100% | 100% |

Variables and Parameters

Pass dynamic values to your prompts at runtime.

Using Variables

Include variables in your API request:

```json
{
  "sessionId": "user-123",
  "message": "What's my account status?",
  "variables": {
    "user_name": "John Smith",
    "account_id": "ACC-12345",
    "subscription": "Premium"
  }
}
```

Variable Syntax

In your prompt, reference variables with:

Hello `{user_name}`, your account `{account_id}` is on the `{subscription}` plan.
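
The substitution itself happens server-side in PromptOwl, but the effect can be illustrated with a small sketch (`fillVariables` is hypothetical, not part of the API):

```javascript
// Illustrative only: replace {name} placeholders with values from the
// "variables" object, leaving unknown placeholders untouched.
function fillVariables(template, variables) {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in variables ? String(variables[name]) : match
  );
}
```

For example, `fillVariables("Hello {user_name}", { user_name: "John Smith" })` produces `"Hello John Smith"`.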

Overriding LLM Settings

Change model settings per request:

```json
{
  "sessionId": "user-123",
  "message": "Write a creative story",
  "llmType": "openai",
  "llmSettings": {
    "model": "gpt-4",
    "temperature": 1.2,
    "max_tokens": 2000
  }
}
```

Available LLM Settings

| Setting | Description | Range |
|---|---|---|
| model | Specific model | Provider-dependent |
| temperature | Creativity | 0-2 |
| max_tokens | Response length | Model-dependent |
| top_p | Nucleus sampling | 0-1 |

Streaming Responses

Get real-time token-by-token responses.

Enabling Streaming

```json
{
  "sessionId": "user-123",
  "message": "Explain quantum computing",
  "streaming": true
}
```

Handling Streamed Response

The response is sent as Server-Sent Events (SSE):

```javascript
const response = await fetch('/api/prompt/ID', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': 'po_...'
  },
  body: JSON.stringify({
    sessionId: 'user-123',
    message: 'Hello',
    streaming: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  console.log(chunk); // Process each token
}
```

Streaming Benefits

| Benefit | Description |
|---|---|
| Faster perceived response | See text as it generates |
| Better UX | Users see progress |
| Long responses | Handle large outputs |

Conversation Management

Maintain context across multiple API calls.

Continuing Conversations

Pass conversationId to continue:

```json
{
  "sessionId": "user-123",
  "message": "Tell me more about that",
  "conversationId": "conv_abc123"
}
```

Using Previous Messages

Pass conversation history manually:

```json
{
  "sessionId": "user-123",
  "message": "What was the third point?",
  "previousMessages": [
    { "role": "user", "content": "List 5 benefits of exercise" },
    { "role": "assistant", "content": "Here are 5 benefits..." }
  ]
}
```
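
A common pattern is a tiny wrapper that remembers the `conversationId` returned by the first call and sends it on every follow-up, so each turn keeps context. This is a hedged sketch: the endpoint and field names follow this guide, and `fetchImpl` is injectable purely so the wrapper can be exercised without a live server.

```javascript
// Store conversationId after the first call so later calls continue the
// same conversation automatically.
function createConversation(sessionId, promptId, apiKey, fetchImpl = fetch) {
  let conversationId = null;
  return async function send(message) {
    const body = { sessionId, message };
    if (conversationId) body.conversationId = conversationId;
    const res = await fetchImpl(`https://promptowl.ai/api/prompt/${promptId}`, {
      method: "POST",
      headers: { "Content-Type": "application/json", "X-API-Key": apiKey },
      body: JSON.stringify(body)
    });
    const data = await res.json();
    conversationId = data.conversationId || conversationId; // reuse next turn
    return data;
  };
}
```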

Session ID Best Practices

| Use Case | Session ID Pattern |
|---|---|
| Per-user | `user-{userId}` |
| Per-session | `session-{uuid}` |
| Anonymous | `anon-{timestamp}` |

Security Considerations

API Key Security

| Do | Don't |
|---|---|
| Store keys in environment variables | Hardcode in client-side code |
| Use server-side proxy | Expose in browser |
| Rotate keys periodically | Share keys publicly |
| Use HTTPS only | Send over HTTP |

Backend Proxy Pattern

Instead of calling API directly from browser:

```javascript
// Your backend server (Express assumed; express.json() is needed for req.body)
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  const response = await fetch('https://promptowl.ai/api/prompt/ID', {
    method: 'POST',
    headers: {
      'X-API-Key': process.env.PROMPTOWL_API_KEY,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(req.body)
  });
  res.json(await response.json());
});
```

CORS

PromptOwl API allows cross-origin requests:

  • All origins permitted (*)
  • POST and OPTIONS methods
  • X-API-Key header allowed

Rate Limiting

Note: Contact your administrator or PromptOwl support to configure rate limiting for your API endpoints.


Best Practices

API Integration

Do:

  • Use a backend proxy
  • Store keys securely
  • Handle errors gracefully
  • Implement retry logic
  • Log API calls for debugging
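
The retry advice above can be sketched as a small exponential-backoff wrapper. This is illustrative, not part of the PromptOwl SDK: it retries network errors and 5xx responses, and returns 4xx responses immediately since those indicate a bad request or key rather than a transient failure.

```javascript
// Retry a fetch-style call with exponential backoff (0.5s, 1s, 2s, ...).
// "callApi" is any function returning a Response-like object with .status.
async function withRetry(callApi, attempts = 3, baseDelayMs = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await callApi();
      if (res.status >= 500 && i < attempts - 1) {
        throw new Error(`HTTP ${res.status}`); // transient: retry
      }
      return res; // success, 4xx, or final 5xx: hand back to the caller
    } catch (err) {
      if (i === attempts - 1) throw err;
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
}
```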

Don’t:

  • Expose keys in frontend
  • Ignore error responses
  • Skip validation
  • Overload with requests

Embedded Chat

Do:

  • Match brand colors
  • Test on mobile
  • Consider position carefully
  • Provide clear instructions

Don’t:

  • Make chat too small
  • Clash with page colors
  • Block important content
  • Forget mobile users

Performance

Do:

  • Use streaming for long responses
  • Cache when possible
  • Set appropriate token limits
  • Monitor usage

Don’t:

  • Request unnecessarily large responses
  • Poll continuously
  • Ignore token costs

Troubleshooting

“PromptOwl API key not found”

Cause: Missing API key header

Solutions:

  1. Check header name: X-API-Key
  2. Verify key is included in request
  3. Check for typos in key

“Invalid API key”

Cause: Key doesn’t match or is inactive

Solutions:

  1. Verify key is correct
  2. Check key isn’t regenerated
  3. Ensure key is active
  4. Generate new key if needed

“This prompt is not live”

Cause: Prompt is in Draft status

Solutions:

  1. Go to Publish tab
  2. Toggle to Live status
  3. Save changes
  4. Retry API call

Empty or Error Responses

Solutions:

  1. Check request body format
  2. Verify JSON is valid
  3. Ensure required fields present
  4. Check message isn’t empty

Embed Not Loading

Solutions:

  1. Verify prompt ID is correct
  2. Check prompt is Live
  3. Ensure HTTPS on your page
  4. Check browser console for errors

Streaming Not Working

Solutions:

  1. Verify streaming: true in request
  2. Check client handles SSE
  3. Ensure connection isn’t closed early
  4. Test with non-streaming first

Quick Reference

API Endpoint

```
POST /api/prompt/{promptId}
Header: X-API-Key: po_your-key
```

Minimum Request

```json
{
  "sessionId": "user-id",
  "message": "Your message"
}
```

Full Request Options

```json
{
  "sessionId": "string",
  "message": "string",
  "previousMessages": [],
  "variables": {},
  "llmType": "openai",
  "llmSettings": {},
  "streaming": false,
  "conversationId": "string"
}
```

HTTP Status Codes

| Code | Meaning |
|---|---|
| 200 | Success |
| 400 | Prompt not live or bad request |
| 401 | Invalid or missing API key |
| 500 | Server error |
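
A client can turn the table above into a dispatch on the response status. The suggested actions are paraphrased from this guide; the function name is illustrative:

```javascript
// Map a PromptOwl HTTP status code to a suggested next step.
function nextStepFor(status) {
  if (status === 200) return "success";
  if (status === 400) return "set the prompt to Live and validate the request body";
  if (status === 401) return "check the X-API-Key header and key validity";
  if (status >= 500) return "server error: retry with backoff";
  return "unexpected status";
}
```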

Embed Template

```html
<iframe
  src="https://promptowl.ai/chatPopup/{sessionId}/{promptId}"
  width="480"
  height="860">
</iframe>
```
