API Publishing and Embedding
This guide explains how to publish prompts as APIs, generate API keys, embed chatbots in external applications, and integrate PromptOwl with your systems.
Table of Contents
- Publishing Overview
- Making a Prompt Live
- Generating API Keys
- Using the API
- Embedding Chatbots
- Customizing Embedded Chat
- Variables and Parameters
- Streaming Responses
- Conversation Management
- Security Considerations
- Best Practices
- Troubleshooting
Publishing Overview
PromptOwl allows you to expose your prompts as APIs for external applications.
What You Can Do
| Feature | Description |
|---|---|
| API Access | Call prompts via HTTP POST |
| Embed Chatbot | Add chat widget to any website |
| Custom Variables | Pass runtime parameters |
| Conversation History | Maintain context across calls |
| Model Override | Change LLM settings per request |
Integration Options
| Method | Best For |
|---|---|
| REST API | Backend integrations, apps |
| iFrame Embed | Website chat widgets |
| JavaScript | Custom web implementations |
Making a Prompt Live
Before a prompt can be accessed via API, it must be set to “Live” status.
Publishing a Prompt
- Open your prompt
- Navigate to Publish tab
- Toggle status to Live
- Prompt is now accessible via API
Live vs Draft Status
| Status | API Access | Internal Use |
|---|---|---|
| Live | Enabled | Yes |
| Draft | Blocked | Yes |
Checking Publish Status
- Live prompts show green indicator
- Draft prompts show gray indicator
- Status visible on prompt card and publish page
Note: Non-live prompts return an HTTP 400 error when called via the API.
Generating API Keys
API keys authenticate external requests to your prompts.
Creating an API Key
- Open your prompt
- Go to Publish tab
- Find API Key section
- Click Generate API Key
- Copy the key immediately
API Key Format
po_[64-character-hexadecimal-string]
Example:
po_a1b2c3d4e5f6...
Important: Save Your Key
| Warning | Details |
|---|---|
| One-time display | Key shown only at generation |
| Cannot retrieve | No way to view key again |
| Regenerate if lost | Creates new key, invalidates old |
Regenerating Keys
If you lose or need to rotate your key:
- Go to Publish tab
- Click Regenerate API Key
- Old key immediately invalidated
- New key generated
- Update all integrations
Key Properties
| Property | Description |
|---|---|
| One per prompt | Single active key per prompt |
| User-bound | Tied to your account |
| Toggleable | Can enable/disable |
| Secure | Stored as hash, never plaintext |
Using the API
Endpoint
POST https://your-domain.com/api/prompt/{promptId}
Authentication
Include API key in header:
X-API-Key: po_your-api-key-here
Basic Request
curl -X POST https://promptowl.ai/api/prompt/YOUR_PROMPT_ID \
-H "Content-Type: application/json" \
-H "X-API-Key: po_your-api-key" \
-d '{
"sessionId": "user-123",
"message": "Hello, how can you help me?"
}'
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| sessionId | string | Yes | User/session identifier |
| message | string | Yes | User’s input message |
| previousMessages | array | No | Conversation history |
| variables | object | No | Runtime variable values |
| llmType | string | No | Override model provider |
| llmSettings | object | No | Override model parameters |
| streaming | boolean | No | Enable streaming response |
| conversationId | string | No | Continue a specific conversation |
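The same request in JavaScript might look like the sketch below (Node 18+ or any modern browser with global `fetch`; `promptUrl` and `callPrompt` are illustrative helper names, not part of a PromptOwl SDK):

```javascript
// Illustrative wrapper around the endpoint above; the domain is the
// same placeholder used in the curl example.
const BASE_URL = 'https://promptowl.ai';

// Build the per-prompt endpoint URL.
function promptUrl(promptId) {
  return `${BASE_URL}/api/prompt/${promptId}`;
}

// POST a request body and return the parsed JSON response.
async function callPrompt(promptId, apiKey, requestBody) {
  const res = await fetch(promptUrl(promptId), {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey
    },
    body: JSON.stringify(requestBody)
  });
  if (!res.ok) throw new Error(`PromptOwl request failed: ${res.status}`);
  return res.json();
}
```

Called with the minimum fields, for example: `callPrompt('YOUR_PROMPT_ID', 'po_your-api-key', { sessionId: 'user-123', message: 'Hello' })`.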
Response Format
{
"id": "conv_abc123",
"conversationId": "conv_abc123",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant..."
},
{
"role": "user",
"content": "Hello, how can you help me?"
},
{
"role": "assistant",
"content": "Hello! I'm here to help you with..."
}
],
"totalTokenUsed": 150,
"citations": [],
"previousMessages": []
}
Response Fields
| Field | Description |
|---|---|
| id | Conversation identifier |
| conversationId | Same as id, for reference |
| messages | Full message history |
| totalTokenUsed | Token count for billing |
| citations | Source references (if RAG) |
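To work with a response shaped like the example above, a small helper can pull out the assistant's latest reply (illustrative only, not part of a PromptOwl SDK):

```javascript
// Return the content of the last assistant message, or null if none.
function latestAssistantMessage(response) {
  const assistant = response.messages.filter(m => m.role === 'assistant');
  return assistant.length ? assistant[assistant.length - 1].content : null;
}

// Response shaped like the example above.
const example = {
  id: 'conv_abc123',
  messages: [
    { role: 'system', content: 'You are a helpful assistant...' },
    { role: 'user', content: 'Hello' },
    { role: 'assistant', content: "Hello! I'm here to help." }
  ],
  totalTokenUsed: 150
};

console.log(latestAssistantMessage(example)); // "Hello! I'm here to help."
```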
Embedding Chatbots
Embed a chat widget directly in any website.
Getting Embed Code
- Open your prompt
- Go to Publish tab
- Find Chatbot Embed Generator
- Copy the embed code
iFrame Embed
<iframe
width="480px"
height="860px"
src="https://promptowl.ai/chatPopup/SESSION_ID/PROMPT_ID"
style="border: none;">
</iframe>
JavaScript Embed
<script>
window.PromptOwlConfig = {
promptId: "YOUR_PROMPT_ID",
sessionId: "user-" + Date.now(),
position: "bottom-right"
};
</script>
<script src="https://promptowl.ai/embed.js"></script>
Embed Parameters
| Parameter | Description | Example |
|---|---|---|
| SESSION_ID | User identifier | user-123 |
| PROMPT_ID | Your prompt ID | abc123def456 |
Customizing Embedded Chat
Color Customization
Configure chat appearance in the publish interface:
| Setting | Description |
|---|---|
| Header Background | Chat header color |
| Header Text | Header text color |
| User Bubble | User message background |
| User Text | User message text color |
Branding Options
| Option | Description |
|---|---|
| Hide Logo | Remove PromptOwl branding |
| Custom Colors | Match your brand |
| Size | Adjust width/height |
Size Recommendations
| Use Case | Width | Height |
|---|---|---|
| Sidebar widget | 350px | 500px |
| Full panel | 480px | 860px |
| Mobile | 100% | 100% |
Variables and Parameters
Pass dynamic values to your prompts at runtime.
Using Variables
Include variables in your API request:
{
"sessionId": "user-123",
"message": "What's my account status?",
"variables": {
"user_name": "John Smith",
"account_id": "ACC-12345",
"subscription": "Premium"
}
}
Variable Syntax
In your prompt, reference variables with:
Hello `{user_name}`, your account `{account_id}` is on the `{subscription}` plan.
Overriding LLM Settings
Change model settings per request:
{
"sessionId": "user-123",
"message": "Write a creative story",
"llmType": "openai",
"llmSettings": {
"model": "gpt-4",
"temperature": 1.2,
"max_tokens": 2000
}
}
Available LLM Settings
| Setting | Description | Range |
|---|---|---|
| model | Specific model | Provider-dependent |
| temperature | Creativity | 0-2 |
| max_tokens | Response length | Model-dependent |
| top_p | Nucleus sampling | 0-1 |
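Putting variables and settings overrides together, a request body can be assembled with the numeric settings clamped to the ranges in the table above (`buildRequestBody` is a hypothetical helper, not a PromptOwl SDK function):

```javascript
// Illustrative sketch: build a request body with runtime variables and
// llmSettings overrides, clamping temperature and top_p to valid ranges.
function buildRequestBody({ sessionId, message, variables = {}, llmType, llmSettings = {} }) {
  const clamp = (value, min, max) => Math.min(max, Math.max(min, value));
  const settings = { ...llmSettings };
  if ('temperature' in settings) settings.temperature = clamp(settings.temperature, 0, 2);
  if ('top_p' in settings) settings.top_p = clamp(settings.top_p, 0, 1);
  const requestBody = { sessionId, message, variables, llmSettings: settings };
  if (llmType) requestBody.llmType = llmType;
  return requestBody;
}

const exampleBody = buildRequestBody({
  sessionId: 'user-123',
  message: 'Write a creative story',
  llmType: 'openai',
  llmSettings: { model: 'gpt-4', temperature: 2.5 }
});
console.log(exampleBody.llmSettings.temperature); // 2 (clamped from 2.5)
```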
Streaming Responses
Get real-time token-by-token responses.
Enabling Streaming
{
"sessionId": "user-123",
"message": "Explain quantum computing",
"streaming": true
}
Handling Streamed Response
The response is sent as Server-Sent Events (SSE):
const response = await fetch('/api/prompt/ID', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': 'po_...'
},
body: JSON.stringify({
sessionId: 'user-123',
message: 'Hello',
streaming: true
})
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
console.log(chunk); // Process each token
}
Streaming Benefits
| Benefit | Description |
|---|---|
| Faster perceived response | See text as it generates |
| Better UX | Users see progress |
| Long responses | Handle large outputs |
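If each streamed event arrives as standard SSE `data:` lines (an assumption; verify the exact event format against your deployment), the chunks read in the loop above can be split into token payloads:

```javascript
// Hedged sketch: split raw SSE text into its `data:` payloads.
// The exact event shape PromptOwl emits may differ.
function parseSseChunk(chunkText) {
  return chunkText
    .split('\n')
    .filter(line => line.startsWith('data: '))
    .map(line => line.slice('data: '.length));
}

console.log(parseSseChunk('data: Hello\n\ndata: world\n\n'));
// [ 'Hello', 'world' ]
```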
Conversation Management
Maintain context across multiple API calls.
Continuing Conversations
Pass conversationId to continue:
{
"sessionId": "user-123",
"message": "Tell me more about that",
"conversationId": "conv_abc123"
}
Using Previous Messages
Pass conversation history manually:
{
"sessionId": "user-123",
"message": "What was the third point?",
"previousMessages": [
{"role": "user", "content": "List 5 benefits of exercise"},
{"role": "assistant", "content": "Here are 5 benefits..."}
]
}
Session ID Best Practices
| Use Case | Session ID Pattern |
|---|---|
| Per-user | user-{userId} |
| Per-session | session-{uuid} |
| Anonymous | anon-{timestamp} |
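Carrying history across turns, as described above, can be sketched client-side (the `Conversation` class is a hypothetical helper, not a PromptOwl SDK):

```javascript
// Illustrative helper that accumulates previousMessages across turns.
class Conversation {
  constructor(sessionId) {
    this.sessionId = sessionId;
    this.previousMessages = [];
  }

  // Build the request body for the next turn.
  buildRequest(message) {
    return {
      sessionId: this.sessionId,
      message,
      previousMessages: this.previousMessages
    };
  }

  // Record both sides of a completed turn.
  record(userMessage, assistantReply) {
    this.previousMessages.push(
      { role: 'user', content: userMessage },
      { role: 'assistant', content: assistantReply }
    );
  }
}

const convo = new Conversation('user-123');
convo.record('List 5 benefits of exercise', 'Here are 5 benefits...');
const nextTurn = convo.buildRequest('What was the third point?');
console.log(nextTurn.previousMessages.length); // 2
```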
Security Considerations
API Key Security
| Do | Don’t |
|---|---|
| Store keys in environment variables | Hardcode in client-side code |
| Use server-side proxy | Expose in browser |
| Rotate keys periodically | Share keys publicly |
| Use HTTPS only | Send over HTTP |
Backend Proxy Pattern
Instead of calling API directly from browser:
// Your backend server (Express on Node 18+, which provides global fetch)
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  const response = await fetch('https://promptowl.ai/api/prompt/ID', {
    method: 'POST',
    headers: {
      'X-API-Key': process.env.PROMPTOWL_API_KEY,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(req.body)
  });
  res.json(await response.json());
});
CORS
PromptOwl API allows cross-origin requests:
- All origins permitted (`*`)
- POST and OPTIONS methods allowed
- X-API-Key header allowed
Rate Limiting
Note: Contact your administrator or PromptOwl support to configure rate limiting for your API endpoints.
Best Practices
API Integration
Do:
- Use a backend proxy
- Store keys securely
- Handle errors gracefully
- Implement retry logic
- Log API calls for debugging
Don’t:
- Expose keys in frontend
- Ignore error responses
- Skip validation
- Overload with requests
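The retry advice above can be sketched with exponential backoff (`withRetry` is an illustrative helper; `fetchFn` is any function returning a Promise, such as a wrapped API call):

```javascript
// Sketch: retry a Promise-returning function with exponential backoff.
async function withRetry(fetchFn, { retries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fetchFn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Wait 500 ms, 1000 ms, 2000 ms, ... between attempts.
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

For example, `withRetry(() => fetch(url, options))` absorbs transient failures before giving up; pair it with logging so repeated failures surface in your monitoring.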
Embedded Chat
Do:
- Match brand colors
- Test on mobile
- Consider position carefully
- Provide clear instructions
Don’t:
- Make chat too small
- Clash with page colors
- Block important content
- Forget mobile users
Performance
Do:
- Use streaming for long responses
- Cache when possible
- Set appropriate token limits
- Monitor usage
Don’t:
- Request unnecessarily large responses
- Poll continuously
- Ignore token costs
Troubleshooting
“PromptOwl API key not found”
Cause: Missing API key header
Solutions:
- Check header name: `X-API-Key`
- Verify key is included in request
- Check for typos in key
“Invalid API key”
Cause: Key doesn’t match or is inactive
Solutions:
- Verify key is correct
- Check key isn’t regenerated
- Ensure key is active
- Generate new key if needed
“This prompt is not live”
Cause: Prompt is in Draft status
Solutions:
- Go to Publish tab
- Toggle to Live status
- Save changes
- Retry API call
Empty or Error Responses
Solutions:
- Check request body format
- Verify JSON is valid
- Ensure required fields present
- Check message isn’t empty
Embed Not Loading
Solutions:
- Verify prompt ID is correct
- Check prompt is Live
- Ensure HTTPS on your page
- Check browser console for errors
Streaming Not Working
Solutions:
- Verify `streaming: true` in request
- Check client handles SSE
- Ensure connection isn’t closed early
- Test with non-streaming first
Quick Reference
API Endpoint
POST /api/prompt/{promptId}
Header: X-API-Key: po_your-key
Minimum Request
{
"sessionId": "user-id",
"message": "Your message"
}
Full Request Options
{
"sessionId": "string",
"message": "string",
"previousMessages": [],
"variables": {},
"llmType": "openai",
"llmSettings": {},
"streaming": false,
"conversationId": "string"
}
HTTP Status Codes
| Code | Meaning |
|---|---|
| 200 | Success |
| 400 | Prompt not live or bad request |
| 401 | Invalid or missing API key |
| 500 | Server error |
Embed Template
<iframe
src="https://promptowl.ai/chatPopup/{sessionId}/{promptId}"
width="480" height="860">
</iframe>