API Keys and Model Configuration
This guide explains how to configure AI model providers, manage API keys, and customize model settings in PromptOwl.
Table of Contents
- Overview
- Supported AI Providers
- Adding API Keys
- Model Selection
- Model Parameters
- Per-Prompt vs Per-Block Settings
- Model Deprecation
- API Key Security
- Enterprise Controls
- Best Practices
- Troubleshooting
Overview
PromptOwl connects to multiple AI providers, allowing you to use different models for different prompts. You bring your own API keys for each provider you want to use.
How It Works
Add API key → Enable provider → Select model on prompt →
Configure parameters → AI uses your key for requests
Key Concepts
| Concept | Description |
|---|---|
| Provider | Company offering AI models (OpenAI, Anthropic, etc.) |
| Model | Specific AI model version (GPT-4, Claude Sonnet, etc.) |
| API Key | Your authentication credential for the provider |
| Parameters | Settings like temperature and max tokens |
Supported AI Providers
PromptOwl supports five major AI providers:
OpenAI
| Info | Details |
|---|---|
| Models | GPT-4, GPT-4o, GPT-3.5-turbo, O1, O3 |
| Get API Key | platform.openai.com/api-keys |
| Billing | Pay-per-token usage |
Anthropic (Claude)
| Info | Details |
|---|---|
| Models | Claude Opus, Claude Sonnet, Claude Haiku |
| Get API Key | console.anthropic.com |
| Billing | Pay-per-token usage |
Google (Gemini)
| Info | Details |
|---|---|
| Models | Gemini Pro, Gemini Ultra |
| Get API Key | aistudio.google.com |
| Billing | Free tier available, then pay-per-use |
Groq
| Info | Details |
|---|---|
| Models | LLaMA, Mixtral (fast inference) |
| Get API Key | console.groq.com |
| Billing | Free tier available |
Grok (xAI)
| Info | Details |
|---|---|
| Models | Grok models |
| Get API Key | x.ai |
| Billing | Varies |
Adding API Keys
Accessing API Key Settings
- Click Settings in the sidebar (or your profile)
- Navigate to API Keys section
- View all available providers
Adding a Key
- Find the provider section (e.g., OpenAI)
- Paste your API key in the input field
- Click Save
- System validates the key automatically
- Status shows “API key Found” if valid
Validation Process
When you save an API key:
- System makes a test call to the provider
- Validates the key is active and has permissions
- Shows success or error message
- Encrypts and stores valid keys securely
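The validation flow above can be sketched as a lightweight, read-only test call. This is an illustrative sketch, not PromptOwl's actual implementation: the endpoints shown are the providers' public model-listing routes, and the status strings mirror the UI indicators below.

```python
import urllib.request
import urllib.error

def build_validation_request(provider: str, api_key: str) -> urllib.request.Request:
    """Build a cheap, read-only test call that confirms a key is active."""
    endpoints = {
        "openai": "https://api.openai.com/v1/models",
        "anthropic": "https://api.anthropic.com/v1/models",
    }
    headers = {"Authorization": f"Bearer {api_key}"}
    if provider == "anthropic":
        # Anthropic authenticates with its own header scheme, not a Bearer token.
        headers = {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    return urllib.request.Request(endpoints[provider], headers=headers)

def validate_key(provider: str, api_key: str) -> str:
    """Return a status string matching the UI indicators."""
    req = build_validation_request(provider, api_key)
    try:
        with urllib.request.urlopen(req, timeout=10):
            return "API key Found"
    except urllib.error.HTTPError:
        return "API key not Found"
```

A 401 response from the test call is what turns into the red "API key not Found" indicator.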
Status Indicators
| Status | Meaning |
|---|---|
| API key Found (green) | Key is valid and saved |
| API key not Found (red) | No key or invalid key |
| Validating… | Currently checking key |
Getting API Keys
OpenAI:
- Go to platform.openai.com
- Sign in or create account
- Navigate to API Keys
- Click “Create new secret key”
- Copy the key immediately (only shown once)
Anthropic:
- Go to console.anthropic.com
- Sign in or create account
- Go to Settings → API Keys
- Click “Create Key”
- Copy and save the key
Google Gemini:
- Go to aistudio.google.com
- Sign in with Google account
- Click “Get API Key”
- Create or select a project
- Copy the generated key
Model Selection
Selecting a Model for a Prompt
- Open a prompt for editing
- Find the Model section
- Click the Provider dropdown
- Select your provider (only enabled providers show)
- Click the Model dropdown
- Select the specific model
Provider Dropdown
Shows only providers where:
- Feature flag is enabled
- You have added a valid API key
Disabled providers appear grayed out with a note to add API key.
Model Dropdown
Shows all models for the selected provider:
- Active models available for selection
- Deprecated models shown with warning badge
- Cannot select deprecated models
Model Information
Each model shows:
- Model name and version
- Deprecation status (if applicable)
- Special notes (e.g., “Tools not supported”)
Model Parameters
Fine-tune model behavior with these parameters.
Available Parameters
| Parameter | Range | Description |
|---|---|---|
| Temperature | 0-2 | Creativity/randomness of responses |
| Max Tokens | Varies | Maximum length of response |
| Top P | 0-1 | Nucleus sampling threshold |
| Frequency Penalty | 0-2 | Reduce word repetition |
| Presence Penalty | 0-2 | Encourage topic diversity |
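These parameters map directly onto fields of the provider's chat request body. A minimal sketch, assuming OpenAI-style field names and the ranges from the table above (`build_request_body` is a hypothetical helper, not a PromptOwl API):

```python
def build_request_body(model, prompt, *, temperature=1.0, max_tokens=4096,
                       top_p=1.0, frequency_penalty=0.0, presence_penalty=0.0):
    """Assemble an OpenAI-style chat request from the tunable parameters."""
    # Reject values outside the ranges listed in the parameter table.
    for name, value, lo, hi in [
        ("temperature", temperature, 0, 2),
        ("top_p", top_p, 0, 1),
        ("frequency_penalty", frequency_penalty, 0, 2),
        ("presence_penalty", presence_penalty, 0, 2),
    ]:
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside allowed range {lo}-{hi}")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }
```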
Accessing Parameters
- In prompt editor, find Model section
- Click LLM Settings to expand
- Adjust sliders or input values
- Changes save automatically
Temperature
Controls randomness in responses:
| Value | Behavior |
|---|---|
| 0 | Deterministic, focused responses |
| 0.7 | Balanced creativity (default) |
| 1.5-2 | Highly creative, varied responses |
Use Cases:
- Low (0-0.3): Factual Q&A, code generation
- Medium (0.5-0.8): General conversation
- High (1.0+): Creative writing, brainstorming
Max Tokens
Limits response length:
| Model Type | Typical Max |
|---|---|
| GPT-4 | 8,192 - 128,000 |
| Claude | 4,096 - 200,000 |
| Gemini | 8,192 - 32,768 |
Note: Higher limits cost more tokens. Set appropriately for your use case.
Top P (Nucleus Sampling)
Alternative to temperature:
| Value | Behavior |
|---|---|
| 0.1 | Very focused, predictable |
| 0.9 | More varied word choices |
| 1.0 | Consider all possibilities |
Tip: Use either temperature OR top_p, not both at extreme values.
Frequency Penalty (OpenAI)
Reduces repetition of words:
| Value | Effect |
|---|---|
| 0 | No penalty (may repeat) |
| 1 | Moderate avoidance |
| 2 | Strong avoidance |
Presence Penalty (OpenAI)
Encourages new topics:
| Value | Effect |
|---|---|
| 0 | Stay on topic |
| 1 | Introduce new concepts |
| 2 | Actively explore new topics |
Parameter Support by Provider
| Parameter | OpenAI | Claude | Gemini | Groq | Grok |
|---|---|---|---|---|---|
| Temperature | Yes | Partial* | Yes | Yes | Yes |
| Max Tokens | Yes | Yes | Yes | Yes | Yes |
| Top P | Yes | Partial* | Yes | Yes | Yes |
| Frequency Penalty | Yes | No | No | No | No |
| Presence Penalty | Yes | No | No | No | No |
*Claude Opus 4.1 and Sonnet 4.5 don’t support temperature/top_p
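Because support varies, a request built with OpenAI-style parameters must be trimmed before it is sent elsewhere. A sketch of that filtering, with the allow-lists taken from the support table above:

```python
# Parameters each provider accepts, per the support table above.
SUPPORTED = {
    "openai": {"temperature", "max_tokens", "top_p",
               "frequency_penalty", "presence_penalty"},
    "anthropic": {"temperature", "max_tokens", "top_p"},
    "gemini": {"temperature", "max_tokens", "top_p"},
    "groq": {"temperature", "max_tokens", "top_p"},
    "grok": {"temperature", "max_tokens", "top_p"},
}

def filter_params(provider: str, params: dict) -> dict:
    """Drop parameters the target provider would reject."""
    allowed = SUPPORTED[provider]
    return {k: v for k, v in params.items() if k in allowed}
```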
Per-Prompt vs Per-Block Settings
Prompt-Level Settings
The default model and parameters for the entire prompt:
- Set in the main Model section
- Applies to simple prompts
- Acts as default for sequential prompts
Block-Level Override
In Sequential and Supervisor prompts, each block can override:
- Open a block for editing
- Find Use Page Settings toggle
- Disable to show block-specific settings
- Select different model/parameters
- Block uses its own settings
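The override logic above amounts to a simple merge: block values win, and anything the block leaves unset falls back to the prompt-level defaults. A sketch (hypothetical helper, not PromptOwl's internals):

```python
from typing import Optional

def effective_settings(prompt_settings: dict,
                       block_settings: Optional[dict],
                       use_page_settings: bool) -> dict:
    """Resolve which model settings a block actually runs with."""
    if use_page_settings or not block_settings:
        return dict(prompt_settings)
    # Block values win; unset keys fall back to the prompt-level defaults.
    return {**prompt_settings, **block_settings}
```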
When to Use Block Overrides
| Scenario | Recommendation |
|---|---|
| Simple analysis step | Use smaller, faster model |
| Creative writing block | Use higher temperature |
| Code generation block | Use low temperature, capable model |
| Summary block | Use cost-effective model |

Example: Mixed Model Workflow
Block 1: Research (GPT-4o + web search tool)
↓
Block 2: Analysis (Claude 3.5 Sonnet - reasoning)
↓
Block 3: Summary (GPT-4o-mini - cost-effective)
Model Deprecation
AI providers regularly deprecate older models.
How Deprecation Works
- Provider announces model deprecation
- PromptOwl marks model as deprecated
- Deprecated models show warning badge
- Cannot select deprecated models for new prompts
- Existing prompts with deprecated models show alerts
Deprecation Indicators
| Indicator | Location |
|---|---|
| Red “Deprecated” badge | Model dropdown |
| Warning icon | Prompt card |
| Alert message | Prompt editor |
Handling Deprecated Models
If your prompt uses a deprecated model:
- Open the prompt for editing
- You’ll see a deprecation warning
- Select a new model from the dropdown
- Save the prompt
Preventing Issues
- Regularly review your prompts
- Update models when new versions release
- Test prompts after model changes
API Key Security
Your API keys are protected with multiple security measures.
Encryption
| Measure | Description |
|---|---|
| AES Encryption | Keys encrypted before storage |
| Secret Key | Server-side encryption key |
| Hidden Display | Keys never shown after saving |
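The encrypt-before-store, decrypt-at-runtime pattern above can be sketched with the `cryptography` library's Fernet recipe (AES with HMAC). This is an illustration of the pattern, not PromptOwl's actual code; in production the secret would be a fixed server-side key, not generated per run.

```python
from cryptography.fernet import Fernet  # pip install cryptography

SECRET = Fernet.generate_key()  # in production: a fixed server-side secret
fernet = Fernet(SECRET)

def store_key(api_key: str) -> bytes:
    """Encrypt a key before it ever touches the database."""
    return fernet.encrypt(api_key.encode())

def load_key(ciphertext: bytes) -> str:
    """Decrypt only at runtime, immediately before the provider call."""
    return fernet.decrypt(ciphertext).decode()

def masked(api_key: str) -> str:
    """What the UI shows after saving: never the full key."""
    return api_key[:6] + "…" if len(api_key) > 6 else "•••"
```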
Storage
- Keys stored in encrypted format in database
- Only decrypted at runtime when needed
- Each user’s keys isolated
Access Control
- Only you can see/modify your keys
- Keys not shared with team members
- Each user adds their own keys
Best Practices
- Never share your API keys
- Rotate keys periodically
- Use separate keys for different environments
- Monitor usage in provider dashboards
- Set spending limits with providers
Enterprise Controls
Administrators can control model availability.
Feature Flags
Enterprise settings can enable/disable providers:
| Setting | Effect |
|---|---|
| showModelSwitcher | Show/hide model selection |
| showModelSwitcherInChat | Allow model switching in chat |
| showModelInResponse | Display model name in responses |
Provider Visibility
Administrators can control which providers are available:
- Enable/disable specific providers
- Set default models for organization
- Configure recommended settings
Default Models
Set organization defaults:
- Default model for new prompts
- Default concierge model
- Fallback models
Best Practices
Choosing Models
| Use Case | Recommended Model |
|---|---|
| Complex reasoning | GPT-4, Claude Opus |
| Fast responses | GPT-4o-mini, Claude Haiku, Groq |
| Code generation | GPT-4o, Claude Sonnet |
| Creative writing | GPT-4o (high temp), Claude |
| Real-time info | Grok-2 |
| Cost-sensitive | GPT-4o-mini, Claude Haiku |
Parameter Tuning
For Factual/Technical:
- Temperature: 0-0.3
- Top P: 0.9
- Max tokens: As needed
For Creative:
- Temperature: 0.8-1.2
- Top P: 1.0
- Max tokens: Higher limit
For Conversational:
- Temperature: 0.5-0.7
- Balanced penalties
- Moderate token limit
Cost Management
- Use appropriate models - Don’t use GPT-4 for simple tasks
- Set token limits - Prevent unexpectedly long responses
- Monitor usage - Check provider dashboards regularly
- Test efficiently - Use smaller models during development
Multi-Provider Strategy
- Primary provider for main workloads
- Backup provider for redundancy
- Specialized models for specific tasks
- Cost-effective options for high-volume tasks
Troubleshooting
API Key Not Working
- Verify key is correct - Copy again from provider
- Check key permissions - Some keys have restrictions
- Verify billing - Ensure account has credits
- Check key format - Keys start with specific prefixes:
  - OpenAI: sk-...
  - Anthropic: sk-ant-...
  - Google: Various formats
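The prefix check above can catch obvious paste errors before any network call. A sketch, assuming the prefixes listed are current (providers may change key formats):

```python
# Known prefixes for quick client-side sanity checks.
KEY_PREFIXES = {"openai": "sk-", "anthropic": "sk-ant-"}

def plausible_key(provider: str, key: str) -> bool:
    """Catch obvious paste errors before making a network call."""
    prefix = KEY_PREFIXES.get(provider)
    if prefix is None:  # e.g. Google keys come in various formats
        return bool(key.strip())
    return key.startswith(prefix)
```

This is only a sanity check; the real test is the validation call made when you save the key.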
Model Not Available
- Check API key - Provider may not be enabled
- Check feature flags - Enterprise may restrict providers
- Check deprecation - Model may be deprecated
- Refresh page - Model list may need updating
Responses Too Short/Long
- Adjust max tokens - Increase for longer responses
- Check prompt - May be requesting brevity
- Model limits - Each model has maximum
Unexpected Responses
- Check temperature - Lower for more predictable
- Review prompt - May need clearer instructions
- Try different model - Some models better for certain tasks
- Check penalties - May be affecting output
High Costs
- Review token usage - Check provider dashboard
- Lower max tokens - Prevent over-generation
- Use efficient models - GPT-3.5 vs GPT-4
- Optimize prompts - Shorter prompts cost less
Provider-Specific Issues
OpenAI:
- Rate limits: Wait and retry
- Context length: Use model with larger context
Anthropic:
- Rate limits: Implement backoff
- No streaming: Check model compatibility
Google:
- Quota limits: Request increase
- Regional restrictions: Check availability
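The "wait and retry" and "implement backoff" advice above is usually done with jittered exponential delays. A generic sketch (the `call` callable and its `(status, body)` return shape are assumptions for illustration):

```python
import time
import random

def with_backoff(call, max_retries=5, base_delay=1.0,
                 retryable=(429,), sleep=time.sleep):
    """Retry a provider call on rate-limit errors with exponential backoff."""
    for attempt in range(max_retries):
        status, body = call()
        if status not in retryable:
            return status, body
        # Jittered exponential delay: 1s, 2s, 4s, ... plus up to 1s of noise.
        sleep(base_delay * (2 ** attempt) + random.random())
    return status, body  # give up after the last retry
```

Injecting `sleep` keeps the helper testable; in real use the default `time.sleep` applies.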
Quick Reference
API Key Locations
| Provider | Where to Get Key |
|---|---|
| OpenAI | platform.openai.com/api-keys |
| Anthropic | console.anthropic.com |
| Google | aistudio.google.com |
| Groq | console.groq.com |
| xAI | x.ai |
Default Parameter Values
| Parameter | Default |
|---|---|
| Temperature | 1.0 |
| Max Tokens | 4096 |
| Top P | 1.0 |
| Frequency Penalty | 0 |
| Presence Penalty | 0 |
Model Selection Checklist
- API key added for desired provider
- Provider feature enabled
- Model not deprecated
- Parameters appropriate for use case
- Token limits set correctly