
API Keys and Model Configuration

This guide explains how to configure AI model providers, manage API keys, and customize model settings in PromptOwl.


Table of Contents

  1. Overview
  2. Supported AI Providers
  3. Adding API Keys
  4. Model Selection
  5. Model Parameters
  6. Per-Prompt vs Per-Block Settings
  7. Model Deprecation
  8. API Key Security
  9. Enterprise Controls
  10. Best Practices
  11. Troubleshooting

Overview

PromptOwl connects to multiple AI providers, allowing you to use different models for different prompts. You bring your own API keys for each provider you want to use.

How It Works

Add API key → Enable provider → Select model on prompt → Configure parameters → AI uses your key for requests

Key Concepts

Concept | Description
Provider | Company offering AI models (OpenAI, Anthropic, etc.)
Model | Specific AI model version (GPT-4, Claude Sonnet, etc.)
API Key | Your authentication credential for the provider
Parameters | Settings like temperature and max tokens

Supported AI Providers

PromptOwl supports five major AI providers:

OpenAI

Info | Details
Models | GPT-4, GPT-4o, GPT-3.5-turbo, o1, o3
Get API Key | platform.openai.com/api-keys
Billing | Pay-per-token usage

Anthropic (Claude)

Info | Details
Models | Claude Opus, Claude Sonnet, Claude Haiku
Get API Key | console.anthropic.com
Billing | Pay-per-token usage

Google (Gemini)

Info | Details
Models | Gemini Pro, Gemini Ultra
Get API Key | aistudio.google.com
Billing | Free tier available, then pay-per-use

Groq

Info | Details
Models | LLaMA, Mixtral (fast inference)
Get API Key | console.groq.com
Billing | Free tier available

Grok (xAI)

Info | Details
Models | Grok models
Get API Key | x.ai
Billing | Varies

Adding API Keys

Accessing API Key Settings

  1. Click Settings in the sidebar (or your profile)
  2. Navigate to API Keys section
  3. View all available providers

Adding a Key

  1. Find the provider section (e.g., OpenAI)
  2. Paste your API key in the input field
  3. Click Save
  4. System validates the key automatically
  5. Status shows “API key Found” if valid

Validation Process

When you save an API key:

  1. System makes a test call to the provider
  2. Validates the key is active and has permissions
  3. Shows success or error message
  4. Encrypts and stores valid keys securely
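
For reference, that validation call typically looks something like the following sketch, shown here with the OpenAI Python SDK (the function name and error handling are illustrative, not PromptOwl's actual implementation):

```python
# Illustrative sketch of validating an OpenAI key with a lightweight API call.
from openai import OpenAI, AuthenticationError

def validate_openai_key(api_key: str) -> bool:
    """Return True if the key authenticates successfully against the provider."""
    client = OpenAI(api_key=api_key)
    try:
        client.models.list()  # inexpensive call that requires a valid key
        return True
    except AuthenticationError:
        return False
```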

Status Indicators

Status | Meaning
API key Found (green) | Key is valid and saved
API key not Found (red) | No key or invalid key
Validating… | Currently checking key

Getting API Keys

OpenAI:

  1. Go to platform.openai.com 
  2. Sign in or create account
  3. Navigate to API Keys
  4. Click “Create new secret key”
  5. Copy the key immediately (only shown once)

Anthropic:

  1. Go to console.anthropic.com 
  2. Sign in or create account
  3. Go to Settings → API Keys
  4. Click “Create Key”
  5. Copy and save the key

Google Gemini:

  1. Go to aistudio.google.com 
  2. Sign in with Google account
  3. Click “Get API Key”
  4. Create or select a project
  5. Copy the generated key

Model Selection

Selecting a Model for a Prompt

  1. Open a prompt for editing
  2. Find the Model section
  3. Click the Provider dropdown
  4. Select your provider (only enabled providers show)
  5. Click the Model dropdown
  6. Select the specific model

Provider Dropdown

Shows only providers where:

  • Feature flag is enabled
  • You have added a valid API key

Disabled providers appear grayed out with a note to add an API key.

Model Dropdown

Shows all models for the selected provider:

  • Active models available for selection
  • Deprecated models shown with warning badge
  • Cannot select deprecated models

Model Information

Each model shows:

  • Model name and version
  • Deprecation status (if applicable)
  • Special notes (e.g., “Tools not supported”)

Model Parameters

Fine-tune model behavior with these parameters.

Available Parameters

Parameter | Range | Description
Temperature | 0-2 | Creativity/randomness of responses
Max Tokens | Varies | Maximum length of response
Top P | 0-1 | Nucleus sampling threshold
Frequency Penalty | 0-2 | Reduce word repetition
Presence Penalty | 0-2 | Encourage topic diversity
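
These parameters map directly onto the request body that providers accept. For illustration, here is a hedged sketch using the OpenAI Python SDK; PromptOwl sends the equivalent request on your behalf, and other providers use similar fields:

```python
# Example request showing where each parameter above ends up
# (OpenAI SDK shown; the values are illustrative, not recommendations).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this quarter's results."}],
    temperature=0.7,        # creativity/randomness (0-2)
    max_tokens=1024,        # cap on response length
    top_p=1.0,              # nucleus sampling threshold
    frequency_penalty=0.0,  # reduce word repetition
    presence_penalty=0.0,   # encourage topic diversity
)
print(response.choices[0].message.content)
```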

Accessing Parameters

  1. In prompt editor, find Model section
  2. Click LLM Settings to expand
  3. Adjust sliders or input values
  4. Changes save automatically

Temperature

Controls randomness in responses:

Value | Behavior
0 | Deterministic, focused responses
0.7 | Balanced creativity (default)
1.5-2 | Highly creative, varied responses

Use Cases:

  • Low (0-0.3): Factual Q&A, code generation
  • Medium (0.5-0.8): General conversation
  • High (1.0+): Creative writing, brainstorming

Max Tokens

Limits response length:

Model Type | Typical Max
GPT-4 | 8,192 - 128,000
Claude | 4,096 - 200,000
Gemini | 8,192 - 32,768

Note: Higher limits cost more tokens. Set appropriately for your use case.

Top P (Nucleus Sampling)

Alternative to temperature:

Value | Behavior
0.1 | Very focused, predictable
0.9 | More varied word choices
1.0 | Consider all possibilities

Tip: Use either temperature OR top_p, not both at extreme values.

Frequency Penalty (OpenAI)

Reduces repetition of words:

Value | Effect
0 | No penalty (may repeat)
1 | Moderate avoidance
2 | Strong avoidance

Presence Penalty (OpenAI)

Encourages new topics:

Value | Effect
0 | Stay on topic
1 | Introduce new concepts
2 | Actively explore new topics

Parameter Support by Provider

Parameter | OpenAI | Claude | Gemini | Groq | Grok
Temperature | Yes | Partial* | Yes | Yes | Yes
Max Tokens | Yes | Yes | Yes | Yes | Yes
Top P | Yes | Partial* | Yes | Yes | Yes
Frequency Penalty | Yes | No | No | No | No
Presence Penalty | Yes | No | No | No | No

*Claude Opus 4.1 and Sonnet 4.5 don’t support temperature/top_p


Per-Prompt vs Per-Block Settings

Prompt-Level Settings

The default model and parameters for the entire prompt:

  • Set in the main Model section
  • Applies to simple prompts
  • Acts as default for sequential prompts

Block-Level Override

In Sequential and Supervisor prompts, each block can override:

  1. Open a block for editing
  2. Find Use Page Settings toggle
  3. Disable to show block-specific settings
  4. Select different model/parameters
  5. Block uses its own settings

When to Use Block Overrides

Scenario | Recommendation
Simple analysis step | Use smaller, faster model
Creative writing block | Use higher temperature
Code generation block | Use low temperature, capable model
Summary block | Use cost-effective model

Example: Mixed Model Workflow

Block 1: Research (GPT-4o + web search tool)
Block 2: Analysis (Claude 3.5 Sonnet - reasoning)
Block 3: Summary (GPT-4o-mini - cost-effective)
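
To make the override mechanics concrete, here is a hypothetical sketch of how per-block settings could resolve against the prompt-level defaults (illustrative only; this is not PromptOwl's internal schema, and the model names are placeholders):

```python
# Hypothetical illustration of block-level overrides resolving against prompt defaults.
prompt_defaults = {"provider": "openai", "model": "gpt-4o", "temperature": 0.7}

blocks = [
    {"name": "Research", "use_page_settings": True},  # inherits the prompt defaults
    {"name": "Analysis", "use_page_settings": False,
     "settings": {"provider": "anthropic", "model": "claude-3-5-sonnet", "temperature": 0.3}},
    {"name": "Summary", "use_page_settings": False,
     "settings": {"provider": "openai", "model": "gpt-4o-mini", "temperature": 0.5}},
]

def effective_settings(block: dict) -> dict:
    """A block with 'Use Page Settings' enabled falls back to the prompt-level defaults."""
    return prompt_defaults if block.get("use_page_settings", True) else block["settings"]

for block in blocks:
    print(block["name"], "->", effective_settings(block))
```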

Model Deprecation

AI providers regularly deprecate older models.

How Deprecation Works

  1. Provider announces model deprecation
  2. PromptOwl marks model as deprecated
  3. Deprecated models show warning badge
  4. Cannot select deprecated models for new prompts
  5. Existing prompts with deprecated models show alerts

Deprecation Indicators

Indicator | Location
Red “Deprecated” badge | Model dropdown
Warning icon | Prompt card
Alert message | Prompt editor

Handling Deprecated Models

If your prompt uses a deprecated model:

  1. Open the prompt for editing
  2. You’ll see a deprecation warning
  3. Select a new model from the dropdown
  4. Save the prompt

Preventing Issues

  • Regularly review your prompts
  • Update models when new versions release
  • Test prompts after model changes

API Key Security

Your API keys are protected with multiple security measures.

Encryption

Measure | Description
AES Encryption | Keys encrypted before storage
Secret Key | Server-side encryption key
Hidden Display | Keys never shown after saving

Storage

  • Keys stored in encrypted format in database
  • Only decrypted at runtime when needed
  • Each user’s keys isolated
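
As a rough illustration of this pattern (not PromptOwl's actual code), an encrypt-before-store / decrypt-at-runtime flow might look like the following, using Python's cryptography library, whose Fernet recipe is AES-based:

```python
# Illustrative encrypt-at-rest pattern for API keys.
from cryptography.fernet import Fernet

SECRET_KEY = Fernet.generate_key()  # in practice, a server-side secret loaded from secure config
cipher = Fernet(SECRET_KEY)

def store_api_key(plaintext_key: str) -> bytes:
    """Encrypt the key before it is written to the database."""
    return cipher.encrypt(plaintext_key.encode())

def load_api_key(encrypted_key: bytes) -> str:
    """Decrypt only at runtime, when a request actually needs the key."""
    return cipher.decrypt(encrypted_key).decode()
```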

Access Control

  • Only you can see/modify your keys
  • Keys not shared with team members
  • Each user adds their own keys

Best Practices

  1. Never share your API keys
  2. Rotate keys periodically
  3. Use separate keys for different environments
  4. Monitor usage in provider dashboards
  5. Set spending limits with providers

Enterprise Controls

Administrators can control model availability.

Feature Flags

Enterprise settings can enable/disable providers:

Setting | Effect
showModelSwitcher | Show/hide model selection
showModelSwitcherInChat | Allow model switching in chat
showModelInResponse | Display model name in responses
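
For illustration, such flags might appear in an enterprise configuration along these lines (only the flag names come from the table above; the surrounding structure is hypothetical):

```python
# Hypothetical enterprise settings sketch; adjust to your organization's policy.
enterprise_settings = {
    "showModelSwitcher": True,         # show/hide model selection in the prompt editor
    "showModelSwitcherInChat": False,  # allow model switching in chat
    "showModelInResponse": True,       # display the model name in responses
}
```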

Provider Visibility

Administrators can control which providers are available:

  • Enable/disable specific providers
  • Set default models for organization
  • Configure recommended settings

Default Models

Set organization defaults:

  • Default model for new prompts
  • Default concierge model
  • Fallback models

Best Practices

Choosing Models

Use Case | Recommended Model
Complex reasoning | GPT-4, Claude Opus
Fast responses | GPT-4o-mini, Claude Haiku, Groq
Code generation | GPT-4o, Claude Sonnet
Creative writing | GPT-4o (high temp), Claude
Real-time info | Grok-2
Cost-sensitive | GPT-4o-mini, Claude Haiku

Parameter Tuning

For Factual/Technical:

  • Temperature: 0-0.3
  • Top P: 0.9
  • Max tokens: As needed

For Creative:

  • Temperature: 0.8-1.2
  • Top P: 1.0
  • Max tokens: Higher limit

For Conversational:

  • Temperature: 0.5-0.7
  • Balanced penalties
  • Moderate token limit

Cost Management

  1. Use appropriate models - Don’t use GPT-4 for simple tasks
  2. Set token limits - Prevent unexpectedly long responses
  3. Monitor usage - Check provider dashboards regularly
  4. Test efficiently - Use smaller models during development
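
One way to anticipate spend is to count tokens before sending a prompt. A small sketch, assuming the tiktoken library is available (the price per 1K tokens is a placeholder; check your provider's current pricing):

```python
# Rough input-cost estimate; the price below is a placeholder, not real pricing.
import tiktoken

def estimate_input_cost(text: str, usd_per_1k_tokens: float = 0.005) -> float:
    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-family encoding
    n_tokens = len(enc.encode(text))
    return n_tokens / 1000 * usd_per_1k_tokens

print(f"${estimate_input_cost('Summarize the attached report in three bullet points.'):.6f}")
```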

Multi-Provider Strategy

  1. Primary provider for main workloads
  2. Backup provider for redundancy
  3. Specialized models for specific tasks
  4. Cost-effective options for high-volume tasks
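
A minimal sketch of the primary/backup idea (not a built-in PromptOwl feature; call_openai and call_anthropic are hypothetical helpers you would write around each provider's SDK):

```python
# Hypothetical fallback across providers: try the primary, then the backup.
def generate(prompt: str) -> str:
    providers = [call_openai, call_anthropic]  # primary first, backup second (assumed helpers)
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # e.g. rate limit or outage on the primary
            last_error = exc
    raise RuntimeError("All providers failed") from last_error
```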

Troubleshooting

API Key Not Working

  1. Verify key is correct - Copy again from provider
  2. Check key permissions - Some keys have restrictions
  3. Verify billing - Ensure account has credits
  4. Check key format - Keys start with specific prefixes:
    • OpenAI: sk-...
    • Anthropic: sk-ant-...
    • Google: Various formats
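
A quick sanity check on key format before saving, based on the prefixes above (a hedged sketch; Google keys vary, so they are not checked here):

```python
# Simple prefix check for the providers whose key format is predictable.
def looks_like_valid_key(provider: str, key: str) -> bool:
    prefixes = {"openai": "sk-", "anthropic": "sk-ant-"}
    expected = prefixes.get(provider.lower())
    return expected is None or key.startswith(expected)
```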

Model Not Available

  1. Check API key - Provider may not be enabled
  2. Check feature flags - Enterprise may restrict providers
  3. Check deprecation - Model may be deprecated
  4. Refresh page - Model list may need updating

Responses Too Short/Long

  1. Adjust max tokens - Increase for longer responses
  2. Check prompt - May be requesting brevity
  3. Model limits - Each model has maximum

Unexpected Responses

  1. Check temperature - Lower for more predictable
  2. Review prompt - May need clearer instructions
  3. Try different model - Some models better for certain tasks
  4. Check penalties - May be affecting output

High Costs

  1. Review token usage - Check provider dashboard
  2. Lower max tokens - Prevent over-generation
  3. Use efficient models - GPT-3.5 vs GPT-4
  4. Optimize prompts - Shorter prompts cost less

Provider-Specific Issues

OpenAI:

  • Rate limits: Wait and retry
  • Context length: Use model with larger context

Anthropic:

  • Rate limits: Implement backoff (see the sketch at the end of this section)
  • No streaming: Check model compatibility

Google:

  • Quota limits: Request increase
  • Regional restrictions: Check availability
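
For the rate-limit items above, the usual remedy is exponential backoff with jitter. A generic sketch (not provider-specific):

```python
# Retry a provider call with exponential backoff plus jitter.
import random
import time

def with_backoff(call, max_retries: int = 5):
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:  # in practice, catch the provider's rate-limit error specifically
            time.sleep((2 ** attempt) + random.random())
    return call()  # final attempt; let any remaining error propagate
```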

Quick Reference

API Key Locations

Provider | Where to Get Key
OpenAI | platform.openai.com/api-keys
Anthropic | console.anthropic.com
Google | aistudio.google.com
Groq | console.groq.com
xAI | x.ai

Default Parameter Values

Parameter | Default
Temperature | 1.0
Max Tokens | 4096
Top P | 1.0
Frequency Penalty | 0
Presence Penalty | 0

Model Selection Checklist

  • API key added for desired provider
  • Provider feature enabled
  • Model not deprecated
  • Parameters appropriate for use case
  • Token limits set correctly
