
🔵 OpenAI Models

OpenAI is the industry leader in AI language models, offering some of the most widely used and best-documented AI APIs. Its models excel at general-purpose tasks, coding, reasoning, and creative writing.

🏆 Why Choose OpenAI?

[Diagram: OpenAI Advantages]

📊 Available Models

| Model | Context Window | Best For | Input Cost | Output Cost |
| --- | --- | --- | --- | --- |
| GPT-4 Turbo | 128K tokens | Complex reasoning, coding | $0.01 / 1K tokens | $0.03 / 1K tokens |
| GPT-4 | 8K tokens | Advanced tasks | $0.03 / 1K tokens | $0.06 / 1K tokens |
| GPT-3.5 Turbo | 4K tokens | General purpose, cost-effective | $0.0015 / 1K tokens | $0.002 / 1K tokens |
| GPT-3.5 Turbo 16K | 16K tokens | Longer conversations | $0.003 / 1K tokens | $0.004 / 1K tokens |

🚀 Getting Started

Step 1: Create OpenAI Account

  1. Visit the OpenAI Platform (platform.openai.com)
  2. Sign up with your email or Google/Microsoft account
  3. Verify your email address
  4. Complete identity verification (required for API access)

Step 2: Add Payment Method

[Diagram: OpenAI Payment Setup]

Important: OpenAI requires a valid payment method to use the API, even for free tier usage.

Step 3: Generate API Key

  1. Navigate to API Keys
  2. Click "Create new secret key"
  3. Give it a descriptive name (e.g., "MCP for WP Production")
  4. Copy the key immediately (you won't see it again)
  5. Store it securely (use a password manager)

Step 4: Configure in MCP for WP

  1. Go to MCP for WP > Settings
  2. Find the "OpenAI API Key" field
  3. Paste your API key
  4. Click "Test Connection" to verify (an equivalent manual check is sketched after this list)
  5. Save settings
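
If the "Test Connection" button reports a problem, you can verify the key outside the plugin. The sketch below is a minimal manual check against OpenAI's models endpoint using Node.js 18+; it assumes the key is exported as the OPENAI_API_KEY environment variable and is not part of MCP for WP itself.

javascript
// A valid key returns HTTP 200 from the models endpoint
const response = await fetch("https://api.openai.com/v1/models", {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});
console.log(response.ok ? "API key works" : `Key rejected: HTTP ${response.status}`);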

⚙️ Model Configuration

Default Settings

json
{
  "model": "gpt-3.5-turbo",
  "max_tokens": 1000,
  "temperature": 0.7,
  "top_p": 1.0,
  "frequency_penalty": 0.0,
  "presence_penalty": 0.0
}
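
For reference, the sketch below shows how these defaults map onto a Chat Completions request using OpenAI's official Node.js SDK. It illustrates the underlying API call only, not the code MCP for WP runs internally, and the example prompt is made up.

javascript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The default settings above, passed straight to the Chat Completions API
const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  max_tokens: 1000,
  temperature: 0.7,
  top_p: 1.0,
  frequency_penalty: 0.0,
  presence_penalty: 0.0,
  messages: [{ role: "user", content: "Summarize this post in two sentences." }],
});

console.log(completion.choices[0].message.content);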

Parameter Guide

Model Selection

  • gpt-4-turbo: Best for complex reasoning, coding, analysis
  • gpt-4: High-quality responses, smaller context
  • gpt-3.5-turbo: Cost-effective, good for most tasks
  • gpt-3.5-turbo-16k: Longer conversations, same quality

Max Tokens

  • Range: 1 up to the model's output limit (4,096 tokens for GPT-3.5 Turbo and GPT-4 Turbo; GPT-4 is bounded by its 8K context window)
  • Recommendation: Start with 1000, adjust based on needs
  • Cost Impact: A higher limit allows longer, and therefore more expensive, responses

Temperature

  • Range: 0.0 to 2.0
  • 0.0: Deterministic, consistent responses
  • 0.7: Balanced creativity and consistency
  • 1.0+: More creative, varied responses

Top P (Nucleus Sampling)

  • Range: 0.0 to 1.0
  • 1.0: Consider the full probability distribution (no truncation)
  • 0.9: Consider top 90% of probability mass
  • Use with temperature for fine-tuned control

Frequency Penalty

  • Range: -2.0 to 2.0
  • Positive values: Reduce repetition
  • Negative values: Allow more repetition
  • 0.0: No penalty

Presence Penalty

  • Range: -2.0 to 2.0
  • Positive values: Encourage new topics
  • Negative values: Stay on current topic
  • 0.0: No penalty

💰 Pricing & Usage

Cost Calculator

[Diagram: OpenAI Cost Calculation]
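
The arithmetic behind the calculation is straightforward: cost = (input tokens ÷ 1,000) × input price + (output tokens ÷ 1,000) × output price. A rough estimator, using the GPT-3.5 Turbo prices from the table above:

javascript
// Estimate the cost of one request (prices are USD per 1K tokens)
function estimateCost(inputTokens, outputTokens, inputPrice = 0.0015, outputPrice = 0.002) {
  return (inputTokens / 1000) * inputPrice + (outputTokens / 1000) * outputPrice;
}

// Example: a 500-token prompt that produces a 1,000-token reply
console.log(estimateCost(500, 1000)); // ≈ 0.00275 USD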

Usage Limits

| Plan | Rate Limit | Monthly Limit |
| --- | --- | --- |
| Free Tier | 3 requests/minute | $5 credit/month |
| Pay-as-you-go | 3,500 requests/minute | No limit |
| Enterprise | Custom | Custom |

Cost Optimization Tips

  1. Use GPT-3.5 Turbo for most tasks
  2. Set appropriate max_tokens to avoid over-generation
  3. Use lower temperature for factual tasks
  4. Monitor usage in OpenAI dashboard
  5. Implement caching for repeated requests
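
Tip 5 can be as simple as keying responses by the exact request payload, as in the sketch below. This is an in-memory illustration only; a real WordPress deployment would more likely use the object cache or transients, and `callOpenAI` is a stand-in for whatever function performs the actual request.

javascript
// Hypothetical helper: serve repeated, identical requests from a cache
const cache = new Map();

async function cachedCompletion(params, callOpenAI) {
  const key = JSON.stringify(params);         // identical params => identical cache key
  if (cache.has(key)) return cache.get(key);  // cache hit: no API call, no cost
  const result = await callOpenAI(params);    // cache miss: pay for the request once
  cache.set(key, result);
  return result;
}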

🔧 Advanced Configuration

Custom System Messages

json
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful coding assistant specialized in WordPress development."
    },
    {
      "role": "user",
      "content": "How do I create a custom post type?"
    }
  ]
}

Function Calling

json
{
  "model": "gpt-4-turbo",
  "messages": [
    {
      "role": "user",
      "content": "What's the weather in New York?"
    }
  ],
  "functions": [
    {
      "name": "get_weather",
      "description": "Get current weather for a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "City name"
          }
        },
        "required": ["location"]
      }
    }
  ]
}
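
This request uses the legacy functions/function_call form of the API. When the model decides to call the function, the response carries the function name and JSON-encoded arguments instead of plain text; your code runs the function and sends the result back in a follow-up request. A minimal sketch of that round trip with the Node.js SDK, where `openai` is an initialized client, `request` is the payload shown above, and `getWeather` is a hypothetical local implementation:

javascript
const first = await openai.chat.completions.create(request);
const call = first.choices[0].message.function_call;

if (call && call.name === "get_weather") {
  const args = JSON.parse(call.arguments);          // e.g. { "location": "New York" }
  const weather = await getWeather(args.location);  // your own weather lookup

  // Return the function result so the model can answer in natural language
  const followUp = await openai.chat.completions.create({
    model: "gpt-4-turbo",
    messages: [
      { role: "user", content: "What's the weather in New York?" },
      first.choices[0].message,                     // the assistant's function_call turn
      { role: "function", name: "get_weather", content: JSON.stringify(weather) },
    ],
  });
  console.log(followUp.choices[0].message.content);
}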

Streaming Responses

javascript
// For real-time responses (Node.js, official openai SDK)
import OpenAI from "openai";
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const stream = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Tell me a story" }],
  stream: true,
});
// Print each chunk of the reply as it arrives
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}

🛠️ Use Cases & Examples

Content Generation

Tool Configuration:

json
{
  "input_schema": {
    "type": "object",
    "properties": {
      "topic": {
        "type": "string",
        "description": "Blog post topic"
      },
      "tone": {
        "type": "string",
        "enum": ["professional", "casual", "technical"],
        "description": "Writing tone"
      },
      "length": {
        "type": "string",
        "enum": ["short", "medium", "long"],
        "description": "Content length"
      }
    },
    "required": ["topic"]
  }
}

Recommended Settings:

  • Model: gpt-4-turbo
  • Temperature: 0.7
  • Max Tokens: 2000
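
Under the hood, a tool like this amounts to turning the validated inputs into a prompt and calling the API with the recommended settings. The sketch below shows one plausible shape; it is not MCP for WP's actual implementation, and the prompt wording, system message, and `openai` client are assumptions.

javascript
// Hypothetical handler: validated tool inputs -> Chat Completions request
async function generateContent({ topic, tone = "professional", length = "medium" }) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-turbo",
    temperature: 0.7,
    max_tokens: 2000,
    messages: [
      { role: "system", content: "You are a content writer for a WordPress site." },
      { role: "user", content: `Write a ${length}, ${tone} blog post about: ${topic}` },
    ],
  });
  return completion.choices[0].message.content;
}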

Code Generation

Tool Configuration:

json
{
  "input_schema": {
    "type": "object",
    "properties": {
      "language": {
        "type": "string",
        "description": "Programming language"
      },
      "task": {
        "type": "string",
        "description": "What the code should do"
      },
      "framework": {
        "type": "string",
        "description": "Framework (optional)"
      }
    },
    "required": ["language", "task"]
  }
}

Recommended Settings:

  • Model: gpt-4-turbo
  • Temperature: 0.1
  • Max Tokens: 1500

Data Analysis

Tool Configuration:

json
{
  "input_schema": {
    "type": "object",
    "properties": {
      "data": {
        "type": "string",
        "description": "Data to analyze"
      },
      "analysis_type": {
        "type": "string",
        "enum": ["summary", "trends", "insights"],
        "description": "Type of analysis"
      }
    },
    "required": ["data", "analysis_type"]
  }
}

Recommended Settings:

  • Model: gpt-4-turbo
  • Temperature: 0.3
  • Max Tokens: 1000

🔍 Troubleshooting

Common Issues

"Invalid API Key" Error

  • Cause: Incorrect or expired API key
  • Solution: Regenerate API key in OpenAI dashboard
  • Prevention: Store keys securely, rotate regularly

"Rate Limit Exceeded" Error

  • Cause: Too many requests per minute
  • Solution: Implement exponential backoff (see the sketch below)
  • Prevention: Monitor usage, implement rate limiting
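
A simple retry loop with exponential backoff looks like the sketch below. The delays and retry count are illustrative, `makeRequest` is a stand-in for whatever function issues the API call, and the `status` check assumes the error object exposes the HTTP status code, as the official SDK's errors do.

javascript
// Retry on HTTP 429, waiting 1s, 2s, 4s, ... between attempts
async function withBackoff(makeRequest, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await makeRequest();
    } catch (err) {
      if (err.status !== 429 || attempt === maxRetries) throw err; // only retry rate limits
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}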

"Context Length Exceeded" Error

  • Cause: Input too long for model context
  • Solution: Reduce input length or use larger context model
  • Prevention: Set appropriate max_tokens

"Model Not Found" Error

  • Cause: Model name typo or unavailable model
  • Solution: Check model name spelling
  • Prevention: Use the model list returned by the API

Debugging Tips

  1. Check API key in OpenAI dashboard
  2. Monitor usage and billing
  3. Review request logs in MCP for WP
  4. Test with OpenAI Playground first
  5. Check rate limits and quotas

📚 Additional Resources

Official Documentation

Community Resources

Pricing & Billing

🔐 Security Best Practices

  1. Never expose API keys in client-side code
  2. Use environment variables for key storage (see the sketch after this list)
  3. Implement rate limiting to prevent abuse
  4. Monitor usage for unusual patterns
  5. Rotate keys regularly
  6. Use least privilege access
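
For item 2, the point is that the key lives in the server environment and never appears in source code, version control, or anything delivered to the browser. A minimal Node.js illustration; in a WordPress deployment the equivalent is defining the key in server configuration rather than hard-coding it in theme or plugin files.

javascript
import OpenAI from "openai";

// The key is provided by the environment, e.g. `export OPENAI_API_KEY=sk-...`
// It is never hard-coded and never shipped to the client.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });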

📞 Support

OpenAI Support

MCP for WP Support

  • Documentation: This guide and related pages
  • GitHub Issues: Report bugs or request features
  • Community: Join our Discord for help

Ready to get started? Configure your OpenAI integration or explore other providers!