Nexora AI Agent - Provider Configuration Guide

Overview

This guide provides detailed configuration instructions for each supported AI provider in the Nexora Agent Mode system.

General Configuration Pattern

All providers follow a consistent configuration pattern in the .env file:

<PROVIDER>_API_KEY=<your_key>
<PROVIDER>_MODEL=<model_name>
<PROVIDER>_ENABLED=<true|false>

Additional provider-specific settings may be required.
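As an illustration of this pattern, a provider loader might read the three variables like so. This is a sketch only; `load_provider_config` and `ProviderConfig` are hypothetical names, not part of Nexora's public API:

```python
import os
from dataclasses import dataclass

@dataclass
class ProviderConfig:
    api_key: str
    model: str
    enabled: bool

def load_provider_config(prefix: str) -> ProviderConfig:
    """Read <PREFIX>_API_KEY, <PREFIX>_MODEL, <PREFIX>_ENABLED from the environment."""
    return ProviderConfig(
        api_key=os.environ.get(f"{prefix}_API_KEY", ""),
        model=os.environ.get(f"{prefix}_MODEL", ""),
        # Any value other than "true" (case-insensitive) disables the provider.
        enabled=os.environ.get(f"{prefix}_ENABLED", "false").lower() == "true",
    )

# Example:
os.environ["OPENAI_API_KEY"] = "sk-proj-example"
os.environ["OPENAI_MODEL"] = "gpt-4"
os.environ["OPENAI_ENABLED"] = "true"
cfg = load_provider_config("OPENAI")
```

In practice these values usually come from a `.env` file loaded at startup rather than being set in code.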

OpenAI Configuration

Obtaining API Key

  1. Go to https://platform.openai.com
  2. Sign up or log in
  3. Navigate to API Keys section
  4. Click “Create new secret key”
  5. Copy the key (it will only be shown once)

Configuration

OPENAI_API_KEY=sk-proj-...
OPENAI_MODEL=gpt-4
OPENAI_ENABLED=true

Available Models

Rate Limits and Quotas

Free Tier:

Pay-as-you-go:

Cost Considerations

Error Codes

Troubleshooting

Issue: “Invalid API key”

# Verify key format (should start with sk-)
echo $OPENAI_API_KEY

# Test with curl
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

Issue: Rate limit errors

# Increase timeout and retries
MAX_RETRIES=5
REQUEST_TIMEOUT=60

# Or upgrade account tier
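The MAX_RETRIES setting above maps naturally onto an exponential-backoff loop around the provider call. A minimal sketch, assuming the callable raises on a rate-limit error (`with_retries` and the stub `flaky` function are illustrative, not Nexora internals):

```python
import time

def with_retries(call, max_retries=5, base_delay=1.0):
    """Retry `call` on failure, doubling the delay after each failed attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Example: a call that fails twice, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_retries(flaky, max_retries=5, base_delay=0.01)
```

A production version would catch only retryable errors (HTTP 429/5xx) and honor any `Retry-After` header the provider returns.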

Google Gemini Configuration

Obtaining API Key

  1. Go to https://makersuite.google.com/app/apikey
  2. Sign in with Google account
  3. Click “Create API Key”
  4. Select or create a Google Cloud project
  5. Copy the generated key

Configuration

GOOGLE_API_KEY=AIza...
GEMINI_MODEL=gemini-pro
GEMINI_ENABLED=true

Available Models

Rate Limits and Quotas

Free Tier:

Paid Tier:

Cost Considerations

Safety Settings

Gemini includes built-in safety filters that may block certain content:

# Custom safety settings (future enhancement)
# Sketch: requires `pip install google-generativeai` and a prior
# genai.configure(api_key=...) call.
import google.generativeai as genai

generation_config = genai.types.GenerationConfig(
    temperature=0.7,
    max_output_tokens=1000,
)

safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
]

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "Your prompt",
    generation_config=generation_config,
    safety_settings=safety_settings,
)

Error Codes

Troubleshooting

Issue: “API key not valid”

# Verify API key is enabled
# Check Google Cloud Console > APIs & Services > Credentials

# Ensure Generative Language API is enabled
# Google Cloud Console > APIs & Services > Enabled APIs

Issue: Safety filter blocks

# Review prompt content
# Gemini has stricter safety filters than some providers
# Rephrase prompt or use different provider

xAI Grok Configuration

Obtaining API Key

  1. Contact xAI for API access: https://x.ai
  2. Follow their onboarding process
  3. Receive API key and endpoint information

Note: xAI Grok API access may be limited or require application.

Configuration

XAI_API_KEY=xai-...
XAI_ENDPOINT=https://api.x.ai/v1
XAI_MODEL=grok-1
XAI_ENABLED=true

Available Models

Rate Limits and Quotas

Details depend on xAI account tier:

Cost Considerations

Error Codes

Troubleshooting

Issue: Connection errors

# Verify endpoint URL
echo $XAI_ENDPOINT

# Test connectivity
curl -I $XAI_ENDPOINT

# Check xAI status page for outages

Issue: Authentication failed

# Verify API key format
# Contact xAI support if issues persist

Generic HTTP Provider Configuration

Purpose

The Generic HTTP provider allows integration with any AI API that follows standard HTTP/JSON patterns, including:

Basic Configuration

GENERIC_API_KEY=your_api_key_here
GENERIC_ENDPOINT=https://api.example.com/v1/completions
GENERIC_ENABLED=true

Request Format Expectations

The generic provider sends POST requests with:

{
  "prompt": "User's prompt text",
  "temperature": 0.7,
  "max_tokens": 1000
}

Headers:

Authorization: Bearer <GENERIC_API_KEY>
Content-Type: application/json
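Put together, the request the generic provider sends is equivalent to the following. This sketch only assembles the request; `build_generic_request` is an illustrative helper, and the endpoint is the placeholder from the configuration above:

```python
import json

def build_generic_request(api_key, endpoint, prompt, temperature=0.7, max_tokens=1000):
    """Assemble the POST request the generic provider sends: (url, headers, body)."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return endpoint, headers, json.dumps(body)

endpoint, headers, body = build_generic_request(
    "your_api_key_here",
    "https://api.example.com/v1/completions",
    "User's prompt text",
)
# The actual send would be e.g. requests.post(endpoint, headers=headers, data=body)
```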

Response Format Expectations

The provider looks for response text in these fields (in order):

  1. text
  2. response
  3. output

Example:

{
  "text": "Generated response text"
}

or

{
  "response": "Generated response text"
}
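The field lookup described above can be sketched as follows (`extract_response_text` is a hypothetical helper, not the actual Nexora implementation):

```python
# Fields checked, in priority order.
RESPONSE_FIELDS = ("text", "response", "output")

def extract_response_text(data: dict) -> str:
    """Return the first of "text", "response", "output" present in the JSON body."""
    for field in RESPONSE_FIELDS:
        if field in data:
            return data[field]
    raise ValueError(f"No response text found: expected one of {RESPONSE_FIELDS}")

# Examples:
extract_response_text({"text": "hi"})      # matched on "text"
extract_response_text({"response": "hi"})  # matched on "response"
```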

Custom Configuration Examples

Anthropic Claude

GENERIC_API_KEY=sk-ant-...
GENERIC_ENDPOINT=https://api.anthropic.com/v1/messages
GENERIC_ENABLED=true

Note: Claude has a different API format. Consider creating a dedicated provider for production use.

Self-Hosted Ollama

GENERIC_API_KEY=not_required
GENERIC_ENDPOINT=http://localhost:11434/api/generate
GENERIC_ENABLED=true

Cohere

GENERIC_API_KEY=<cohere_api_key>
GENERIC_ENDPOINT=https://api.cohere.ai/v1/generate
GENERIC_ENABLED=true

Advanced Usage

Custom Headers:

response = agent.execute(
    prompt="Your prompt",
    provider="generic_http",
    headers={
        "X-Custom-Header": "value",
        "X-Another-Header": "value"
    }
)

Custom Payload:

response = agent.execute(
    prompt="Your prompt",
    provider="generic_http",
    payload={
        "model": "custom-model",
        "prompt": "Your prompt",
        "custom_param": "value"
    }
)

GET Requests:

response = agent.execute(
    prompt="Your prompt",
    provider="generic_http",
    method="GET"
)

Troubleshooting

Issue: “Provider not configured”

# Verify endpoint is accessible
curl -I $GENERIC_ENDPOINT

# Test authentication
curl -X POST $GENERIC_ENDPOINT \
  -H "Authorization: Bearer $GENERIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "test"}'

Issue: “No response text found”

# Check response format from your API
# Ensure it includes "text", "response", or "output" field

# Example: test your API directly
curl -X POST $GENERIC_ENDPOINT \
  -H "Authorization: Bearer $GENERIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "hello"}' | jq .

Multi-Provider Strategy

Development:

DEFAULT_PROVIDER=openai
FALLBACK_PROVIDERS=google_gemini
OPENAI_ENABLED=true
GEMINI_ENABLED=true
XAI_ENABLED=false
GENERIC_ENABLED=false

Production (High Availability):

DEFAULT_PROVIDER=openai
FALLBACK_PROVIDERS=google_gemini,xai_grok
OPENAI_ENABLED=true
GEMINI_ENABLED=true
XAI_ENABLED=true
GENERIC_ENABLED=false

Cost-Optimized:

DEFAULT_PROVIDER=google_gemini
FALLBACK_PROVIDERS=openai
OPENAI_MODEL=gpt-3.5-turbo
GEMINI_ENABLED=true
OPENAI_ENABLED=true

Provider Selection Criteria

Choose OpenAI when:

Choose Google Gemini when:

Choose xAI Grok when:

Choose Generic/Custom when:

Configuration Best Practices

Security

  1. Never commit .env files
    echo ".env" >> .gitignore
    
  2. Use environment-specific configs
    .env.dev     # Development
    .env.staging # Staging
    .env.prod    # Production
    
  3. Rotate API keys regularly
    • Set calendar reminders
    • Update in all environments
    • Test after rotation
  4. Use secrets management in CI/CD
    • GitHub Secrets
    • AWS Secrets Manager
    • HashiCorp Vault

Performance

  1. Set appropriate timeouts
    REQUEST_TIMEOUT=30  # Balance speed and reliability
    
  2. Configure retry limits
    MAX_RETRIES=3  # Prevent excessive retries
    
  3. Monitor token usage
    • Track in metadata
    • Set alerts for high usage
    • Implement rate limiting in application

Reliability

  1. Enable multiple providers
    OPENAI_ENABLED=true
    GEMINI_ENABLED=true
    
  2. Configure fallback chain
    FALLBACK_PROVIDERS=google_gemini,xai_grok
    
  3. Test all providers regularly
    nexora-agent test-provider --provider openai
    nexora-agent test-provider --provider google_gemini
    

Cost Management

  1. Use cost-effective models for simple tasks
    OPENAI_MODEL=gpt-3.5-turbo
    
  2. Set token limits
    response = agent.execute(prompt, max_tokens=500)
    
  3. Monitor spending
    • Check provider dashboards
    • Set budget alerts
    • Review usage patterns

Configuration Validation

Validation Checklist

Testing Configuration

# Check provider status
nexora-agent status

# Test each enabled provider
nexora-agent test-provider --provider openai
nexora-agent test-provider --provider google_gemini

# Test fallback chain
# (Temporarily disable default provider to test fallback)

Conclusion

Proper provider configuration is crucial for reliable operation of the Nexora AI Agent Mode system. Follow this guide to configure each provider correctly, implement best practices, and ensure high availability.

For more information: