Nexora AI Agent Mode is a unified AI orchestration layer that provides seamless integration with multiple AI providers through a consistent interface. The agent automatically handles provider selection, fallback mechanisms, and retry logic to ensure high availability and reliability.
Agent Mode refers to the operational paradigm where the Nexora Agent acts as an intelligent orchestrator that routes each request to the best available provider. Its core principles:

- Provider Agnostic: every provider is accessed through the same interface, so application code never depends on a specific vendor.
- Fault Tolerant: failed requests are retried and automatically routed through the fallback chain.
- Configuration Driven: behavior is controlled through environment variables and configuration files, not code changes.
- Audit First: requests and responses are logged for traceability.
Models: GPT-4, GPT-3.5-turbo, and others
Configuration:
```
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_ENABLED=true
```
API Endpoint: https://api.openai.com/v1
Rate Limits:
Best For:
Error Handling:
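As a rough illustration of how provider settings like these might be consumed, here is a minimal reader over `os.environ`. The helper name and its defaults are assumptions for illustration, not Nexora's actual configuration code:

```python
import os

def load_provider_settings(prefix: str) -> dict:
    """Collect one provider's settings from environment variables.

    Hypothetical helper: reads <PREFIX>_API_KEY, <PREFIX>_MODEL, and
    <PREFIX>_ENABLED, mirroring the variables shown above.
    """
    return {
        "api_key": os.environ.get(f"{prefix}_API_KEY"),
        "model": os.environ.get(f"{prefix}_MODEL"),
        # Treat anything other than the literal string "true" as disabled
        "enabled": os.environ.get(f"{prefix}_ENABLED", "false").lower() == "true",
    }

# Example: populate the environment, then read the settings back
os.environ["OPENAI_API_KEY"] = "sk-test"
os.environ["OPENAI_MODEL"] = "gpt-4"
os.environ["OPENAI_ENABLED"] = "true"
print(load_provider_settings("OPENAI"))
```

The same pattern applies unchanged to the Gemini, Grok, and generic provider variables below.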
Models: gemini-pro, gemini-pro-vision
Configuration:
```
GOOGLE_API_KEY=AIza...
GEMINI_MODEL=gemini-pro
GEMINI_ENABLED=true
```
API Endpoint: Via google-generativeai SDK
Rate Limits:
Best For:
Error Handling:
Models: grok-1, grok-beta
Configuration:
```
XAI_API_KEY=xai-...
XAI_ENDPOINT=https://api.x.ai/v1
XAI_MODEL=grok-1
XAI_ENABLED=true
```
API Endpoint: https://api.x.ai/v1
Rate Limits:
Best For:
Error Handling:
Purpose: Support any custom AI API that follows HTTP/JSON patterns
Configuration:
```
GENERIC_API_KEY=...
GENERIC_ENDPOINT=https://your-api.com/v1/completions
GENERIC_ENABLED=true
```
Expected Request Format:

```json
{
  "prompt": "Your prompt here",
  "temperature": 0.7,
  "max_tokens": 1000
}
```

Expected Response Format (any one of these field names is accepted):

```json
{
  "text": "Generated response",
  "output": "Alternative field name",
  "response": "Another alternative"
}
```
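Because the generic provider accepts any one of those field names, text extraction amounts to a simple key scan. The helper below is an illustrative sketch, not the shipped implementation:

```python
def extract_text(payload: dict) -> str:
    """Return the generated text from a generic-provider response.

    Checks the accepted field names in order: "text", "output", "response".
    """
    for key in ("text", "output", "response"):
        if key in payload and isinstance(payload[key], str):
            return payload[key]
    raise ValueError("No recognized text field in response payload")

print(extract_text({"output": "Generated response"}))
```

Checking the keys in a fixed order keeps the behavior deterministic when a response happens to contain more than one of the alternatives.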
Customization: The generic provider can be extended to support custom:
Best For:
Error Handling:
Default Provider:

```
DEFAULT_PROVIDER=openai
```

The agent will attempt to use this provider first for all requests.

Fallback Chain:

```
FALLBACK_PROVIDERS=google_gemini,xai_grok
```

If the default provider fails, the agent tries each fallback in order.
```
User Request
    ↓
1. Try DEFAULT_PROVIDER with retries
    ↓ (if fails)
2. Try first FALLBACK_PROVIDER with retries
    ↓ (if fails)
3. Try second FALLBACK_PROVIDER with retries
    ↓ (if all fail)
Return error with details
```
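The chain above amounts to a nested loop over providers and retry attempts. Here is a self-contained sketch of that logic; the provider callables and names are stand-ins, not the real Nexora classes:

```python
def execute_with_fallback(prompt, providers, max_retries=3):
    """Try each provider in order, retrying up to max_retries times,
    and return (provider_name, text) from the first success.

    `providers` maps a provider name to a callable that either returns
    generated text or raises an exception on failure.
    """
    errors = []
    for name, call in providers.items():
        for attempt in range(1, max_retries + 1):
            try:
                return name, call(prompt)
            except Exception as exc:
                errors.append(f"{name} attempt {attempt}: {exc}")
    # All providers exhausted: surface the accumulated details
    raise RuntimeError("All providers failed: " + "; ".join(errors))

# Stand-in providers: the default always fails, the fallback succeeds
def flaky(prompt):
    raise TimeoutError("timed out")

def stable(prompt):
    return f"echo: {prompt}"

provider, text = execute_with_fallback("hi", {"openai": flaky, "google_gemini": stable})
print(provider, text)
```

Collecting per-attempt errors, as above, is what allows the final failure to "return error with details" rather than only the last exception.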
```
MAX_RETRIES=3
REQUEST_TIMEOUT=30
```

Retry Behavior:
- Each provider is attempted up to MAX_RETRIES times
- Each request times out after REQUEST_TIMEOUT seconds

Users can override the default by specifying a provider:
CLI:

```bash
python -m nexora_agent.cli run "Your prompt" --provider google_gemini
```

Python API:

```python
response = agent.execute("Your prompt", provider="xai_grok")
```

GUI: Select from the dropdown menu before executing.

Web Console: Select from the provider dropdown in the interface.
Basic Usage:

```bash
# Use default provider
nexora-agent run "Explain quantum computing"

# Specify provider
nexora-agent run "Explain quantum computing" --provider google_gemini

# Specify model
nexora-agent run "Write a poem" --provider openai --model gpt-3.5-turbo
```

Testing:

```bash
# Test specific provider
nexora-agent test-provider --provider openai

# Check all provider status
nexora-agent status

# List available providers
nexora-agent list
```
Basic Usage:

```python
from nexora_agent import NexoraAgent

# Initialize agent with default config
agent = NexoraAgent()

# Execute with default provider
response = agent.execute("What is machine learning?")

if response.success:
    print(f"Provider: {response.provider}")
    print(f"Response: {response.text}")
    print(f"Time: {response.metadata['execution_time']}s")
else:
    print(f"Error: {response.error}")
```
Advanced Usage:

```python
from nexora_agent import NexoraAgent
from nexora_agent.config import ConfigLoader

# Custom configuration
config = ConfigLoader(config_path="custom_config.yaml")
agent = NexoraAgent(config)

# Execute with specific parameters
response = agent.execute(
    "Generate a creative story",
    provider="openai",
    model="gpt-4",
    temperature=0.9,
    max_tokens=500
)

# Check provider status
status = agent.get_provider_status()
for provider, info in status.items():
    print(f"{provider}: enabled={info['enabled']}, configured={info['configured']}")
```
```python
@dataclass
class AgentResponse:
    success: bool              # True if request succeeded
    text: str                  # Generated text response
    provider: str              # Provider that handled the request
    error: Optional[str]       # Error message if failed
    metadata: Dict[str, Any]   # Additional information
    timestamp: datetime        # When response was generated
```
Common Fields:
- execution_time: Time taken in seconds
- model: Model used for generation

Provider-Specific Fields:
- tokens_used: (OpenAI, xAI) Total tokens consumed
- status_code: (Generic HTTP) HTTP response code

Success:
```python
AgentResponse(
    success=True,
    text="Machine learning is a subset of artificial intelligence...",
    provider="openai",
    error=None,
    metadata={
        "model": "gpt-4",
        "execution_time": 2.34,
        "tokens_used": 156
    },
    timestamp=datetime(2025, 1, 24, 10, 30, 0)
)
```

Failure:

```python
AgentResponse(
    success=False,
    text="",
    provider="openai",
    error="Rate limit exceeded. Try again later.",
    metadata={
        "execution_time": 0.12
    },
    timestamp=datetime(2025, 1, 24, 10, 30, 0)
)
```
- .env files: Use .env.example as a template; keep environment-specific files such as .env.dev and .env.prod.
- Verify provider health with the status command.
- Check response.success before using the response text.

See /docs/architecture.md for detailed instructions on adding new providers.
Quick Overview:
- Add a new provider class under nexora_agent/providers/ that subclasses BaseProvider
- Implement the execute() method
- Register the provider with the ProviderFactory
- Document its settings in .env.example

Custom Configuration Loader:
```python
from nexora_agent.config import ConfigLoader

class MyConfigLoader(ConfigLoader):
    def _load_config(self):
        # Custom loading logic
        config = super()._load_config()
        # Modify config
        return config

agent = NexoraAgent(config=MyConfigLoader())
```
Custom Provider:

```python
from nexora_agent.providers import BaseProvider, AgentResponse

class MyCustomProvider(BaseProvider):
    def execute(self, prompt, **kwargs):
        # Custom implementation
        return AgentResponse(
            success=True,
            text="Custom response",
            provider="my_custom"
        )
```
- Check overall status: nexora-agent status
- Test a single provider: nexora-agent test-provider --provider openai
- Verify the API keys in your .env file
- Watch the logs: tail -f agent.log
- For timeouts, raise the limit: REQUEST_TIMEOUT=60
- Cap response size: response = agent.execute(prompt, max_tokens=500)
- Switch to a faster model: OPENAI_MODEL=gpt-3.5-turbo
Nexora AI Agent Mode provides a robust, flexible, and reliable way to integrate multiple AI providers into your applications. By abstracting provider complexity and implementing intelligent fallback mechanisms, it ensures high availability and simplifies development.
For more information:
- /docs/architecture.md
- /docs/deployment.md
- /docs/providers.md
- /docs/changelog.md