Enterprise-grade AI deployment and orchestration platform
Public entry point: https://nexorasim.github.io
Nexora AI Agent Mode is a multi-provider AI orchestration platform designed for enterprise deployments with audit-first, reproducible workflows. The system provides seamless integration with OpenAI, Google Gemini, xAI Grok, and custom AI providers through a unified interface.
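Conceptually, a unified interface means every provider sits behind the same small contract, so callers never branch on which backend is active. The sketch below illustrates that idea with hypothetical names (`Provider`, `ProviderResponse`, `EchoProvider`); it is not the package's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ProviderResponse:
    """Normalized result shape shared by every adapter."""
    success: bool
    text: str = ""
    error: str = ""


class Provider(ABC):
    """Contract each provider adapter (OpenAI, Gemini, Grok, ...) implements."""

    @abstractmethod
    def complete(self, prompt: str) -> ProviderResponse:
        ...


class EchoProvider(Provider):
    """Stand-in provider used here purely for demonstration."""

    def complete(self, prompt: str) -> ProviderResponse:
        return ProviderResponse(success=True, text=f"echo: {prompt}")


# Callers pick a provider by name; the call site is identical for all of them.
providers = {"echo": EchoProvider()}
resp = providers["echo"].complete("hello")
```

Because every adapter returns the same response shape, orchestration features such as retries and fallback can be written once against the contract rather than per provider.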
This is a monorepo containing:
- `/agent/`: Multi-provider orchestration engine
- `/gui/`: Cross-platform desktop application packaged with PyInstaller
- `/web/`: Next.js + TypeScript console
- `/docs/`: Comprehensive guides and references
- `.github/workflows/`: Automated build and deployment

```bash
cd agent
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env
# Edit .env with your API keys
python -m nexora_agent.cli run "Test prompt"
```
```bash
cd gui
pip install -r requirements.txt
pip install -r ../agent/requirements.txt
python main.py
```
```bash
cd web
npm install
npm run dev
# Open http://localhost:3000
```
```
nexorasim.github.io/
├── agent/                  # Python AI Agent core
│   ├── nexora_agent/       # Main package
│   │   ├── core/           # Agent orchestration
│   │   ├── providers/      # Provider adapters
│   │   ├── config/         # Configuration management
│   │   └── mcp_cli/        # MCP-style CLI
│   ├── requirements.txt
│   ├── .env.example
│   └── README.md
├── gui/                    # Desktop GUI
│   ├── main.py             # GUI entry point
│   ├── nexora_agent.spec   # PyInstaller config
│   ├── requirements.txt
│   └── README.md
├── web/                    # Next.js front-end
│   ├── app/                # Next.js App Router
│   ├── package.json
│   ├── next.config.js
│   └── README.md
├── docs/                   # Documentation
│   ├── architecture.md
│   ├── deployment.md
│   ├── agent-mode.md
│   ├── providers.md
│   └── changelog.md
├── .github/
│   └── workflows/          # CI/CD pipelines
│       ├── deploy-web.yml
│       ├── build-agent.yml
│       └── ci.yml
└── README.md               # This file
```
The web console is deployed to GitHub Pages automatically on every push to the `main` branch.
Access it at: https://nexorasim.github.io
Desktop builds are produced automatically when a release is created. Download them from GitHub Releases.
The agent can be installed by cloning the repository and installing its dependencies (pip distribution is planned).
All components use environment variables for configuration:
```bash
# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4
OPENAI_ENABLED=true

# Google Gemini
GOOGLE_API_KEY=AIza...
GEMINI_MODEL=gemini-pro
GEMINI_ENABLED=true

# xAI Grok
XAI_API_KEY=xai-...
XAI_ENDPOINT=https://api.x.ai/v1
XAI_MODEL=grok-1
XAI_ENABLED=false

# Generic HTTP
GENERIC_API_KEY=...
GENERIC_ENDPOINT=https://your-api.com/v1/completions
GENERIC_ENABLED=false

# Agent Settings
DEFAULT_PROVIDER=openai
FALLBACK_PROVIDERS=google_gemini,xai_grok
MAX_RETRIES=3
REQUEST_TIMEOUT=30
LOG_LEVEL=INFO
```
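At runtime these agent-level settings have to be parsed out of the environment. A minimal sketch of such a loader follows; the function name and the exact defaults are assumptions here, not the actual `nexora_agent.config` code.

```python
import os


def load_agent_settings(env=os.environ):
    """Parse agent-level settings from environment variables, with defaults.

    FALLBACK_PROVIDERS is a comma-separated list; empty entries are dropped.
    """
    return {
        "default_provider": env.get("DEFAULT_PROVIDER", "openai"),
        "fallback_providers": [
            p.strip()
            for p in env.get("FALLBACK_PROVIDERS", "").split(",")
            if p.strip()
        ],
        "max_retries": int(env.get("MAX_RETRIES", "3")),
        "request_timeout": float(env.get("REQUEST_TIMEOUT", "30")),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }


# Passing a plain dict instead of os.environ makes the loader easy to test:
settings = load_agent_settings({
    "DEFAULT_PROVIDER": "openai",
    "FALLBACK_PROVIDERS": "google_gemini,xai_grok",
})
```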
See Provider Configuration Guide for details.
```bash
# Run with default provider
nexora-agent run "Your prompt here"

# Specify provider
nexora-agent run "Your prompt" --provider google_gemini

# Test provider
nexora-agent test-provider --provider openai

# Check status
nexora-agent status
```
```python
from nexora_agent import NexoraAgent

agent = NexoraAgent()
response = agent.execute("Your prompt", provider="openai")

if response.success:
    print(response.text)
else:
    print(f"Error: {response.error}")
```bash
# Deploy configuration
nexora-mcp deploy --env prod

# Trigger workflow
nexora-mcp workflow build-agent

# Run tests
nexora-mcp test --component agent

# Health check
nexora-mcp health https://nexorasim.github.io
```
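A health check like `nexora-mcp health` presumably amounts to an HTTP probe of the given URL. The sketch below shows one way to write such a probe (this is an assumption, not the actual `nexora-mcp` implementation); it spins up a throwaway local server so the example runs offline.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def health(url: str, timeout: float = 5.0):
    """Return (ok, status_code) for a GET against the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400, resp.status
    except urllib.error.HTTPError as exc:   # server answered with 4xx/5xx
        return False, exc.code
    except urllib.error.URLError:           # DNS failure, refused, timeout
        return False, None


# Demo against a local server that always answers 200:
class _OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), _OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok, status = health(f"http://127.0.0.1:{server.server_address[1]}/")
server.shutdown()
```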
```bash
# Agent tests
cd agent
pytest tests/

# Web tests
cd web
npm test
```
```bash
# Desktop GUI
cd gui
pyinstaller nexora_agent.spec
# Output: dist/NexoraAgent/

# Web console
cd web
npm run build
# Output: out/
```
Contributions are welcome; see the individual component README files for specific guidelines.
See the LICENSE file for details.
NexoraSIM
| Built with enterprise deployments in mind | NexoraSIM 2025 |