Setup

Configuration & Environment

Complete guide to configuring the AI Agent Automation Platform using environment variables and configuration files.

The .env File

The primary way to configure the backend engine is through environment variables. Copy the .env.example file to .env in the backend directory.

Server Settings

PORT: The port the Express server will listen on (default: 5000).
NODE_ENV: Set to 'production' or 'development'.

Database Configuration

MONGO_URI: The connection string for your MongoDB instance.
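Typical values look like the following. The database name (agent-platform) and the placeholder credentials are illustrative assumptions; substitute your own.

```
# Local instance (database name is illustrative)
MONGO_URI=mongodb://localhost:27017/agent-platform

# MongoDB Atlas (SRV connection string; replace the placeholders)
MONGO_URI=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/agent-platform
```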

AI Provider Settings

OLLAMA_HOST: Base URL of your local Ollama instance (e.g., http://localhost:11434). Required when using the 'ollama' provider.
GROQ_API_KEY: API key for the Groq provider. Required when using Groq-hosted LLM models.
OPENAI_API_KEY: API key for OpenAI models (e.g., gpt-4o, gpt-4o-mini). Required when using the 'openai' provider.
GEMINI_API_KEY: API key for Google Gemini models (e.g., gemini-2.5-flash). Required when using the 'gemini' provider.
HF_API_KEY: API key for the Hugging Face Inference API. Required only when using the 'huggingface' provider.

Security & Secrets

JWT_SECRET: A strong secret for signing authentication tokens.

Advanced Configuration

For more granular control, you can modify the configuration objects in src/config/index.ts. This includes settings for:

Worker Settings

Adjust the number of concurrent workflow executions and retry intervals.

Embedding Config

Switch between different embedding models and chunk sizes for RAG.

Logging Policies

Configure log retention periods and verbosity levels.

Scheduler Intervals

Change how often the scheduler polls for pending jobs.

Example .env

# Server
PORT=5000
MONGO_URI=
JWT_SECRET=

# ---------------------------------------
# AI Providers
# Only configure providers you plan to use.
# The system auto-detects availability based on these variables.
# ---------------------------------------

# Cloud Providers
GROQ_API_KEY=
OPENAI_API_KEY=
GEMINI_API_KEY=
HF_API_KEY=

# Local Provider (Ollama)
OLLAMA_HOST=http://localhost:11434

# ---------------------------------------
# Worker Runtime Settings
# ---------------------------------------

WORKER_POLL_INTERVAL_MS=2000   # Poll interval in ms (default: 2000)
WORKER_BATCH_SIZE=1            # Number of tasks to claim at once
WORKER_MAX_ATTEMPTS=3          # Max retry attempts per task

# ---------------------------------------
# Optional: Service Authentication
# ---------------------------------------

WORKER_SERVICE_TOKEN=

Pro-Tip: Run Models Locally with Ollama

You can run LLMs locally using Ollama for faster iteration and zero API costs. Set OLLAMA_HOST in your .env, create an agent with provider ollama, and assign it to your workflow. The system will automatically route execution to your local model.