A full-stack Nuxt.js application for managing AI agents that integrate with Chatwoot instances. This system enables organizations to deploy multiple AI agents that can automatically respond to customer inquiries via chat and email using Prediction Guard's secure AI models.
⚠️ Note: This Quick Start guide is for development environment only. For production deployment, see the Production Deployment section below.
git clone <repository-url>
cd agent-ai-server
npm install

# Copy the environment template
cp env.example .env
# Edit the .env file with your configuration
nano .env

Required environment variables:
# Database
MONGODB_URI=mongodb://localhost:27017/agent-ai-server
# JWT Configuration
JWT_SECRET=your-super-secret-jwt-key
JWT_REFRESH_SECRET=your-super-secret-refresh-key
JWT_EXPIRE=24h
JWT_REFRESH_EXPIRE=7d
# Chatwoot Integration (optional)
CHATWOOT_URL=https://your-chatwoot-instance.com
CHATWOOT_API_TOKEN=your-chatwoot-api-token
# Application
APP_NAME=Agent AI Server
NODE_ENV=development

npm run create-admin

This will create an admin user with:
- Email: [email protected]
- Password: AdminPassword123
- Role: admin
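If npm run create-admin cannot reach the database, you can confirm MongoDB is up first. This is a minimal check, assuming mongosh is installed locally; the URI matches the MONGODB_URI value above:

# Verify MongoDB is reachable (a healthy instance answers { ok: 1 })
mongosh "mongodb://localhost:27017/agent-ai-server" --eval "db.runCommand({ ping: 1 })"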
npm run dev

For agents to use context from documents, you need the Qdrant vector database running. The embedding model loads automatically when the server starts.
🎯 Quick Setup (Recommended for Development)
- Download and run the Qdrant binary:
# Download Qdrant for macOS
curl -L https://github.com/qdrant/qdrant/releases/latest/download/qdrant-x86_64-apple-darwin.tar.gz -o qdrant.tar.gz
# Extract and run
tar -xzf qdrant.tar.gz
chmod +x qdrant
./qdrant &

- Verify Qdrant is running:
curl http://localhost:6333/collections
# Should return: {"result":{"collections":[]},"status":"ok","time":...}

💡 Managing Qdrant Process
# Check if Qdrant is running
ps aux | grep qdrant | grep -v grep
# Stop Qdrant (find process ID first)
kill $(ps aux | grep "./qdrant" | grep -v grep | awk '{print $2}')
# Restart Qdrant
./qdrant &
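If you want Qdrant to keep running after the terminal closes and to keep a log, a small variation on the start command works. This is a sketch; the log and PID file names are arbitrary choices:

# Start Qdrant detached from the terminal, writing output to a log file
nohup ./qdrant > qdrant.log 2>&1 &
# Remember the PID so it can be stopped later with: kill $(cat qdrant.pid)
echo $! > qdrant.pid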
🐳 Alternative: Docker Options

Option A: Using Docker directly
docker run -d \
--name agent-ai-qdrant \
-p 6333:6333 \
-v qdrant_data:/qdrant/storage \
qdrant/qdrant:latest
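Once the container is up, standard Docker commands confirm it is healthy (the container name matches the --name flag above):

# Confirm the container is running and follow its logs
docker ps --filter name=agent-ai-qdrant
docker logs -f agent-ai-qdrant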
Option B: Using Docker Compose (if available)

# Start the Qdrant vector database
docker compose up -d qdrant

🔧 Environment Configuration
Make sure your .env file includes:
QDRANT_URL=http://localhost:6333

✅ Verify Setup
- Check Qdrant:
curl http://localhost:6333/collections

- Check your server:
curl http://localhost:3000/api/health

- Check embedding model: Visit the Settings page in your dashboard; it should show "Embedding Model: Loaded"
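Optionally, you can confirm Qdrant accepts writes by creating and removing a throwaway collection. The collection name below is arbitrary; 384 dimensions matches the all-MiniLM-L12-v2 embeddings this project uses:

# Create a temporary 384-dimension collection, then delete it
curl -X PUT http://localhost:6333/collections/smoke_test \
  -H 'Content-Type: application/json' \
  -d '{"vectors": {"size": 384, "distance": "Cosine"}}'
curl -X DELETE http://localhost:6333/collections/smoke_test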
🚀 Complete Verification Script
echo "🔍 Checking Qdrant..."
curl -s http://localhost:6333/collections && echo " ✅ Qdrant is running"
echo "🔍 Checking server..."
curl -s http://localhost:3000/api/health && echo " ✅ Server is running"
echo "🔍 Checking processes..."
ps aux | grep -E "(qdrant|npm run dev)" | grep -v grep
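The same checks can be wrapped in a small script that exits non-zero on failure, which is useful for CI or cron. The file name and structure here are a sketch, not part of the project:

#!/usr/bin/env bash
# check-stack.sh: exit 1 if Qdrant or the dev server is unreachable
fail=0
curl -sf http://localhost:6333/collections > /dev/null && echo "✅ Qdrant is running" || { echo "❌ Qdrant not reachable"; fail=1; }
curl -sf http://localhost:3000/api/health > /dev/null && echo "✅ Server is running" || { echo "❌ Server not reachable"; fail=1; }
exit $fail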
📊 What Happens During Startup

When you start your development server (npm run dev), you'll see:
✅ MongoDB Connected: localhost
Embedding model not loaded, attempting initialization...
Loading multilingual embedding model...
✅ Multilingual embedding model loaded successfully
The embedding model (Xenova/all-MiniLM-L12-v2) downloads automatically (~100MB) on first use and is cached locally.
🔧 Troubleshooting RAG Setup
| Issue | Solution |
|---|---|
| "Embedding model failed to load" | Ensure you have at least 500MB free disk space and stable internet connection for the initial model download |
| "Qdrant not connected" | Check if Qdrant is running: curl http://localhost:6333/collections |
| "No process running on port 6333" | Start Qdrant using one of the methods above |
| Model downloads slowly | First-time setup downloads ~100MB. Subsequent startups are fast as the model is cached |
| "Cannot connect to Qdrant" | Verify QDRANT_URL=http://localhost:6333 is in your .env file |
For more detailed troubleshooting, see RAG_SETUP.md.
Production Deployment

For production deployment, you have two options:
Use the automated deployment script for the easiest setup:
# Step 1: Clone repository to the required production directory
sudo mkdir -p /opt/agent-ai
sudo chown $USER:$USER /opt/agent-ai
git clone <repository-url> /opt/agent-ai
cd /opt/agent-ai
# Step 2: Make deployment script executable and run it
chmod +x deploy.sh
./deploy.sh production

⚠️ Important: The repository must be cloned to /opt/agent-ai for the deployment script to work correctly. The script expects this specific directory structure and will validate the location before proceeding.
The deployment script will automatically:
- Create backups of existing data
- Setup environment from production template
- Generate secure JWT secrets
- Configure SSL certificates (Let's Encrypt or self-signed)
- Deploy using Docker Compose with MongoDB, Qdrant, and Nginx
- Create admin user automatically with default credentials
- Display access information and management commands
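After the script finishes, the stack can be inspected with standard Docker Compose commands from the deployment directory (exact service names come from the project's docker-compose.yml):

cd /opt/agent-ai
# List the running services (app, MongoDB, Qdrant, Nginx)
docker compose ps
# Follow logs for all services
docker compose logs -f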
For more control over the deployment process, follow the detailed step-by-step guide:
📖 See DEPLOYMENT.md for complete manual deployment instructions
This includes:
- Server setup and prerequisites
- Repository cloning to the correct location (/opt/agent-ai)
- SSL certificate configuration
- Docker Compose service management
- Monitoring and maintenance procedures
- Troubleshooting common issues
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.