Trigger.dev AI Processing Setup

This document explains how to use trigger.dev for offloading AI processing from Vercel to dedicated background workers.

Overview

We've moved the following AI-intensive operations to trigger.dev:

  1. SOR Analysis - Long-running document analysis (can take 30+ minutes)
  2. Permit Analysis - Document extraction for permit data
  3. Document Analysis - General document processing

Setup Complete

✅ trigger.dev initialized with project ID: proj_mayvxkeadyyhrdtyoeno
✅ Tasks created in src/trigger/
✅ Client utilities for triggering tasks
✅ API routes updated to use trigger.dev
✅ Development server running
✅ Environment variables fixed for trigger.dev

Environment Variables Required

Make sure these are in your .env file:

# Supabase Configuration
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key

# AI API Keys
GEMINI_API_KEY=your_gemini_key
ANTHROPIC_API_KEY=your_claude_key

⚠️ Important: trigger.dev tasks require the SUPABASE_SERVICE_ROLE_KEY (not the anon key) to access storage without user authentication.
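
A minimal sketch of what utils/supabase/service-role.ts might look like (the factory name and options here are illustrative, not the actual file contents):

```typescript
// Hypothetical sketch of a service-role Supabase client for background tasks.
// It authenticates with the service role key instead of per-user cookies,
// so trigger.dev tasks can access storage without a signed-in user.
import { createClient } from "@supabase/supabase-js";

export function createServiceRoleClient() {
  const url = process.env.NEXT_PUBLIC_SUPABASE_URL;
  const key = process.env.SUPABASE_SERVICE_ROLE_KEY; // must NOT be the anon key

  if (!url || !key) {
    throw new Error(
      "Missing NEXT_PUBLIC_SUPABASE_URL or SUPABASE_SERVICE_ROLE_KEY"
    );
  }

  // Background tasks have no browser session, so session persistence is off.
  return createClient(url, key, { auth: { persistSession: false } });
}
```

Keep the service role key out of any NEXT_PUBLIC_ variable; it bypasses row-level security and must never reach the browser.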

File Structure

src/trigger/
├── example.ts              # Hello world example
├── sor-analysis.ts         # SOR document analysis task
├── permit-analysis.ts      # Permit document analysis task
├── document-analysis.ts    # General document analysis task
├── client.ts               # Client utilities for triggering tasks
└── index.ts                # Export all tasks

utils/supabase/
├── server.ts              # Standard Supabase client (uses cookies)
└── service-role.ts        # Service role client for trigger.dev tasks

app/api/
├── sor-analysis-trigger/   # New trigger.dev-based SOR API
└── task-status/[taskId]/   # Check task status

Usage

1. SOR Analysis with Trigger.dev

Old way (direct Vercel processing):

POST /api/sor-analysis

New way (trigger.dev background processing):

POST /api/sor-analysis-trigger

The new endpoint:

  • Returns immediately with a task ID
  • Processes the document in the background
  • Can handle long-running tasks (up to 1 hour)
  • No Vercel timeout issues
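
A sketch of calling the new endpoint from the frontend; the request body shape and the `taskId` response field are assumptions for illustration, not confirmed against the route code:

```typescript
// Hypothetical: kick off SOR analysis and get a task ID back immediately.
// Field names ("file", "taskId") are assumed for this sketch.
async function startSorAnalysis(file: File): Promise<string> {
  const form = new FormData();
  form.append("file", file);

  const res = await fetch("/api/sor-analysis-trigger", {
    method: "POST",
    body: form,
  });
  if (!res.ok) throw new Error(`Trigger failed: ${res.status}`);

  const { taskId } = await res.json(); // assumed response shape
  return taskId;
}
```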

2. Check Task Status

GET /api/task-status/[taskId]

Returns current status of the background task.
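
The check can be wrapped in a small polling helper; the `{ status }` response shape and the terminal status strings are assumptions about the route, not confirmed by its code:

```typescript
// Pure helper: capped exponential backoff, in milliseconds.
export function backoffMs(attempt: number, baseMs = 2000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Hypothetical poller for GET /api/task-status/[taskId].
export async function pollTaskStatus(
  taskId: string,
  maxAttempts = 60
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`/api/task-status/${taskId}`);
    if (!res.ok) throw new Error(`Status check failed: ${res.status}`);

    const { status } = await res.json(); // assumed response shape
    if (status === "COMPLETED" || status === "FAILED") return status;

    // Wait before the next poll: 2s, 4s, 8s, ... capped at 30s.
    await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
  }
  throw new Error(`Task ${taskId} did not settle after ${maxAttempts} polls`);
}
```

Backoff keeps a 30+ minute SOR analysis from hammering the status route with fixed-interval requests.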

3. Triggering Tasks Programmatically

import { triggerSORAnalysis } from '@/src/trigger/client';

const handle = await triggerSORAnalysis({
  fileUrl: 'path/to/uploaded/file.pdf', // Supabase storage path
  fileName: file.name,
  fileSize: file.size,
  fileType: file.type,
  model: 'gemini',
  useChunking: true,
  organizationId: orgId,
  userId: userId
});

console.log('Task ID:', handle.id);

Benefits

🚀 Performance

  • No Vercel function timeouts (tasks can run for up to 1 hour)
  • Dedicated compute resources for AI processing
  • Better resource allocation

💰 Cost Optimization

  • Vercel functions only handle API requests (fast)
  • Heavy AI processing on trigger.dev infrastructure
  • Pay only for actual processing time

🔧 Reliability

  • Background processing continues even if user closes browser
  • Automatic retries on failure
  • Better error handling and logging

📊 Monitoring

  • Real-time task status tracking
  • Processing time metrics
  • Error monitoring and alerts

Development

Running Locally

Run the trigger.dev development server alongside your Next.js dev server:

npx trigger.dev@latest dev

Available Tasks

  1. sor-analysis - SOR document analysis

    • Max duration: 3600 seconds (1 hour)
    • Supports Gemini and Claude models
    • Handles chunked processing
  2. permit-analysis - Permit document extraction

    • Max duration: 1800 seconds (30 minutes)
    • Multi-format support (PDF, images, DOC)
  3. document-analysis - General document processing

    • Max duration: 1800 seconds (30 minutes)
    • Binary and text document support
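
For reference, a trigger.dev v3 task with a duration cap looks roughly like this; the payload type mirrors the fields shown in the Usage section, and the run body is illustrative only, not the actual src/trigger/sor-analysis.ts:

```typescript
// Sketch of a task definition in the shape of src/trigger/sor-analysis.ts.
import { task } from "@trigger.dev/sdk/v3";

type SorAnalysisPayload = {
  fileUrl: string; // Supabase storage path
  fileName: string;
  model: "gemini" | "claude";
  useChunking: boolean;
};

export const sorAnalysis = task({
  id: "sor-analysis",
  maxDuration: 3600, // seconds — matches the 1 hour cap listed above
  run: async (payload: SorAnalysisPayload) => {
    // 1. Download the file from Supabase storage (service role client)
    // 2. Run Gemini/Claude analysis, chunked when payload.useChunking is true
    // 3. Persist results and return a summary
    return { fileName: payload.fileName, status: "completed" };
  },
});
```

The `id` here is what the client utilities reference when triggering, so it must match exactly.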

Troubleshooting

Environment Variable Issues:

  • Make sure SUPABASE_SERVICE_ROLE_KEY is in your .env file
  • The service role key is different from the anon key
  • Get it from: Supabase Dashboard → Settings → API → service_role key

File Access Issues:

  • Tasks use Supabase storage paths, not direct file uploads
  • Files are uploaded to secure-documents bucket first
  • Tasks download files when needed using service role permissions
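
Inside a task, that download step can be sketched as follows (the helper name is hypothetical; the bucket name comes from the doc above):

```typescript
// Hypothetical sketch: download an uploaded document from the
// secure-documents bucket using the service role key, which bypasses
// row-level security so no user session is needed.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // service role, not the anon key
);

export async function downloadDocument(storagePath: string): Promise<Blob> {
  const { data, error } = await supabase.storage
    .from("secure-documents")
    .download(storagePath);

  if (error) throw new Error(`Download failed: ${error.message}`);
  return data; // Blob containing the file contents
}
```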

Production Deployment

  1. Deploy to trigger.dev cloud:

    npx trigger.dev@latest deploy
    
  2. Set environment variables in trigger.dev dashboard

  3. Update API endpoints in your frontend to use:

    • /api/sor-analysis-trigger instead of /api/sor-analysis
    • /api/task-status/[taskId] for checking progress
  4. Set up webhooks (optional) for real-time status updates

Next Steps

  1. Update frontend to use new trigger.dev endpoints ✅
  2. Implement real task status checking using trigger.dev REST API
  3. Add webhooks for real-time updates
  4. Migrate other AI operations (permit analysis, document analysis)
  5. Set up monitoring and alerting

Dashboard

Visit your trigger.dev dashboard: https://cloud.trigger.dev/projects/v3/proj_mayvxkeadyyhrdtyoeno

Migration Checklist

  • Initialize trigger.dev project ✅
  • Create SOR analysis task ✅
  • Create permit analysis task ✅
  • Create document analysis task ✅
  • Create client utilities ✅
  • Create new API routes ✅
  • Run development server ✅
  • Fix environment variable loading ✅
  • Update frontend to use new endpoints ✅
  • Implement real-time status checking
  • Deploy to production
  • Set up monitoring and alerts