Installing the App in Nautobot

Here you will find detailed instructions on how to install and configure the AI Ops App within your Nautobot environment.

Prerequisites

  • The app is compatible with Nautobot 2.4.0 and higher (tested up to 3.x.x).
  • Databases supported: PostgreSQL, MySQL
  • Redis instance required for conversation checkpointing and LangGraph integration
  • At least one LLM provider configured (Ollama, OpenAI, Azure AI, Anthropic, HuggingFace, or Custom)

Note

Please check the dedicated page for a full compatibility matrix and the deprecation policy.

Access Requirements

The app requires external access to:

  • LLM Providers: depending on your configured provider
      • Azure OpenAI: *.openai.azure.com (HTTPS)
      • OpenAI: api.openai.com (HTTPS)
      • Anthropic: api.anthropic.com (HTTPS)
      • HuggingFace: api-inference.huggingface.co (HTTPS)
      • Ollama: local or network-accessible Ollama instance
      • Custom: your custom LLM provider endpoint
  • MCP Servers (if configured): HTTP- or STDIO-based servers, depending on your MCP server setup

Install Guide

Note

Apps can be installed from the Python Package Index or locally. See the Nautobot documentation for more details. The pip package name for this app is nautobot-ai-ops.

The app is available as a Python package via PyPI and can be installed with pip:

pip install nautobot-ai-ops

To ensure AI Ops is automatically re-installed during future upgrades, create a file named local_requirements.txt (if not already existing) in the Nautobot root directory (alongside requirements.txt) and list the nautobot-ai-ops package:

echo nautobot-ai-ops >> local_requirements.txt

Once installed, the app needs to be enabled in your Nautobot configuration. The following shows the additional configuration to add to your nautobot_config.py file:

  • Append "ai_ops" to the PLUGINS list.
  • Append the "ai_ops" dictionary to the PLUGINS_CONFIG dictionary and override any defaults.
# In your nautobot_config.py
PLUGINS = ["ai_ops"]

# PLUGINS_CONFIG = {
#   "ai_ops": {
#     ADD YOUR SETTINGS HERE
#   }
# }

Once the Nautobot configuration is updated, run the Post Upgrade command to apply migrations and clear any cache:

nautobot-server post_upgrade

Then restart the Nautobot services (if necessary), which may include:

  • Nautobot
  • Nautobot Workers
  • Nautobot Scheduler

sudo systemctl restart nautobot nautobot-worker nautobot-scheduler

App Configuration

The AI Ops App requires minimal configuration in nautobot_config.py. The app uses Nautobot's existing infrastructure (PostgreSQL, Redis) and supports optional configuration for chat session management.

# In your nautobot_config.py
PLUGINS = ["ai_ops"]

PLUGINS_CONFIG = {
    "ai_ops": {
        # Optional: Configure chat session TTL (Time-To-Live) in minutes
        # Chat sessions expire after this period of inactivity or message age
        # Default: 5 minutes
        "chat_session_ttl_minutes": 5,

        # Optional: Configure checkpoint retention for cleanup jobs
        # Used by Redis/PostgreSQL checkpoint cleanup (if migrated from MemorySaver)
        # Default: 7 days
        "checkpoint_retention_days": 7,
    }
}

Configuration Options

chat_session_ttl_minutes

Controls how long chat sessions persist before automatic cleanup:

  • Frontend: Messages in browser localStorage are filtered on page load
  • Backend: MemorySaver checkpoints are cleaned up every 5 minutes via a scheduled job
  • Inactivity Timer: Frontend auto-clears chat after this period of no user activity
  • Default: 5 minutes
  • Grace Period: Adds 30 seconds to prevent race conditions between frontend/backend

Example Use Cases:

  • Development/Testing: Set to 1-2 minutes for faster cleanup
  • Production: Set to 10-15 minutes for a better user experience
  • High-security environments: Set to 5 minutes or less

# Example: Extend session to 15 minutes for production
PLUGINS_CONFIG = {
    "ai_ops": {
        "chat_session_ttl_minutes": 15,
    }
}
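The TTL and grace-period behavior described above can be sketched as follows. This is a minimal illustration under stated assumptions; the helper name and signature are hypothetical, not the app's actual implementation:

```python
from datetime import datetime, timedelta, timezone

# Mirrors the 30-second buffer described above; purely illustrative.
GRACE = timedelta(seconds=30)

def is_session_expired(last_activity, ttl_minutes, now=None):
    """Return True once a session's age exceeds the TTL plus the grace period."""
    now = now or datetime.now(timezone.utc)
    return now - last_activity > timedelta(minutes=ttl_minutes) + GRACE
```

With a 5-minute TTL, a session last active 5 minutes ago is still inside the grace window, while one last active 6 minutes ago is expired.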

checkpoint_retention_days

Controls retention for persistent checkpoint storage (Redis/PostgreSQL):

  • Current Implementation: Uses MemorySaver (in-memory, session-based)
  • Future Use: When migrated to Redis Stack or PostgreSQL checkpointing
  • Default: 7 days
  • Note: Not currently enforced with MemorySaver implementation

Adjusting Cleanup Schedule

The chat session cleanup job runs every 5 minutes by default. This is separate from the TTL configuration:

  • Cleanup Schedule: How often the job checks for expired sessions
  • TTL (chat_session_ttl_minutes): What age of sessions to delete

When to Adjust:

If you increase chat_session_ttl_minutes significantly (e.g., 1+ hours), you should adjust the cleanup schedule to avoid unnecessary checks:

  1. Navigate to Jobs > Scheduled Jobs
  2. Find "Chat Session Cleanup"
  3. Click Edit
  4. Update the Crontab schedule:
      • For 1-hour TTL: 0 * * * * (hourly)
      • For 6-hour TTL: 0 */6 * * * (every 6 hours)
      • For 24-hour TTL: 0 0 * * * (daily)
  5. Click Update

Recommendation: Keep the cleanup interval at or below your TTL value so expired sessions are removed promptly without unnecessary job runs.

Database Configuration

The app automatically creates all required tables during the migration process. No manual database configuration is needed beyond Nautobot's standard setup.

Redis Configuration

The app uses Redis for conversation checkpointing through LangGraph. Ensure Redis is configured in your nautobot_config.py:

# Redis configuration (shared with Nautobot)
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://localhost:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}

# Celery configuration (uses Redis)
CELERY_BROKER_URL = "redis://localhost:6379/1"

Environment Variables

For production deployments, configure the following environment variables:

  • NAUTOBOT_REDIS_HOST (required): Redis server hostname, e.g. localhost or redis.internal.com
  • NAUTOBOT_REDIS_PORT (required): Redis server port, e.g. 6379
  • NAUTOBOT_REDIS_PASSWORD (optional): Redis password, if required, e.g. your-secure-password
  • LANGGRAPH_REDIS_DB (optional): Redis database number for checkpoints, default 2
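As an illustration of how these variables combine, the following sketch assembles a standard redis:// connection URL for the checkpoint database. The function name is hypothetical; only the variable names and defaults come from the table above:

```python
import os

def langgraph_redis_url(env=os.environ):
    """Assemble a redis:// URL from the environment variables above (illustrative helper)."""
    host = env.get("NAUTOBOT_REDIS_HOST", "localhost")
    port = env.get("NAUTOBOT_REDIS_PORT", "6379")
    password = env.get("NAUTOBOT_REDIS_PASSWORD", "")
    db = env.get("LANGGRAPH_REDIS_DB", "2")  # default checkpoint database
    auth = f":{password}@" if password else ""
    return f"redis://{auth}{host}:{port}/{db}"
```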

LAB Environment Variables (Development Only)

For local development (LAB environment), you can use environment variables instead of database configuration:

  • AZURE_OPENAI_API_KEY (required in LAB): Azure OpenAI API key, e.g. your-api-key
  • AZURE_OPENAI_ENDPOINT (required in LAB): Azure OpenAI endpoint URL, e.g. https://your-resource.openai.azure.com/
  • AZURE_OPENAI_DEPLOYMENT_NAME (required in LAB): Model deployment name, e.g. gpt-4o
  • AZURE_OPENAI_API_VERSION (optional): API version, e.g. 2024-02-15-preview

Production Configuration

In production (NONPROD/PROD environments), LLM models should be configured through the Nautobot UI rather than environment variables. See Post-Installation Configuration for configuration steps.

Post-Installation Configuration

After installing the app, you need to configure the LLM providers and models. The configuration follows this hierarchy:

  1. LLM Providers - Define available provider types (Ollama, OpenAI, Azure AI, Anthropic, HuggingFace, Custom)
  2. LLM Models - Create specific model deployments using a provider
  3. Middleware Types (Optional) - Define middleware for request/response processing
  4. LLM Middleware (Optional) - Apply middleware to specific models
  5. MCP Servers (Optional) - Configure Model Context Protocol servers for extended capabilities

1. Create LLM Provider

First, define which LLM provider you'll use:

  1. Navigate to AI Platform > LLM > LLM Providers
  2. Click + Add
  3. Select a provider:
      • Name: Choose from Ollama, OpenAI, Azure AI, Anthropic, HuggingFace, or Custom
      • Description: Optional description of the provider setup
      • Documentation URL: Optional link to provider documentation
      • Config Schema: Provider-specific configuration (e.g., Azure API version, base URL)
      • Is Enabled: Check to enable this provider
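As an example, a Config Schema for an Azure AI provider might carry the API version and base URL mentioned above. The keys shown are illustrative assumptions; the exact schema your deployment expects may differ:

```python
# Illustrative Config Schema for an Azure AI provider; actual keys
# depend on your provider and deployment.
azure_config_schema = {
    "api_version": "2024-02-15-preview",
    "base_url": "https://your-resource.openai.azure.com/",
}
```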

2. Create LLM Model

Create at least one LLM Model for your selected provider:

  1. Navigate to AI Platform > LLM > LLM Models
  2. Click + Add
  3. Configure:
      • LLM Provider: Select the provider created in step 1
      • Name: Model name (e.g., gpt-4o, llama2, or your Azure deployment name)
      • Description: Optional description of the model's capabilities
      • Model Secret Key: (Production only) Name of the Nautobot Secret containing the API key
      • Endpoint: Model endpoint URL (required for some providers)
      • API Version: API version string (e.g., 2024-02-15-preview for Azure)
      • Is Default: Check to make this the default model
      • Temperature: Model temperature setting (0.0-2.0, default 0.0)
      • Cache TTL: Cache duration in seconds (minimum 60)
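The Temperature and Cache TTL constraints above can be captured in a small validation sketch. The helper is hypothetical, not app code; only the ranges come from the field descriptions:

```python
def validate_model_settings(temperature=0.0, cache_ttl=60):
    """Check the field constraints described above (hypothetical helper)."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("Temperature must be between 0.0 and 2.0")
    if cache_ttl < 60:
        raise ValueError("Cache TTL must be at least 60 seconds")
    return True
```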

3. Create Secrets (Production Only)

For production deployments, store API keys securely in Nautobot Secrets:

  1. Navigate to Secrets > Secrets
  2. Click + Add
  3. Create a new Secret:
      • Name: azure_gpt4_api_key (or your chosen name; must match the model's Model Secret Key)
      • Provider: Choose the appropriate provider
      • Value: Your Azure OpenAI API key (or other provider credentials)
  4. Reference this Secret name in your LLM Model's Model Secret Key field

4. Configure Middleware Types (Optional)

To add request/response processing capabilities:

  1. Navigate to AI Platform > Middleware > Middleware Types
  2. Click + Add
  3. Define the middleware:
      • Name: Middleware class name (e.g., SummarizationMiddleware; auto-suffixed with "Middleware")
      • Is Custom: Check if this is a custom middleware (leave unchecked for built-in LangChain middleware)
      • Description: What this middleware does

5. Apply LLM Middleware (Optional)

Apply middleware to specific models:

  1. Navigate to AI Platform > Middleware > LLM Middleware
  2. Click + Add
  3. Configure:
      • LLM Model: Select the model to apply middleware to
      • Middleware: Select the middleware type
      • Config: JSON configuration for the middleware
      • Config Version: LangChain version compatibility (default: 1.1.0)
      • Is Active: Check to enable this middleware
      • Is Critical: Check if initialization should fail when this middleware fails
      • Priority: Execution order (1-100; lower executes first)
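To illustrate how Priority and Is Active interact, here is a sketch of the selection and ordering described above. The record names and data shapes are hypothetical:

```python
# Hypothetical middleware records; only active entries run, lowest priority first.
middlewares = [
    {"name": "LoggingMiddleware", "priority": 50, "is_active": True},
    {"name": "SummarizationMiddleware", "priority": 10, "is_active": True},
    {"name": "RetryMiddleware", "priority": 20, "is_active": False},
]

execution_order = [
    m["name"]
    for m in sorted(middlewares, key=lambda m: m["priority"])
    if m["is_active"]
]
# Inactive entries are skipped; the rest execute in ascending priority order.
```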

6. Configure MCP Servers (Optional)

To extend agent capabilities with Model Context Protocol servers:

  1. Navigate to AI Platform > Middleware > MCP Servers
  2. Click + Add
  3. Configure:
      • Name: Unique name for the server
      • Status: Set to Active (or use other status options)
      • Protocol: STDIO or HTTP
      • URL: Base URL for the MCP server
      • MCP Endpoint: Path to the MCP endpoint (default: /mcp)
      • Health Check: Path to the health check endpoint (default: /health)
      • Description: Optional description
      • MCP Type: Internal or External
  4. Click the health check button to verify server connectivity
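The URL and Health Check fields combine into the probe target roughly like this. This is a sketch with a hypothetical helper name; the app's actual request logic may differ:

```python
from urllib.parse import urljoin

def mcp_health_url(base_url, health_path="/health"):
    """Join the server's base URL with its health check path (illustrative helper)."""
    return urljoin(base_url, health_path)
```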

7. Schedule Checkpoint Cleanup (Optional)

To prevent Redis checkpoint storage from growing indefinitely:

  1. Navigate to Jobs > Jobs
  2. Find AI Agents > Cleanup Old Checkpoints
  3. Click Schedule Job
  4. Configure to run daily or weekly

Configuration Examples

Local Development (LAB Environment)

For local development, you can skip database configuration and use environment variables:

# .env file
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o
AZURE_OPENAI_API_VERSION=2024-02-15-preview

The app automatically detects LAB environment based on hostname and uses these variables.
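Hostname-based detection can be imagined along these lines. The marker strings and function are purely illustrative; the app's real detection logic is not documented here:

```python
def is_lab_hostname(hostname, markers=("lab", "local", "dev")):
    """Guess whether a hostname looks like a LAB host (illustrative only)."""
    return any(marker in hostname.lower() for marker in markers)
```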

Production (NONPROD/PROD Environment)

For production deployments:

  • Configure LLM Providers and Models through the Nautobot UI
  • Store API keys in Nautobot Secrets (not environment variables)
  • Configure Redis for conversation checkpointing
  • Set up multiple models for redundancy

See Getting Started for additional configuration instructions and best practices.