Frequently Asked Questions

General Questions

What is the AI Ops App?

The AI Ops App is a Nautobot plugin that integrates Large Language Models (LLMs) from Azure OpenAI with Nautobot to provide AI-powered assistance for network operations tasks. It uses the LangChain and LangGraph frameworks to create conversational AI agents that can be extended with Model Context Protocol (MCP) servers.

What Azure OpenAI models are supported?

The app supports any Azure OpenAI deployment, including:

  • GPT-4
  • GPT-4o ("omni")
  • GPT-4-turbo
  • GPT-3.5-turbo
  • Any custom Azure OpenAI deployment

You configure these through the LLM Models interface in Nautobot.

Do I need Azure OpenAI to use this app?

No, although Azure OpenAI is the primary enterprise-grade option and most examples in this documentation assume an Azure OpenAI subscription with at least one deployed model. The app also supports other providers such as Ollama, OpenAI's public API, Anthropic, and HuggingFace; see "Can I use different LLM providers besides Azure OpenAI?" under Advanced Questions.

What is MCP (Model Context Protocol)?

MCP (Model Context Protocol) is a protocol that allows AI agents to connect to external services and tools. MCP servers provide additional capabilities to the AI agent, such as access to databases, APIs, file systems, or custom business logic.

MCP servers are optional but extend the agent's capabilities beyond basic conversation.

Installation and Configuration

How do I get started with the app?

  1. Install the app via pip: pip install nautobot-ai-ops
  2. Add "ai_ops" to the PLUGINS list in nautobot_config.py
  3. Run nautobot-server post_upgrade
  4. Restart Nautobot services
  5. Configure at least one LLM Model through the UI
  6. Access the AI Chat Assistant from the menu
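
A minimal sketch of the nautobot_config.py change in step 2 (merge into your existing settings rather than replacing them):

# nautobot_config.py
PLUGINS = [
    # ... any other installed apps ...
    "ai_ops",
]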

See the Installation Guide for detailed instructions.

Where should I store Azure OpenAI API keys?

In production environments, always store API keys in Nautobot Secrets:

  1. Navigate to Secrets > Secrets in Nautobot
  2. Create a new Secret with your API key
  3. Reference the Secret name in your LLM Model configuration

In lab and development environments you can use environment variables, but Secrets are recommended for all environments.

How do I configure multiple LLM models?

  1. Navigate to AI Platform > Configuration > LLM Models
  2. Create each model with its specific configuration
  3. Mark one model as "default" by checking the "Is Default" checkbox
  4. The default model is used when no specific model is requested

Different models can be used for different purposes (e.g., fast model for quick queries, detailed model for analysis).

What Redis configuration is required?

The app uses Redis for conversation checkpointing. Configuration:

  • Uses the same Redis instance as Nautobot's cache and Celery
  • Requires a separate database number (default: DB 2)
  • Configured via environment variables:
      • NAUTOBOT_REDIS_HOST
      • NAUTOBOT_REDIS_PORT
      • NAUTOBOT_REDIS_PASSWORD
      • LANGGRAPH_REDIS_DB (defaults to "2")

No additional Redis infrastructure is needed beyond what Nautobot already uses.
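
As a quick connectivity check, here is a minimal sketch using the redis-py client and the environment variables listed above (the host and port defaults are illustrative, not the app's own defaults):

import os
import redis

# Connect to the checkpoint database using the same variables the app reads.
client = redis.Redis(
    host=os.environ.get("NAUTOBOT_REDIS_HOST", "localhost"),
    port=int(os.environ.get("NAUTOBOT_REDIS_PORT", "6379")),
    password=os.environ.get("NAUTOBOT_REDIS_PASSWORD") or None,
    db=int(os.environ.get("LANGGRAPH_REDIS_DB", "2")),
)
print(client.ping())  # True means the checkpoint DB is reachable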

Usage Questions

How do I use the AI Chat Assistant?

  1. Navigate to AI Platform > Chat & Assistance > AI Chat Assistant
  2. Type your question in the input box
  3. Press Enter or click Send
  4. The AI responds with assistance
  5. Continue the conversation - context is maintained

See Getting Started for detailed usage instructions.

Does the AI remember previous messages?

Yes, the app maintains conversation history within a session:

  • Each browser session has a unique thread ID
  • Messages within that session are remembered
  • Context is maintained across multiple turns
  • Starting a new session (new browser tab, clearing cookies) starts fresh

How long is conversation history kept?

Conversation history is stored in Redis with configurable retention:

  • By default, checkpoints older than 30 days are eligible for cleanup
  • Run the "Cleanup Old Checkpoints" job to remove old conversations
  • Schedule the job to run automatically (recommended: daily or weekly)
  • Active conversations are never removed

Can I use the AI agent via API?

Yes, the app provides a REST API endpoint for programmatic access:

POST /plugins/ai-ops/api/chat/
Content-Type: application/json

{
  "message": "Your question here"
}
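
A minimal Python sketch of the same call; the hostname is a placeholder, and the Token header follows Nautobot's standard API token scheme:

import requests

# POST a single question to the chat endpoint and print the response body.
response = requests.post(
    "https://nautobot.example.com/plugins/ai-ops/api/chat/",
    headers={"Authorization": "Token YOUR_API_TOKEN"},
    json={"message": "Your question here"},
    timeout=60,
)
print(response.json())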

See External Interactions for API documentation and examples.

What permissions are required to use the app?

  • ai_ops.view_llmmodel - View LLM models
  • ai_ops.add_llmmodel - Create LLM models
  • ai_ops.change_llmmodel - Edit LLM models
  • ai_ops.delete_llmmodel - Delete LLM models
  • ai_ops.view_mcpserver - View MCP servers (also grants chat access)
  • ai_ops.add_mcpserver - Create MCP servers
  • ai_ops.change_mcpserver - Edit MCP servers
  • ai_ops.delete_mcpserver - Delete MCP servers

Typically, users need view_mcpserver permission to access the chat interface.

MCP Server Questions

What are MCP servers used for?

MCP servers extend the AI agent's capabilities by providing:

  • Additional tools the agent can use
  • Access to external systems and APIs
  • Custom business logic and workflows
  • Integration with internal services

Examples: code execution, file access, database queries, monitoring system integration.

Are MCP servers required?

No, MCP servers are optional. The AI Chat Assistant works without them, providing conversational assistance based on the LLM's training. MCP servers are only needed if you want to extend the agent with additional capabilities.

How do I know if an MCP server is working?

Check the MCP server status:

  1. Navigate to AI Platform > Configuration > MCP Servers
  2. Check the Status column
  3. "Healthy" status means the server is working
  4. "Failed" status indicates connection issues

Health checks run automatically. Failed servers are excluded from agent operations.

Can I use external MCP servers?

Yes, you can configure both:

  • Internal MCP servers: Hosted within your infrastructure
  • External MCP servers: Third-party or cloud-hosted services

Set the "MCP Type" field accordingly when creating the server configuration.

What protocols do MCP servers use?

The app supports:

  • HTTP: RESTful MCP servers (most common)
  • STDIO: Process-based MCP servers

Most deployments use HTTP MCP servers.

Troubleshooting

The chat interface isn't responding. What should I check?

  1. Verify LLM Model Configuration:
      • At least one model exists
      • One model is marked as default
      • API credentials are correct
  2. Check Nautobot Logs:
      • Look for error messages
      • Check for API key or connection issues
  3. Test Azure OpenAI Connectivity:
      • Verify the endpoint URL is accessible
      • Confirm the API key has the proper permissions
      • Check for Azure service issues
  4. Verify Permissions:
      • Confirm the user has the ai_ops.view_mcpserver permission

My MCP server shows "Failed" status. How do I fix it?

  1. Verify URL Accessibility:
      • Test the URL from the Nautobot server
      • Ensure network connectivity
      • Check firewall rules
  2. Check the Health Endpoint:
      • Verify the health check path is correct
      • Test: curl https://mcp-server.example.com/health
      • The health check should return HTTP 200
  3. Review Server Logs:
      • Check the MCP server logs for errors
      • Look for authentication issues
      • Verify the server is running
  4. Update Status Manually (if needed):
      • Edit the MCP server
      • Change its status to "Maintenance" while troubleshooting
      • Change it back to "Healthy" when fixed

Conversation history is not persisting. What's wrong?

  1. Check Redis Configuration:
      • Verify Redis is running
      • Test Redis connectivity (the sketch under "What Redis configuration is required?" works here too)
      • Check the LANGGRAPH_REDIS_DB setting
  2. Review Environment Variables:
      • NAUTOBOT_REDIS_HOST
      • NAUTOBOT_REDIS_PORT
      • NAUTOBOT_REDIS_PASSWORD
  3. Check the Redis Database:
      • Ensure the database number is not in use by another service
      • Default: DB 2 (DB 0: cache, DB 1: Celery)
  4. Review Logs:
      • Look for checkpoint-related errors
      • Check for Redis connection errors

I'm getting Azure OpenAI rate limit errors. What should I do?

Azure OpenAI has rate limits based on your subscription:

  1. Check the Azure Portal:
      • Review your quota and rate limits
      • Monitor current usage
  2. Request a Quota Increase:
      • Submit a request in the Azure Portal
      • Specify your use case and needs
  3. Optimize Usage:
      • Use a lower temperature for deterministic responses (faster)
      • Configure multiple models to distribute load
      • Implement retry logic in custom integrations (see the sketch after this list)
  4. Contact Azure Support:
      • For persistent rate limit issues
      • To discuss enterprise quotas
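
A minimal retry-with-backoff sketch for custom integrations against the chat endpoint; the URL and token handling are illustrative, and production code should also honor any Retry-After header the service sends:

import time
import requests

def chat_with_retry(url, token, message, retries=3):
    """POST to the chat endpoint, backing off on HTTP 429 responses."""
    for attempt in range(retries):
        response = requests.post(
            url,
            headers={"Authorization": f"Token {token}"},
            json={"message": message},
            timeout=60,
        )
        if response.status_code != 429:  # not rate-limited
            response.raise_for_status()
            return response.json()
        time.sleep(2 ** attempt)  # back off 1s, 2s, 4s between attempts
    response.raise_for_status()  # all attempts were rate-limited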

The AI is giving incorrect or outdated information. Why?

LLM models have limitations:

  1. Training Data Cutoff:
      • Models are trained on data up to a certain date
      • They don't have real-time information
      • Check your model's training data date
  2. Hallucinations:
      • LLMs can generate plausible but incorrect information
      • Always verify critical information
      • Use MCP servers for real-time data access
  3. Context Limitations:
      • Very long conversations may exceed the context window
      • Start a new conversation for fresh context
      • Break complex tasks into smaller conversations
  4. Model Selection:
      • Different models have different capabilities
      • GPT-4 is generally more accurate than GPT-3.5
      • Adjust model selection based on task requirements

Why isn't my custom system prompt being used?

If your custom system prompt isn't being applied to the AI agent:

  1. Check Prompt Status:
      • Navigate to AI Platform > LLM > System Prompts
      • Verify your prompt has status "Approved"
      • Only approved prompts are used by agents
  2. Check Prompt Assignment:
      • If you assigned the prompt to a specific model, verify the assignment
      • Edit the LLM Model and confirm the System Prompt dropdown selection
      • The model must be the one being used (check whether it is the default)
  3. Verify Prompt Content:
      • For database prompts: ensure prompt_text is not empty
      • For file-based prompts: verify prompt_file_name matches the file
  4. Check Fallback Behavior:
      • If no model-specific prompt is assigned, the agent uses global file-based prompts
      • If no approved prompts exist, the agent falls back to code-based defaults
  5. Review Logs:
      • Check Nautobot logs for prompt loading messages
      • Look for errors like "Failed to load file-based prompt"

How do I know which prompt my model is using?

To determine the active system prompt:

  1. Check Nautobot Logs:
      • Enable DEBUG logging for ai_ops (see the sketch after this list)
      • Look for log messages like "Using system prompt: [name] (model=[model_name])"
      • These messages indicate which prompt was loaded
  2. Check Model Configuration:
      • Navigate to AI Platform > Configuration > LLM Models
      • View the model's detail page
      • The "System Prompt" field shows any assigned prompt
  3. Understand the Fallback Hierarchy:
      • Priority 1: the model's assigned prompt (if its status is "Approved")
      • Priority 2: a global file-based prompt (is_file_based=True, status="Approved")
      • Priority 3: the code fallback (get_multi_mcp_system_prompt())
  4. Test via Chat:
      • Ask the AI "What is your role?" or a similar question
      • The response should reflect the active prompt's instructions
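
A minimal sketch of enabling DEBUG logging in nautobot_config.py; this is standard Django-style logging configuration, and the "ai_ops" logger name is assumed to match the app's package name:

# nautobot_config.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "loggers": {"ai_ops": {"handlers": ["console"], "level": "DEBUG"}},
}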

Performance Questions

How can I improve response times?

  1. Use Faster Models:
      • GPT-4-turbo is faster than GPT-4
      • GPT-3.5-turbo is fastest but less capable
  2. Optimize Temperature:
      • A lower temperature (0.0-0.3) can be faster
      • A higher temperature requires more generation time
  3. Reduce MCP Server Count:
      • Fewer MCP servers mean faster tool discovery
      • Disable unused MCP servers
  4. Monitor Azure Performance:
      • Check Azure OpenAI service status
      • Review your region selection
      • Consider deploying models in multiple regions

How much does it cost to run this app?

Costs depend on Azure OpenAI usage:

  1. Azure OpenAI Charges:
      • Pay per token, for both input and output (see the estimator sketch after this list)
      • Varies by model (GPT-4 is more expensive than GPT-3.5)
      • Check the Azure OpenAI pricing page
  2. Infrastructure Costs:
      • Nautobot hosting (unchanged)
      • Redis (minimal; shared with the existing Nautobot Redis)
      • MCP servers (if self-hosted)
  3. Cost Optimization:
      • Use GPT-3.5-turbo for simple queries
      • Reserve GPT-4 for complex tasks
      • Monitor usage in the Azure Portal
      • Set up budget alerts
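
A rough per-request estimate is input_tokens * input_rate + output_tokens * output_rate. The rates below are placeholders, not current Azure prices; substitute the figures from the pricing page for your model and region:

# Placeholder rates in dollars per token; replace with real pricing.
INPUT_RATE = 0.01 / 1000
OUTPUT_RATE = 0.03 / 1000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single chat turn."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(f"${estimate_cost(1500, 500):.4f}")  # e.g. 1,500 tokens in, 500 out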

Advanced Questions

Can I customize the AI agent's behavior?

Yes! The AI Ops App provides a System Prompt Management feature that allows you to customize agent behavior directly from the Nautobot UI:

  1. Navigate to AI Platform > LLM > System Prompts
  2. Create a new prompt with your desired instructions
  3. Set the status to "Approved" to activate it
  4. Optionally assign it to a specific LLM Model

Key Features:

  • Template Variables: Use {current_date}, {current_month}, and {model_name} for dynamic content
  • Version Tracking: Automatic versioning when the prompt text changes
  • Status Workflow: Only "Approved" prompts are used by agents
  • Model-Specific: Assign different prompts to different models
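
For example, a prompt body using the template variables might read as follows (illustrative text, not a shipped default):

You are a network operations assistant backed by {model_name}.
Today is {current_date}, in the month of {current_month}.
Answer questions about Nautobot data concisely and cite the objects you reference.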

For advanced customization, you can also create file-based prompts in ai_ops/prompts/ for version control.

See the System Prompt Configuration Guide for detailed instructions.

Can I use different LLM providers besides Azure OpenAI?

Yes! The AI Ops App supports multiple LLM providers:

  • Ollama: Free, local LLM inference (great for development)
  • OpenAI: GPT-4, GPT-4o, GPT-3.5-turbo
  • Azure OpenAI: Enterprise-grade with Microsoft SLAs
  • Anthropic: Claude 3 models
  • HuggingFace: Open-source models
  • Custom: Implement your own provider

See the LLM Provider Configuration Guide for setup instructions.

How is conversation data secured?

Security measures:

  1. API Keys: Stored in Nautobot Secrets
  2. Conversation Data: Stored in Redis (encrypted in transit)
  3. Access Control: Nautobot permission system
  4. Audit Trails: All actions logged in Nautobot

Review the External Interactions security section for details.

Can I deploy the app in an air-gapped environment?

Partial air-gapped deployment is possible:

Possible:

  • Installing the app from a pip package downloaded elsewhere
  • Self-hosted MCP servers
  • Internal Redis

Not Possible Without Workarounds:

  • The app requires connectivity to Azure OpenAI endpoints
  • Azure OpenAI does not support on-premises deployment

Workarounds:

  • Use Azure Private Link for Azure OpenAI
  • Configure a proxy for Azure connectivity
  • Consider Azure Government Cloud for sensitive environments

How do I backup conversation history?

Conversation history is stored in Redis:

  1. Redis Backup:
      • Use Redis persistence (RDB or AOF)
      • Take regular Redis backups
      • Include the LANGGRAPH_REDIS_DB database in the backup scope
  2. Cleanup Considerations:
      • Old conversations are removed by the cleanup job
      • Back up before running cleanup if history is important
  3. Alternative Storage:
      • For permanent archival, consider custom development
      • Export conversations to a file or database
      • This is not built in to the current version

Getting Help

Where can I find more documentation?

This documentation set includes the Installation Guide, Getting Started, External Interactions, the System Prompt Configuration Guide, the LLM Provider Configuration Guide, and the Contributing Guide, all referenced throughout this FAQ.

How do I report bugs or request features?

Open an issue on the GitHub repository:

  • Repository: kvncampos/nautobot-ai-ops
  • Include version information
  • Provide reproduction steps
  • Include relevant logs (sanitize sensitive data)

Where can I get support?

  • Internal Team: See Authors and Maintainers for contact information
  • GitHub Issues: For bugs and feature requests on GitHub
  • Nautobot Community: For general Nautobot questions

Can I contribute to the project?

Yes! Contributions are welcome:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

See the Contributing Guide for details.