AI Ops


An App for Nautobot.

Overview

The AI Ops app brings advanced artificial intelligence capabilities to Nautobot through a flexible multi-provider architecture and the Model Context Protocol (MCP). This app provides an intelligent chat assistant that can interact with your Nautobot environment, external MCP servers, and other integrated systems to help automate operational tasks, answer questions, and provide insights based on your network infrastructure data.

At its core, AI Ops leverages LangGraph and LangChain to orchestrate conversations with Large Language Models (LLMs) from multiple providers (Ollama, OpenAI, Azure AI, Anthropic, HuggingFace, and custom providers), maintaining conversation context through checkpointed sessions stored in Redis.

The app supports flexible LLM provider and model management, allowing administrators to define multiple providers and models for various use cases. A powerful middleware system enables request/response processing with features like caching, logging, validation, and retry logic.

The multi-MCP server architecture lets the AI assistant connect to both internal and external MCP servers, providing extensible tool access for network automation, data retrieval, and operational workflows. Production-ready features include automated health monitoring for MCP servers, middleware cache management, automatic status tracking, conversation persistence, and scheduled maintenance tasks to maintain optimal performance.
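
As a rough illustration of this pattern (not the app's actual code), the sketch below wires a LangGraph agent to a Redis-backed checkpointer so that a `thread_id` keys a persistent conversation. It assumes the `langchain-ollama` and `langgraph-checkpoint-redis` packages and a locally served Ollama model; the prompt and thread id are invented for the example.

```python
# Minimal sketch, assuming langchain-ollama, langgraph, and
# langgraph-checkpoint-redis are installed and Ollama runs locally.
from langchain_ollama import ChatOllama
from langgraph.checkpoint.redis import RedisSaver
from langgraph.prebuilt import create_react_agent

with RedisSaver.from_conn_string("redis://localhost:6379") as checkpointer:
    checkpointer.setup()  # create the Redis indices on first use
    agent = create_react_agent(
        model=ChatOllama(model="llama3.1", temperature=0),
        tools=[],  # MCP-provided tools would be injected here
        checkpointer=checkpointer,
    )
    # The thread_id keys the checkpoint, so conversation context
    # survives across requests that reuse the same id.
    config = {"configurable": {"thread_id": "chat-session-42"}}
    reply = agent.invoke(
        {"messages": [("user", "Which devices are in site DC1?")]},
        config,
    )
    print(reply["messages"][-1].content)
```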

Note: This project is actively evolving. We're continuously adding new features, providers, and capabilities. Check the Release Notes for the latest updates and the GitHub Issues for upcoming features.

Key Features

  • Multi-Provider LLM Support: Use models from Ollama (local), OpenAI, Azure AI, Anthropic, or HuggingFace, or implement a custom provider
  • LLM Provider Management: Configure and manage multiple LLM providers with provider-specific settings and handler classes
  • LLM Model Management: Configure multiple models from different providers with varying capabilities, temperature settings, and configurations
  • Middleware System: Apply middleware chains to models for caching, logging, validation, retry logic, rate limiting, and custom processing
  • Priority-Based Middleware Execution: Control middleware execution order (1-100) with pre- and post-processing phases (see the sketch after this list)
  • AI Chat Assistant: Interactive chat interface that understands and responds to natural language queries about your Nautobot environment
  • MCP Server Integration: Connect to internal and external Model Context Protocol servers to extend capabilities with custom tools
  • Automated Health Monitoring: Scheduled health checks for MCP servers with retry logic and automatic cache invalidation
  • Conversation Persistence: Checkpoint-based conversation management using Redis ensures context is maintained across sessions
  • Secure Configuration: API keys and credentials managed through Nautobot's Secret objects, never stored directly
  • Scheduled Tasks: Background jobs for checkpoint cleanup, MCP server health monitoring, and middleware cache management
  • RESTful API: Full API support for programmatic access to all models (providers, models, middleware, MCP servers); a request example appears below
  • Environment-Aware: Supports LAB (local development with Ollama), NONPROD, and PROD environments
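
The priority and phase mechanics might look roughly like the following sketch. All class and function names here are illustrative, not the app's actual middleware API: pre-processing runs in ascending priority order (1 first), post-processing unwinds in reverse, in the usual onion pattern.

```python
# Illustrative only: invented names showing priority-ordered middleware
# with pre- and post-processing phases (not the app's real API).
class BaseMiddleware:
    priority = 50  # 1-100; lower values run earlier in the pre phase

    def pre_process(self, request: dict) -> dict:
        return request

    def post_process(self, response: dict) -> dict:
        return response


class LoggingMiddleware(BaseMiddleware):
    priority = 10  # log before anything else touches the request

    def pre_process(self, request: dict) -> dict:
        print(f"LLM request: {request['prompt']!r}")
        return request


class ValidationMiddleware(BaseMiddleware):
    priority = 20  # reject bad input early

    def pre_process(self, request: dict) -> dict:
        if not request.get("prompt", "").strip():
            raise ValueError("empty prompt")
        return request


def run_chain(middlewares, request, call_model):
    """Pre-process in ascending priority, then unwind post in reverse."""
    ordered = sorted(middlewares, key=lambda m: m.priority)
    for mw in ordered:
        request = mw.pre_process(request)
    response = call_model(request)
    for mw in reversed(ordered):
        response = mw.post_process(response)
    return response


# A stub model keeps the sketch self-contained.
result = run_chain(
    [ValidationMiddleware(), LoggingMiddleware()],
    {"prompt": "List devices in DC1"},
    lambda req: {"text": f"(stub answer to: {req['prompt']})"},
)
print(result["text"])
```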

More screenshots and detailed use cases can be found in the Using the App page in the documentation.
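
For programmatic access, the REST API mentioned above follows standard Nautobot conventions. The endpoint path below is an assumption based on how Nautobot typically mounts app APIs (under /api/plugins/<app-name>/); check the browsable API in your instance for the exact routes.

```python
# Hypothetical example of listing LLM providers over the REST API.
# The /api/plugins/ai-ops/llm-providers/ path is an assumption;
# verify the real route in your Nautobot instance.
import requests

NAUTOBOT_URL = "https://nautobot.example.com"
HEADERS = {
    "Authorization": "Token <your-api-token>",  # standard Nautobot token auth
    "Accept": "application/json",
}

resp = requests.get(
    f"{NAUTOBOT_URL}/api/plugins/ai-ops/llm-providers/",
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
for provider in resp.json()["results"]:
    print(provider["name"])
```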

Requirements

  • Nautobot 2.4.22+
  • Python 3.10 - 3.12
  • Redis (for conversation checkpointing and caching)
  • At least one LLM provider:
      • Ollama (local, free): For development and testing
      • OpenAI API: For OpenAI models (requires API key)
      • Azure OpenAI: For Azure-hosted models (requires subscription)
      • Anthropic API: For Claude models (requires API key)
      • HuggingFace: For HuggingFace models (requires API key)
      • Custom: Implement your own provider handler
  • Optional: MCP servers for extended functionality

Documentation

Full documentation for this App can be found at kvncampos.github.io/nautobot-ai-ops.

Contributing to the Documentation

You can find all the Markdown source for the App documentation under the docs folder in this repository. For simple edits, a Markdown-capable editor is sufficient: clone the repository and edit away.

If you need to view the fully generated documentation site, you can build it with MkDocs. A container hosting the documentation can be started using the invoke commands (details in the Development Environment Guide) and served at http://localhost:8001. While this container is running, any saved changes to the documentation are rebuilt automatically, and pages you currently have open reload in your browser.

Any PRs with fixes or improvements are very welcome!

Questions

For any questions or comments, please check the FAQ first. Feel free to also swing by the Network to Code Slack (channel #nautobot); sign up here if you don't have an account.