About Me

I am an MCSE in Data Management and Analytics, specializing in MS SQL Server, and an MCP in Azure. With over 19 years of experience in the IT industry, I bring expertise in data management, Azure Cloud, data center migration, infrastructure architecture planning, virtualization, and automation. I have a deep passion for driving innovation through infrastructure automation, particularly using Terraform for efficient provisioning. If you're looking for guidance on automating your infrastructure or have questions about Azure, SQL Server, or cloud migration, feel free to reach out. I often write to capture my own experiences and insights for future reference, but I hope that sharing these experiences through my blog will help others on their journey as well. Thank you for reading!

Supercharge Your Terraform Development with Claude Code: Agent + MCP Setup Guide

 


Introduction

Writing infrastructure as code with Terraform requires deep knowledge of provider APIs, best practices, security patterns, and HCL syntax. What if you could have an expert Terraform developer assisting you directly in your terminal, with access to your actual infrastructure and codebase?

This guide shows you how to configure Claude Code with a specialized Terraform agent and MCP (Model Context Protocol) servers to dramatically improve your infrastructure development workflow.

What You'll Get

By the end of this setup, you'll be able to:

  • Write Terraform code faster with expert guidance on HCL syntax and patterns
  • Access your existing infrastructure repos directly through Claude
  • Get real-time best practice recommendations for security, cost, and performance
  • Debug Terraform errors with context-aware assistance
  • Review and refactor existing infrastructure code with AI-powered insights
  • Query cloud provider documentation without leaving your terminal

Prerequisites

Before you begin, ensure you have:

  • Claude Code installed (Installation Guide)
  • Node.js 18+ installed (node --version)
  • Access to your Terraform projects/repositories
  • (Optional) GitHub Personal Access Token for repo access
  • (Optional) Cloud provider credentials for validation

Part 1: Creating the Terraform Expert Agent

Agents in Claude Code are specialized personas that customize Claude's behavior for specific tasks. Think of them as expert colleagues focused on particular domains.

Step 1: Create the Agent

In your terminal, start Claude Code, then open the agents menu by typing the /agents slash command inside the session:

claude
/agents

Or access the agents menu through your Claude Code interface.
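Under the hood, agents are stored as Markdown files with a short frontmatter header. If you'd rather create the file directly, here is a minimal sketch (assuming project-level agents live in .claude/agents/; check the docs for your Claude Code version):

mkdir -p .claude/agents
cat > .claude/agents/terraform-iac-expert.md <<'EOF'
---
name: terraform-iac-expert
description: Terraform Infrastructure as Code Expert
---
(paste the full agent description from Step 2 below)
EOF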

Step 2: Configure the Agent

Agent Identifier (slug):

terraform-iac-expert

Agent Description:

Terraform Infrastructure as Code Expert

This agent is a specialized Terraform developer that writes, reviews, and maintains infrastructure as code using Terraform and OpenTofu. It should be used when you need to:

Primary Use Cases:
- Create new Terraform configurations from scratch for cloud infrastructure (AWS, Azure, GCP, multi-cloud)
- Refactor existing Terraform code to improve structure, modularity, and best practices
- Debug Terraform errors, plan failures, or state file issues
- Write reusable Terraform modules with proper variable definitions, outputs, and documentation
- Convert manual infrastructure setups or CloudFormation/ARM templates to Terraform
- Implement infrastructure changes following GitOps workflows
- Review and optimize Terraform code for security, cost, and performance
- Generate terraform.tfvars files and manage environment-specific configurations

Technical Capabilities:
- Write idiomatic HCL (HashiCorp Configuration Language) following official style guidelines
- Implement proper resource dependencies, data sources, and lifecycle management
- Use Terraform state management best practices (remote backends, state locking, workspaces)
- Apply security best practices (least privilege IAM, encryption, secret management)
- Create well-structured modules with clear interfaces and comprehensive variable validation
- Implement dynamic blocks, for_each, count, and conditional logic appropriately
- Use locals, functions, and expressions to make code DRY and maintainable
- Handle provider configurations, version constraints, and provider aliasing
- Implement proper tagging strategies and naming conventions
- Generate comprehensive documentation and examples for modules
- Suggest testing strategies using terraform validate, plan, and external tools
- Recommend CI/CD pipeline configurations for Terraform automation
- Identify potential issues with drift detection and state management
- Optimize provider configurations for performance and cost

When to Use This Agent:
- Starting a new infrastructure project that needs Terraform setup
- Migrating existing infrastructure to infrastructure as code
- Troubleshooting "terraform plan" or "terraform apply" errors
- Needing to add new cloud resources with proper Terraform patterns
- Reviewing pull requests for Terraform changes
- Updating deprecated resources or provider versions
- Setting up CI/CD pipelines for Terraform automation
- Creating documentation for Terraform projects
- Implementing compliance and governance guardrails in code
- Converting between different IaC formats (CloudFormation, ARM, Pulumi)
- Performing cost optimization analysis on infrastructure
- Implementing disaster recovery and backup strategies

Working Style:
The agent will ask clarifying questions about cloud provider, environment requirements, existing state, and architectural constraints before writing code. It provides complete, working configurations with comments explaining key decisions, includes relevant outputs and variables, follows the DRY principle, suggests testing approaches, highlights security considerations, and recommends best practices for state management and team collaboration.

Step 3: Save and Verify

Save the agent configuration. You should now be able to invoke it by asking for it explicitly in a session:

"Use the terraform-iac-expert agent to review my VPC module"

Part 2: Setting Up MCP Servers for Terraform

MCP servers extend Claude's capabilities by connecting it to external tools, APIs, and data sources. For Terraform development, we'll set up access to your filesystem, GitHub repositories, and documentation.

Understanding MCP Architecture

┌─────────────────┐
│   Claude Code   │
│  (with Agent)   │
└────────┬────────┘
         │
         ├──────────┐
         │          │
    ┌────▼────┐ ┌──▼──────────┐
    │   MCP   │ │     MCP     │
    │  Server │ │   Server    │
    │  (FS)   │ │  (GitHub)   │
    └────┬────┘ └──┬──────────┘
         │         │
    ┌────▼────┐ ┌──▼──────────┐
    │  Local  │ │   GitHub    │
    │  Files  │ │    API      │
    └─────────┘ └─────────────┘

Step 1: Locate Your MCP Configuration File

For Claude Desktop (for testing):

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

For Claude Code: Check the official documentation at https://docs.claude.com for the latest configuration location, as Claude Code is evolving rapidly.

Step 2: Configure Essential MCP Servers

Create or edit your configuration file with the following setup:

{
  "mcpServers": {
    "terraform-workspace": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/terraform/projects",
        "/path/to/your/infrastructure/repos"
      ]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_GITHUB_TOKEN>"
      }
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "<YOUR_BRAVE_API_KEY>"
      }
    }
  }
}
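If you prefer the command line, recent Claude Code builds can register the same servers without hand-editing JSON. A sketch (exact flags may vary by version):

claude mcp add terraform-workspace -- npx -y @modelcontextprotocol/server-filesystem /path/to/your/terraform/projects
claude mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=<YOUR_GITHUB_TOKEN> -- npx -y @modelcontextprotocol/server-github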

Step 3: Set Up GitHub Access (Recommended)

  1. Generate a GitHub Personal Access Token:

    • Go to GitHub Settings → Developer Settings → Personal Access Tokens → Tokens (classic)
    • Click "Generate new token (classic)"
    • Give it a descriptive name: "Claude Code Terraform MCP"
    • Select scopes: repo (Full control of private repositories)
    • Generate and copy the token
  2. Add to your configuration: Replace <YOUR_GITHUB_TOKEN> in the config above with your actual token
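To avoid pasting the token into the file by hand, you can inject it from an environment variable with jq. A sketch, assuming GITHUB_TOKEN is exported in your shell and using the macOS config path:

CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
jq --arg tok "$GITHUB_TOKEN" \
  '.mcpServers.github.env.GITHUB_PERSONAL_ACCESS_TOKEN = $tok' \
  "$CONFIG" > "$CONFIG.tmp" && mv "$CONFIG.tmp" "$CONFIG"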

Step 4: Configure Filesystem Access

Update the paths in the terraform-workspace configuration to point to your actual Terraform project directories:

"terraform-workspace": {
  "command": "npx",
  "args": [
    "-y",
    "@modelcontextprotocol/server-filesystem",
    "/home/youruser/projects/terraform-infrastructure",
    "/home/youruser/projects/terraform-modules",
    "/home/youruser/company/infrastructure-as-code"
  ]
}

Security Note: Only grant access to directories you want Claude to read. The filesystem MCP server will only have access to explicitly listed paths.

Step 5: Optional - Set Up Brave Search for Documentation

To enable Claude to search Terraform documentation and provider registries:

  1. Get a Brave Search API key from https://brave.com/search/api/
  2. Add it to the configuration above

Step 6: Restart and Verify

  1. Save your configuration file
  2. Restart Claude Code/Desktop completely
  3. Verify MCP servers are loaded:
# In Claude Code, ask:
"What MCP servers are currently available?"

# You should see responses indicating:
# - filesystem (terraform-workspace)
# - github
# - brave-search (if configured)
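You can also verify from your shell with the Claude Code CLI (assuming a recent version):

claude mcp list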

Part 3: Advanced MCP Setup - Custom Terraform OSS Server

For teams using open-source Terraform, you can create a custom MCP server that provides direct access to Terraform CLI commands, state inspection, and configuration analysis.

Prerequisites

  • Terraform CLI installed and in PATH (terraform version should work)
  • Node.js development environment
  • Access to your Terraform project directories

Step 1: Create MCP Server Project

mkdir terraform-oss-mcp
cd terraform-oss-mcp
npm init -y
npm install @modelcontextprotocol/sdk

Step 2: Implement the MCP Server

Create index.js:

#!/usr/bin/env node
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { CallToolRequestSchema, ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import { spawn, exec } from 'child_process';
import { promisify } from 'util';
import fs from 'fs';
import path from 'path';

const execAsync = promisify(exec);

class TerraformOSSServer {
  constructor() {
    this.workingDir = process.env.TERRAFORM_WORKING_DIR || process.cwd();
    
    this.server = new Server(
      {
        name: 'terraform-oss-mcp',
        version: '1.0.0',
      },
      {
        capabilities: {
          tools: {},
        },
      }
    );

    this.setupToolHandlers();
    this.server.onerror = (error) => console.error('[MCP Error]', error);
    process.on('SIGINT', async () => {
      await this.server.close();
      process.exit(0);
    });
  }

  setupToolHandlers() {
    this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        {
          name: 'terraform_init',
          description: 'Initialize a Terraform working directory. Downloads providers and modules.',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory to initialize (defaults to current working directory)',
              },
              upgrade: {
                type: 'boolean',
                description: 'Upgrade modules and plugins',
                default: false,
              },
            },
          },
        },
        {
          name: 'terraform_validate',
          description: 'Validate the Terraform configuration files in a directory',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory to validate',
              },
            },
          },
        },
        {
          name: 'terraform_plan',
          description: 'Generate and show an execution plan',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform files',
              },
              var_file: {
                type: 'string',
                description: 'Path to variable file (tfvars)',
              },
              out: {
                type: 'string',
                description: 'Path to save the plan file',
              },
            },
          },
        },
        {
          name: 'terraform_show',
          description: 'Show the current state or a saved plan',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform state',
              },
              plan_file: {
                type: 'string',
                description: 'Path to plan file to show',
              },
              json: {
                type: 'boolean',
                description: 'Output in JSON format',
                default: true,
              },
            },
          },
        },
        {
          name: 'terraform_state_list',
          description: 'List resources in the Terraform state',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform state',
              },
            },
          },
        },
        {
          name: 'terraform_state_show',
          description: 'Show detailed state of a specific resource',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform state',
              },
              resource: {
                type: 'string',
                description: 'Resource address (e.g., aws_instance.example)',
              },
            },
            required: ['resource'],
          },
        },
        {
          name: 'terraform_output',
          description: 'Read an output from the Terraform state',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform state',
              },
              name: {
                type: 'string',
                description: 'Name of the output to retrieve (optional, shows all if not specified)',
              },
              json: {
                type: 'boolean',
                description: 'Output in JSON format',
                default: true,
              },
            },
          },
        },
        {
          name: 'terraform_fmt',
          description: 'Format Terraform configuration files to canonical format',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory to format',
              },
              check: {
                type: 'boolean',
                description: 'Check if files are formatted without writing',
                default: false,
              },
              recursive: {
                type: 'boolean',
                description: 'Process files in subdirectories',
                default: true,
              },
            },
          },
        },
        {
          name: 'terraform_providers',
          description: 'Show the providers required for the configuration',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform files',
              },
            },
          },
        },
        {
          name: 'terraform_version',
          description: 'Show the Terraform version',
          inputSchema: {
            type: 'object',
            properties: {},
          },
        },
        {
          name: 'terraform_workspace_list',
          description: 'List all Terraform workspaces',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform configuration',
              },
            },
          },
        },
        {
          name: 'terraform_workspace_show',
          description: 'Show the current workspace name',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform configuration',
              },
            },
          },
        },
        {
          name: 'terraform_graph',
          description: 'Generate a visual representation of the dependency graph',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform files',
              },
              type: {
                type: 'string',
                description: 'Type of graph (plan, apply, etc.)',
                enum: ['plan', 'plan-destroy', 'apply', 'validate', 'refresh'],
              },
            },
          },
        },
        {
          name: 'read_tf_file',
          description: 'Read and parse a Terraform configuration file',
          inputSchema: {
            type: 'object',
            properties: {
              file_path: {
                type: 'string',
                description: 'Path to the .tf file',
              },
            },
            required: ['file_path'],
          },
        },
        {
          name: 'list_tf_files',
          description: 'List all Terraform files in a directory',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory to search',
              },
              recursive: {
                type: 'boolean',
                description: 'Search subdirectories',
                default: false,
              },
            },
          },
        },
        {
          name: 'parse_state_file',
          description: 'Read and parse the Terraform state file (terraform.tfstate)',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing the state file',
              },
            },
          },
        },
        {
          name: 'terraform_console',
          description: 'Evaluate expressions in the Terraform console (useful for testing functions)',
          inputSchema: {
            type: 'object',
            properties: {
              directory: {
                type: 'string',
                description: 'Directory containing Terraform configuration',
              },
              expression: {
                type: 'string',
                description: 'Terraform expression to evaluate',
              },
            },
            required: ['expression'],
          },
        },
      ],
    }));

    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;

      try {
        switch (name) {
          case 'terraform_init':
            return await this.terraformInit(args);
          case 'terraform_validate':
            return await this.terraformValidate(args);
          case 'terraform_plan':
            return await this.terraformPlan(args);
          case 'terraform_show':
            return await this.terraformShow(args);
          case 'terraform_state_list':
            return await this.terraformStateList(args);
          case 'terraform_state_show':
            return await this.terraformStateShow(args);
          case 'terraform_output':
            return await this.terraformOutput(args);
          case 'terraform_fmt':
            return await this.terraformFmt(args);
          case 'terraform_providers':
            return await this.terraformProviders(args);
          case 'terraform_version':
            return await this.terraformVersion();
          case 'terraform_workspace_list':
            return await this.terraformWorkspaceList(args);
          case 'terraform_workspace_show':
            return await this.terraformWorkspaceShow(args);
          case 'terraform_graph':
            return await this.terraformGraph(args);
          case 'read_tf_file':
            return await this.readTfFile(args);
          case 'list_tf_files':
            return await this.listTfFiles(args);
          case 'parse_state_file':
            return await this.parseStateFile(args);
          case 'terraform_console':
            return await this.terraformConsole(args);
          default:
            throw new Error(`Unknown tool: ${name}`);
        }
      } catch (error) {
        return {
          content: [
            {
              type: 'text',
              text: `Error: ${error.message}\n${error.stderr || ''}`,
            },
          ],
          isError: true,
        };
      }
    });
  }

  getDirectory(args) {
    return args?.directory || this.workingDir;
  }

  async execTerraform(args, directory) {
    // Note: args are joined into a single shell string, so only pass trusted
    // values (your own directories and options), never untrusted input.
    const cmd = `terraform ${args.join(' ')}`;
    const { stdout, stderr } = await execAsync(cmd, { 
      cwd: directory,
      maxBuffer: 10 * 1024 * 1024, // 10MB buffer for large outputs
    });
    return { stdout, stderr };
  }

  async terraformInit(args) {
    const directory = this.getDirectory(args);
    const tfArgs = ['init', '-no-color'];
    
    if (args?.upgrade) {
      tfArgs.push('-upgrade');
    }
    
    const { stdout, stderr } = await this.execTerraform(tfArgs, directory);
    
    return {
      content: [
        {
          type: 'text',
          text: `Terraform Init Output:\n${stdout}\n${stderr}`,
        },
      ],
    };
  }

  async terraformValidate(args) {
    const directory = this.getDirectory(args);
    const { stdout, stderr } = await this.execTerraform(['validate', '-json', '-no-color'], directory);
    
    return {
      content: [
        {
          type: 'text',
          text: stdout,
        },
      ],
    };
  }

  async terraformPlan(args) {
    const directory = this.getDirectory(args);
    const tfArgs = ['plan', '-no-color', '-input=false'];
    
    if (args?.var_file) {
      tfArgs.push(`-var-file=${args.var_file}`);
    }
    
    if (args?.out) {
      tfArgs.push(`-out=${args.out}`);
    }
    
    const { stdout, stderr } = await this.execTerraform(tfArgs, directory);
    
    return {
      content: [
        {
          type: 'text',
          text: `Terraform Plan:\n${stdout}\n${stderr}`,
        },
      ],
    };
  }

  async terraformShow(args) {
    const directory = this.getDirectory(args);
    const tfArgs = ['show', '-no-color'];
    
    if (args?.json) {
      tfArgs.push('-json');
    }
    
    if (args?.plan_file) {
      tfArgs.push(args.plan_file);
    }
    
    const { stdout } = await this.execTerraform(tfArgs, directory);
    
    return {
      content: [
        {
          type: 'text',
          text: stdout,
        },
      ],
    };
  }

  async terraformStateList(args) {
    const directory = this.getDirectory(args);
    const { stdout } = await this.execTerraform(['state', 'list'], directory);
    
    return {
      content: [
        {
          type: 'text',
          text: `Resources in state:\n${stdout}`,
        },
      ],
    };
  }

  async terraformStateShow(args) {
    const directory = this.getDirectory(args);
    const { stdout } = await this.execTerraform(['state', 'show', args.resource], directory);
    
    return {
      content: [
        {
          type: 'text',
          text: stdout,
        },
      ],
    };
  }

  async terraformOutput(args) {
    const directory = this.getDirectory(args);
    const tfArgs = ['output', '-no-color'];
    
    if (args?.json) {
      tfArgs.push('-json');
    }
    
    if (args?.name) {
      tfArgs.push(args.name);
    }
    
    const { stdout } = await this.execTerraform(tfArgs, directory);
    
    return {
      content: [
        {
          type: 'text',
          text: stdout,
        },
      ],
    };
  }

  async terraformFmt(args) {
    const directory = this.getDirectory(args);
    const tfArgs = ['fmt', '-no-color'];
    
    if (args?.check) {
      tfArgs.push('-check');
    }
    
    if (args?.recursive) {
      tfArgs.push('-recursive');
    }
    
    const { stdout, stderr } = await this.execTerraform(tfArgs, directory);
    
    return {
      content: [
        {
          type: 'text',
          text: `Formatted files:\n${stdout}\n${stderr}`,
        },
      ],
    };
  }

  async terraformProviders(args) {
    const directory = this.getDirectory(args);
    const { stdout } = await this.execTerraform(['providers'], directory);
    
    return {
      content: [
        {
          type: 'text',
          text: stdout,
        },
      ],
    };
  }

  async terraformVersion() {
    const { stdout } = await this.execTerraform(['version'], process.cwd());
    
    return {
      content: [
        {
          type: 'text',
          text: stdout,
        },
      ],
    };
  }

  async terraformWorkspaceList(args) {
    const directory = this.getDirectory(args);
    const { stdout } = await this.execTerraform(['workspace', 'list'], directory);
    
    return {
      content: [
        {
          type: 'text',
          text: stdout,
        },
      ],
    };
  }

  async terraformWorkspaceShow(args) {
    const directory = this.getDirectory(args);
    const { stdout } = await this.execTerraform(['workspace', 'show'], directory);
    
    return {
      content: [
        {
          type: 'text',
          text: `Current workspace: ${stdout.trim()}`,
        },
      ],
    };
  }

  async terraformGraph(args) {
    const directory = this.getDirectory(args);
    const tfArgs = ['graph'];
    
    if (args?.type) {
      tfArgs.push(`-type=${args.type}`);
    }
    
    const { stdout } = await this.execTerraform(tfArgs, directory);
    
    return {
      content: [
        {
          type: 'text',
          text: `Dependency Graph (DOT format):\n${stdout}`,
        },
      ],
    };
  }

  async readTfFile(args) {
    const content = fs.readFileSync(args.file_path, 'utf-8');
    
    return {
      content: [
        {
          type: 'text',
          text: `Content of ${args.file_path}:\n\n${content}`,
        },
      ],
    };
  }

  async listTfFiles(args) {
    const directory = this.getDirectory(args);
    const recursive = args?.recursive || false;
    
    const findTfFiles = (dir, fileList = []) => {
      const files = fs.readdirSync(dir);
      
      files.forEach(file => {
        const filePath = path.join(dir, file);
        const stat = fs.statSync(filePath);
        
        if (stat.isDirectory() && recursive && !file.startsWith('.')) {
          findTfFiles(filePath, fileList);
        } else if (file.endsWith('.tf') || file.endsWith('.tfvars')) {
          fileList.push(filePath);
        }
      });
      
      return fileList;
    };
    
    const tfFiles = findTfFiles(directory);
    
    return {
      content: [
        {
          type: 'text',
          text: `Terraform files found:\n${tfFiles.join('\n')}`,
        },
      ],
    };
  }

  async parseStateFile(args) {
    const directory = this.getDirectory(args);
    const stateFile = path.join(directory, 'terraform.tfstate');
    
    if (!fs.existsSync(stateFile)) {
      throw new Error('terraform.tfstate file not found');
    }
    
    const content = fs.readFileSync(stateFile, 'utf-8');
    const state = JSON.parse(content);
    
    return {
      content: [
        {
          type: 'text',
          text: JSON.stringify(state, null, 2),
        },
      ],
    };
  }

  async terraformConsole(args) {
    const directory = this.getDirectory(args);
    const expression = args.expression;
    
    return new Promise((resolve, reject) => {
      const child = spawn('terraform', ['console'], {
        cwd: directory,
        stdio: ['pipe', 'pipe', 'pipe'],
      });
      
      let output = '';
      let errorOutput = '';
      
      child.stdout.on('data', (data) => {
        output += data.toString();
      });
      
      child.stderr.on('data', (data) => {
        errorOutput += data.toString();
      });
      
      child.on('close', (code) => {
        if (code !== 0) {
          reject(new Error(`Console exited with code ${code}: ${errorOutput}`));
        } else {
          resolve({
            content: [
              {
                type: 'text',
                text: `Expression: ${expression}\nResult: ${output.trim()}`,
              },
            ],
          });
        }
      });
      
      child.stdin.write(`${expression}\n`);
      child.stdin.end();
    });
  }

  async run() {
    const transport = new StdioServerTransport();
    await this.server.connect(transport);
    console.error('Terraform OSS MCP server running on stdio');
  }
}

const server = new TerraformOSSServer();
server.run().catch(console.error);

Step 3: Update package.json

{
  "name": "terraform-oss-mcp",
  "version": "1.0.0",
  "type": "module",
  "description": "MCP server for OpenSource Terraform CLI operations",
  "bin": {
    "terraform-oss-mcp": "./index.js"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^0.5.0"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}

Step 4: Make it executable and install

chmod +x index.js

# Install dependencies
npm install

# Link globally for easy access
npm link

# Or install locally in node_modules
npm install
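Before wiring the server into Claude, you can exercise its tools interactively with the MCP Inspector, a separate debugging utility (shown here as a sketch):

npx @modelcontextprotocol/inspector node /path/to/terraform-oss-mcp/index.js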

Step 5: Add to MCP Configuration

{
  "mcpServers": {
    "terraform-oss": {
      "command": "node",
      "args": ["/path/to/terraform-oss-mcp/index.js"],
      "env": {
        "TERRAFORM_WORKING_DIR": "/path/to/your/terraform/projects"
      }
    }
  }
}

Or if you installed globally with npm link:

{
  "mcpServers": {
    "terraform-oss": {
      "command": "terraform-oss-mcp",
      "env": {
        "TERRAFORM_WORKING_DIR": "/path/to/your/terraform/projects"
      }
    }
  }
}

Step 6: Test the MCP Server

Restart Claude Code/Desktop and try these commands:

# Check Terraform version
"What's my Terraform version?"

# List all resources in state
"Show me all resources in my Terraform state"

# Validate configuration
"Validate my Terraform configuration"

# Run a plan
"Generate a Terraform plan for my infrastructure"

# Check outputs
"What are the current Terraform outputs?"

# List workspaces
"Show me all Terraform workspaces"

# Format code
"Format all my Terraform files"

Part 4: Using Your Enhanced Terraform Workflow

Now that everything is configured, here's how to use your new superpowers:

Starting a Session

# Start Claude Code
claude

# Then invoke the agent explicitly in your prompt:
"Use the terraform-iac-expert agent to help with my infrastructure"

# Or select the agent from the UI

Example Workflows

1. Creating New Infrastructure

You: "Create a production-ready AWS VPC with public and private subnets across 3 availability zones, with NAT gateways and proper tagging"

Claude will:
- Ask clarifying questions (region, CIDR blocks, naming conventions)
- Generate complete Terraform configuration
- Include variables.tf, outputs.tf, and main.tf
- Add security best practices
- Provide usage examples

2. Reviewing Existing Code

You: "Review the VPC configuration in my infrastructure repo and suggest improvements"

Claude will:
- Use GitHub MCP to read your repository
- Analyze the Terraform code
- Identify security issues, cost optimizations
- Suggest refactoring opportunities
- Provide specific code improvements

3. Debugging Terraform Errors

You: "I'm getting an error when running terraform plan: [paste error]"

Claude will:
- Analyze the error message
- Read relevant .tf files from your workspace
- Identify the root cause
- Suggest specific fixes
- Explain why the error occurred

4. Module Development

You: "Create a reusable module for RDS PostgreSQL instances with automated backups, encryption, and monitoring"

Claude will:
- Generate complete module structure
- Include comprehensive variable validation
- Add detailed documentation
- Provide usage examples
- Include outputs for common use cases

5. Infrastructure Migration

You: "Convert this CloudFormation template to Terraform"

[Paste CloudFormation YAML]

Claude will:
- Parse the CloudFormation template
- Generate equivalent Terraform HCL
- Add improvements and best practices
- Note any differences or caveats

6. State Management

You: "How should I structure my Terraform state for a multi-environment, multi-region deployment?"

Claude will:
- Recommend workspace strategy or directory structure
- Suggest backend configuration
- Provide state isolation patterns
- Include example configurations

Part 5: Team Rollout Guide

For Team Leads

1. Create Standardized Configuration

Create a repository with:

terraform-claude-setup/
├── README.md
├── mcp-config-template.json
├── agent-description.md
└── setup-script.sh

2. Setup Script Example

#!/bin/bash
# setup-terraform-claude.sh

echo "Setting up Terraform Claude Code environment..."

# Backup existing config
if [ -f "$HOME/Library/Application Support/Claude/claude_desktop_config.json" ]; then
    cp "$HOME/Library/Application Support/Claude/claude_desktop_config.json" \
       "$HOME/Library/Application Support/Claude/claude_desktop_config.json.backup"
fi

# Copy template
cp mcp-config-template.json "$HOME/Library/Application Support/Claude/claude_desktop_config.json"

# Prompt for tokens
read -p "Enter your GitHub Personal Access Token: " GITHUB_TOKEN
read -p "Enter your Terraform Cloud organization: " TF_ORG
read -sp "Enter your Terraform Cloud token: " TF_TOKEN
echo

# Update config with tokens
sed -i '' "s/<YOUR_GITHUB_TOKEN>/$GITHUB_TOKEN/g" \
    "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
sed -i '' "s/<YOUR_TF_CLOUD_TOKEN>/$TF_TOKEN/g" \
    "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
sed -i '' "s/your-org-name/$TF_ORG/g" \
    "$HOME/Library/Application Support/Claude/claude_desktop_config.json"

echo "Setup complete! Please restart Claude Code."

3. Documentation for Developers

Create internal documentation covering:

  • Why this improves workflow
  • Installation steps
  • Common usage patterns
  • Security considerations
  • Troubleshooting guide

For Individual Developers

Quick Start Checklist:

  • [ ] Install Claude Code
  • [ ] Install Node.js 18+
  • [ ] Create Terraform agent with provided description
  • [ ] Configure MCP servers (at minimum: filesystem + GitHub)
  • [ ] Generate GitHub Personal Access Token
  • [ ] Update configuration file
  • [ ] Restart Claude Code
  • [ ] Test with simple query: "List available MCP servers"
  • [ ] Try example workflow: "Review my latest Terraform changes"

Part 6: Best Practices and Tips

Security Considerations

  1. Token Management

    • Never commit tokens to version control
    • Use environment variables or secure credential storage
    • Rotate tokens regularly
    • Use least-privilege access
  2. Filesystem Access

    • Only grant access to necessary directories
    • Avoid granting access to entire home directory
    • Review MCP server logs periodically
  3. Code Review

    • Always review Claude-generated code before applying
    • Test in non-production environments first
    • Use terraform plan to preview changes
    • Follow your organization's approval processes

Performance Tips

  1. MCP Server Optimization

    • Limit filesystem paths to relevant projects
    • Use specific GitHub repo queries instead of org-wide searches
    • Cache frequently accessed data when building custom MCPs
  2. Agent Usage

    • Be specific in your requests
    • Provide context (cloud provider, environment, constraints)
    • Reference existing code when asking for modifications

Workflow Integration

  1. CI/CD Integration

    # Use Claude to generate pipeline configs
    "Generate a GitHub Actions workflow for Terraform with plan on PR and apply on merge"
    
  2. Documentation

    # Auto-generate module documentation
    "Create comprehensive README.md for this Terraform module with usage examples"
    
  3. Code Reviews

    # Before committing
    "Review these Terraform changes for security issues and best practices"
    
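Whatever pipeline Claude generates, the plan stage typically reduces to a handful of CLI steps. A minimal bash sketch you can adapt to your CI system:

#!/bin/bash
set -euo pipefail

terraform fmt -check -recursive   # fail fast on unformatted code
terraform init -input=false
terraform validate
terraform plan -input=false -out=tfplan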

Part 7: Troubleshooting

Common Issues

Issue: MCP servers not appearing

# Check Node.js version
node --version  # Should be 18+

# Check config file syntax
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json | jq .

# Check logs (location varies by platform)
tail -f ~/.claude/logs/mcp.log

Issue: GitHub MCP authentication fails

# Test token manually
curl -H "Authorization: token YOUR_TOKEN" https://api.github.com/user

# Verify token scopes include 'repo'
# Regenerate if necessary

Issue: Agent not behaving as expected

  • Review agent description for clarity
  • Be more specific in your requests
  • Provide examples of desired output
  • Check if you're using the correct agent

Issue: Filesystem access denied

  • Verify paths exist and are readable
  • Check file permissions
  • Ensure paths are absolute, not relative
  • Review OS security settings (macOS Gatekeeper, etc.)
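A quick way to rule out path and permission problems before digging deeper:

ls -ld /path/to/your/terraform/projects    # does the directory exist, and is it readable?
realpath /path/to/your/terraform/projects  # confirms the path resolves and is absolute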

Getting Help

  1. Check official documentation: https://docs.claude.com
  2. Community forums: Look for Claude Code user communities
  3. GitHub Issues: Check MCP server repositories
  4. Internal support: Contact your team lead or DevOps

Conclusion

By combining Claude Code's Terraform agent with MCP servers, you've created a powerful AI-assisted infrastructure development environment. This setup will:

  • Reduce development time for common Terraform tasks
  • Improve code quality through automated best practice recommendations
  • Decrease debugging time with context-aware error analysis
  • Accelerate onboarding for new team members
  • Ensure consistency across infrastructure codebases

Next Steps

  1. Start small: Begin with the agent only, add MCP servers gradually
  2. Share learnings: Document patterns that work well for your team
  3. Iterate: Refine agent descriptions based on actual usage
  4. Expand: Consider additional MCP servers for other tools (Slack, Jira, monitoring)
  5. Contribute: Share custom MCP servers with the community

Questions or feedback? Share your experiences and help improve this guide for the community.

Happy Infrastructure Coding! 🚀

AVD in a minute



<#

.SYNOPSIS

    Complete Azure Virtual Desktop (AVD) Setup Script


.DESCRIPTION

    This script sets up Azure Virtual Desktop for an existing Windows VM:

    - Creates Host Pool (Personal type)

    - Creates Desktop Application Group

    - Creates Workspace

    - Azure AD joins the VM

    - Installs AVD agents

    - Registers session host

    - Assigns user to session host


.PARAMETER ResourceGroupName

    The resource group containing the VM


.PARAMETER VMName

    The name of the existing Windows VM


.PARAMETER Location

    Azure region (defaults to the VM's location)


.PARAMETER HostPoolName

    Name for the AVD Host Pool


.PARAMETER AppGroupName

    Name for the Application Group


.PARAMETER WorkspaceName

    Name for the AVD Workspace


.PARAMETER AssignedUser

    User Principal Name to assign to the session host


.EXAMPLE

    .\Setup-AVD-Complete.ps1 -ResourceGroupName "RG-WIN11-BASTION" -VMName "VM-Win11" -AssignedUser "user@domain.com"


.NOTES

    Author: Generated by Claude Code

    Date: 2026-01-03

    Requires: Azure CLI, Owner/Contributor role on subscription

#>


param(

    [Parameter(Mandatory=$true)]

    [string]$ResourceGroupName,


    [Parameter(Mandatory=$true)]

    [string]$VMName,


    [Parameter(Mandatory=$false)]

    [string]$Location,


    [Parameter(Mandatory=$false)]

    [string]$HostPoolName = "HP-AVD-Personal",


    [Parameter(Mandatory=$false)]

    [string]$AppGroupName = "AG-AVD-Desktop",


    [Parameter(Mandatory=$false)]

    [string]$WorkspaceName = "WS-AVD",


    [Parameter(Mandatory=$false)]

    [string]$AssignedUser

)


$ErrorActionPreference = "Stop"


Write-Host "============================================" -ForegroundColor Cyan

Write-Host "  Azure Virtual Desktop Setup Script" -ForegroundColor Cyan

Write-Host "============================================" -ForegroundColor Cyan

Write-Host ""


# Get subscription ID

Write-Host "[1/12] Getting subscription information..." -ForegroundColor Yellow

$subscriptionInfo = az account show --query "{id:id, name:name}" -o json | ConvertFrom-Json

$SubscriptionId = $subscriptionInfo.id

Write-Host "  Subscription: $($subscriptionInfo.name)" -ForegroundColor Green

Write-Host "  ID: $SubscriptionId" -ForegroundColor Green


# Verify VM exists

Write-Host ""

Write-Host "[2/12] Verifying VM exists..." -ForegroundColor Yellow

$vmInfo = az vm show --resource-group $ResourceGroupName --name $VMName --query "{name:name, location:location, vmId:vmId}" -o json 2>$null | ConvertFrom-Json

if (-not $vmInfo) {

    Write-Host "  ERROR: VM '$VMName' not found in resource group '$ResourceGroupName'" -ForegroundColor Red

    exit 1

}

Write-Host "  VM Found: $($vmInfo.name)" -ForegroundColor Green

Write-Host "  Location: $($vmInfo.location)" -ForegroundColor Green


# Use VM's location if not specified

if (-not $Location) {

    $Location = $vmInfo.location

}


# Create Host Pool

Write-Host ""

Write-Host "[3/12] Creating Host Pool: $HostPoolName..." -ForegroundColor Yellow

az desktopvirtualization hostpool create `

    --resource-group $ResourceGroupName `

    --name $HostPoolName `

    --location $Location `

    --host-pool-type "Personal" `

    --personal-desktop-assignment-type "Automatic" `

    --load-balancer-type "Persistent" `

    --preferred-app-group-type "Desktop" `

    --registration-info expiration-time="$((Get-Date).AddDays(1).ToString('yyyy-MM-ddTHH:mm:ss.fffffffZ'))" registration-token-operation="Update" `

    --output none


Write-Host "  Host Pool created successfully" -ForegroundColor Green


# Create Application Group

Write-Host ""

Write-Host "[4/12] Creating Application Group: $AppGroupName..." -ForegroundColor Yellow

$hostPoolArmPath = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.DesktopVirtualization/hostPools/$HostPoolName"


az desktopvirtualization applicationgroup create `

    --resource-group $ResourceGroupName `

    --name $AppGroupName `

    --location $Location `

    --application-group-type "Desktop" `

    --host-pool-arm-path $hostPoolArmPath `

    --output none


Write-Host "  Application Group created successfully" -ForegroundColor Green


# Create Workspace

Write-Host ""

Write-Host "[5/12] Creating Workspace: $WorkspaceName..." -ForegroundColor Yellow

az desktopvirtualization workspace create `

    --resource-group $ResourceGroupName `

    --name $WorkspaceName `

    --location $Location `

    --output none


Write-Host "  Workspace created successfully" -ForegroundColor Green


# Associate Application Group with Workspace

Write-Host ""

Write-Host "[6/12] Associating Application Group with Workspace..." -ForegroundColor Yellow

$appGroupArmPath = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.DesktopVirtualization/applicationGroups/$AppGroupName"


az desktopvirtualization workspace update `

    --resource-group $ResourceGroupName `

    --name $WorkspaceName `

    --application-group-references $appGroupArmPath `

    --output none


Write-Host "  Association completed successfully" -ForegroundColor Green


# Generate Registration Token

Write-Host ""

Write-Host "[7/12] Generating registration token..." -ForegroundColor Yellow


$tokenResult = az rest --method POST `

    --uri "https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.DesktopVirtualization/hostPools/$HostPoolName/retrieveRegistrationToken?api-version=2023-09-05" `

    --body "{}" `

    -o json | ConvertFrom-Json


$RegistrationToken = $tokenResult.token

Write-Host "  Registration token generated (expires in 24 hours)" -ForegroundColor Green


# Enable System Assigned Managed Identity

Write-Host ""

Write-Host "[8/12] Enabling managed identity on VM..." -ForegroundColor Yellow

az vm identity assign `

    --resource-group $ResourceGroupName `

    --name $VMName `

    --output none


Write-Host "  Managed identity enabled" -ForegroundColor Green


# Install AADLoginForWindows extension (Azure AD Join)

Write-Host ""

Write-Host "[9/12] Installing Azure AD Login extension (Azure AD Join)..." -ForegroundColor Yellow

az vm extension set `

    --resource-group $ResourceGroupName `

    --vm-name $VMName `

    --name "AADLoginForWindows" `

    --publisher "Microsoft.Azure.ActiveDirectory" `

    --output none


Write-Host "  Azure AD Login extension installed" -ForegroundColor Green


# Install AVD Agents and set registration token via Run Command

Write-Host ""

Write-Host "[10/12] Installing AVD agents and configuring registration..." -ForegroundColor Yellow


$avdInstallScript = @"

`$ErrorActionPreference = 'Stop'

`$tempDir = 'C:\AVDTemp'


# Create temp directory

if (!(Test-Path `$tempDir)) {

    New-Item -ItemType Directory -Path `$tempDir -Force | Out-Null

}


# Download AVD Agent

Write-Host 'Downloading AVD Agent...'

`$agentUrl = 'https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrmXv'

`$agentPath = Join-Path `$tempDir 'Microsoft.RDInfra.RDAgent.Installer-x64.msi'

Invoke-WebRequest -Uri `$agentUrl -OutFile `$agentPath -UseBasicParsing


# Download Boot Loader

Write-Host 'Downloading Boot Loader...'

`$bootLoaderUrl = 'https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrxrH'

`$bootLoaderPath = Join-Path `$tempDir 'Microsoft.RDInfra.RDAgentBootLoader.Installer-x64.msi'

Invoke-WebRequest -Uri `$bootLoaderUrl -OutFile `$bootLoaderPath -UseBasicParsing


# Install Boot Loader first

Write-Host 'Installing Boot Loader...'

Start-Process msiexec.exe -ArgumentList "/i `$bootLoaderPath /quiet /norestart" -Wait -NoNewWindow


# Install AVD Agent with registration token

Write-Host 'Installing AVD Agent with registration token...'

Start-Process msiexec.exe -ArgumentList "/i `$agentPath /quiet /norestart REGISTRATIONTOKEN=$RegistrationToken" -Wait -NoNewWindow


Write-Host 'AVD Agent installation completed'

"@


az vm run-command invoke `

    --resource-group $ResourceGroupName `

    --name $VMName `

    --command-id RunPowerShellScript `

    --scripts $avdInstallScript `

    --output none


Write-Host "  AVD agents installed" -ForegroundColor Green


# Set registration token in registry and restart services

Write-Host ""

Write-Host "[11/12] Configuring registry and restarting services..." -ForegroundColor Yellow


$registryScript = @"

`$token = '$RegistrationToken'

`$regPath = 'HKLM:\SOFTWARE\Microsoft\RDInfraAgent'


if (!(Test-Path `$regPath)) {

    New-Item -Path `$regPath -Force | Out-Null

}


Set-ItemProperty -Path `$regPath -Name 'RegistrationToken' -Value `$token -Force

Set-ItemProperty -Path `$regPath -Name 'IsRegistered' -Value 0 -Type DWord -Force


Stop-Service RDAgentBootLoader -Force -ErrorAction SilentlyContinue

Stop-Service RDAgent -Force -ErrorAction SilentlyContinue

Start-Sleep -Seconds 5

Start-Service RDAgentBootLoader

Start-Service RDAgent

Start-Sleep -Seconds 15


`$regValue = Get-ItemProperty -Path `$regPath

Write-Host "IsRegistered: `$(`$regValue.IsRegistered)"

"@


$result = az vm run-command invoke `

    --resource-group $ResourceGroupName `

    --name $VMName `

    --command-id RunPowerShellScript `

    --scripts $registryScript `

    --query "value[0].message" `

    -o tsv


Write-Host "  Registry configured, services restarted" -ForegroundColor Green

Write-Host "  $result" -ForegroundColor Gray


# Assign user to session host (if specified)

Write-Host ""

Write-Host "[12/12] Assigning user to session host..." -ForegroundColor Yellow


if ($AssignedUser) {

    # Wait for session host to register

    Start-Sleep -Seconds 10


    az rest --method PATCH `

        --uri "https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.DesktopVirtualization/hostPools/$HostPoolName/sessionHosts/$VMName`?api-version=2023-09-05" `

        --body "{`"properties`":{`"assignedUser`":`"$AssignedUser`"}}" `

        --output none


    Write-Host "  User '$AssignedUser' assigned to session host" -ForegroundColor Green

} else {

    Write-Host "  No user specified, skipping assignment" -ForegroundColor Yellow

}


# Verify session host registration

Write-Host ""

Write-Host "============================================" -ForegroundColor Cyan

Write-Host "  Verifying Setup" -ForegroundColor Cyan

Write-Host "============================================" -ForegroundColor Cyan


Start-Sleep -Seconds 5


$sessionHost = az rest --method GET `

    --uri "https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.DesktopVirtualization/hostPools/$HostPoolName/sessionHosts?api-version=2023-09-05" `

    -o json | ConvertFrom-Json


if ($sessionHost.value.Count -gt 0) {

    # Use a distinct name here: $host is a reserved automatic variable in PowerShell
    $sessionHostInfo = $sessionHost.value[0]

    Write-Host ""

    Write-Host "Session Host Status:" -ForegroundColor Green

    Write-Host "  Name: $($sessionHostInfo.name.Split('/')[1])" -ForegroundColor White

    Write-Host "  Status: $($sessionHostInfo.properties.status)" -ForegroundColor White

    Write-Host "  Agent Version: $($sessionHostInfo.properties.agentVersion)" -ForegroundColor White

    Write-Host "  Assigned User: $($sessionHostInfo.properties.assignedUser)" -ForegroundColor White

    Write-Host "  Allow New Session: $($sessionHostInfo.properties.allowNewSession)" -ForegroundColor White

} else {

    Write-Host "  Session host not yet registered. Please wait a few minutes." -ForegroundColor Yellow

}


# Summary

Write-Host ""

Write-Host "============================================" -ForegroundColor Cyan

Write-Host "  Setup Complete!" -ForegroundColor Cyan

Write-Host "============================================" -ForegroundColor Cyan

Write-Host ""

Write-Host "Resources Created:" -ForegroundColor Green

Write-Host "  - Host Pool: $HostPoolName" -ForegroundColor White

Write-Host "  - Application Group: $AppGroupName" -ForegroundColor White

Write-Host "  - Workspace: $WorkspaceName" -ForegroundColor White

Write-Host "  - Session Host: $VMName" -ForegroundColor White

Write-Host ""

Write-Host "IMPORTANT - Manual Step Required:" -ForegroundColor Yellow

Write-Host "  Assign 'Desktop Virtualization User' role to user(s) on the Application Group:" -ForegroundColor Yellow

Write-Host ""

Write-Host "  az role assignment create \" -ForegroundColor Gray

Write-Host "    --assignee `"<user-object-id>`" \" -ForegroundColor Gray

Write-Host "    --role `"Desktop Virtualization User`" \" -ForegroundColor Gray

Write-Host "    --scope `"$appGroupArmPath`"" -ForegroundColor Gray

Write-Host ""

Write-Host "Or via Azure Portal:" -ForegroundColor Yellow

Write-Host "  1. Go to Application Group '$AppGroupName'" -ForegroundColor White

Write-Host "  2. Access control (IAM) -> Add role assignment" -ForegroundColor White

Write-Host "  3. Select 'Desktop Virtualization User' role" -ForegroundColor White

Write-Host "  4. Add user(s) and save" -ForegroundColor White

Write-Host ""

Write-Host "To Connect:" -ForegroundColor Green

Write-Host "  1. Download AVD Client: https://aka.ms/avdclient" -ForegroundColor White

Write-Host "  2. Open client and click 'Subscribe'" -ForegroundColor White

Write-Host "  3. Sign in with your Azure AD account" -ForegroundColor White

Write-Host "  4. Double-click the desktop to connect" -ForegroundColor White

Write-Host ""


PostgreSQL Azure Container Instance - Deployment Guide


This guide explains how to deploy PostgreSQL to Azure Container Instance using the automated deployment scripts.


## 📋 Prerequisites


Before running the deployment script, ensure you have:


1. **Docker Desktop** installed and running

   - Download: https://www.docker.com/products/docker-desktop


2. **Azure CLI** installed

   - Windows: https://aka.ms/installazurecliwindows

   - Mac: `brew install azure-cli`

   - Linux: https://docs.microsoft.com/cli/azure/install-azure-cli-linux


3. **Azure Account** with active subscription

   - Sign up: https://azure.microsoft.com/free/


4. **Logged in to Azure**

   ```bash

   az login

   ```


5. **PostgreSQL Client Tools** (optional, for testing)

   - Already installed at: `C:\Program Files\PostgreSQL\18\bin\psql.exe`


---


## 🚀 Quick Start


### Option 1: Using Bash Script (Git Bash on Windows, or Linux/Mac)


```bash

# Navigate to the project directory

cd C:/Users/kusha/postgresql-mcp


# Make script executable (Linux/Mac only)

chmod +x deploy-to-azure.sh


# Run the deployment

./deploy-to-azure.sh

```


### Option 2: Using PowerShell Script (Windows)


```powershell

# Navigate to the project directory

cd C:\Users\kusha\postgresql-mcp


# Allow script execution (first time only)

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser


# Run the deployment

.\deploy-to-azure.ps1

```


---


## 📝 What the Script Does


The automated deployment script performs the following steps:


### 1. Prerequisites Check

- ✓ Verifies Docker is installed

- ✓ Verifies Azure CLI is installed

- ✓ Checks Azure login status

- ✓ Confirms Dockerfile exists


### 2. Resource Group Creation

- Creates Azure Resource Group: `rg-postgres-mcp`

- Location: `eastus` (configurable in script)


### 3. Azure Container Registry (ACR) Setup

- Creates ACR with unique name: `acrpostgresmcp<timestamp>`

- Enables admin access

- Logs in to the registry


### 4. Docker Image Build & Push

- Builds PostgreSQL 17 Docker image from Dockerfile

- Tags image for ACR

- Pushes image to Azure Container Registry


### 5. Azure Container Instance (ACI) Deployment

- Creates container instance: `aci-postgres-library`

- Configures with:

  - 1 CPU core

  - 1.5 GB memory

  - Public IP address

  - Unique DNS name

  - Environment variables for PostgreSQL


### 6. Connection Information

- Retrieves public IP address

- Displays connection string

- Generates MCP configuration

- Saves all info to `deployment-info.txt`


### 7. Connection Test

- Waits for PostgreSQL to start (30 seconds)

- Tests connection with `psql` (if available)


---


## 🔧 Configuration


You can customize the deployment by editing variables at the top of the script:


### Bash Script (`deploy-to-azure.sh`)

```bash

RESOURCE_GROUP="rg-postgres-mcp"

LOCATION="eastus"

ACR_NAME="acrpostgresmcp$(date +%s)"

IMAGE_NAME="postgres-library"

CONTAINER_NAME="aci-postgres-library"


# Database Configuration

DB_NAME="librarydatabase"

DB_USER="rakuser"

DB_PASSWORD="rakpassword"


# Container Resources

CPU_CORES="1"

MEMORY_GB="1.5"

```
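
**Tip:** The default `DB_PASSWORD` above is a placeholder; avoid deploying it as-is. One way to generate a throwaway password (a sketch using OpenSSL):


```bash

DB_PASSWORD="$(openssl rand -base64 24)"

echo "$DB_PASSWORD" > .db-password  # keep this file out of version control

```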


### PowerShell Script (`deploy-to-azure.ps1`)

```powershell

$RESOURCE_GROUP = "rg-postgres-mcp"

$LOCATION = "eastus"

$DB_NAME = "librarydatabase"

$DB_USER = "rakuser"

$DB_PASSWORD = "rakpassword"

```


---


## 📊 Expected Output


The script will display:


```

╔═══════════════════════════════════════════════════════════╗

║   PostgreSQL Azure Container Instance Deployment         ║

║   Automated Deployment Script                            ║

╚═══════════════════════════════════════════════════════════╝


========================================

Checking Prerequisites

========================================

✓ Docker is installed: Docker version 24.0.6

✓ Azure CLI is installed: azure-cli 2.54.0

✓ Logged in to Azure

✓ Dockerfile found


========================================

Creating Resource Group

========================================

→ Creating resource group 'rg-postgres-mcp' in 'eastus'...

✓ Resource group created


========================================

Creating Azure Container Registry

========================================

→ Creating Azure Container Registry 'acrpostgresmcp1702123456'...

✓ ACR created

→ Logging in to ACR...

✓ Logged in to ACR


========================================

Building and Pushing Docker Image

========================================

→ Building Docker image...

✓ Docker image built

→ Tagging image for ACR...

✓ Image tagged

→ Pushing image to ACR (this may take a few minutes)...

✓ Image pushed to ACR


========================================

Deploying Azure Container Instance

========================================

→ Creating container instance (this may take 2-3 minutes)...

✓ Container instance created


========================================

Connection Information

========================================


Deployment Complete!


Connection Details:

  Public IP:    20.232.77.76

  FQDN:         postgres-mcp-db-1702123456.eastus.azurecontainer.io

  Port:         5432

  Database:     librarydatabase

  Username:     rakuser

  Password:     rakpassword


Connection String:

postgresql://rakuser:rakpassword@20.232.77.76:5432/librarydatabase


✓ Connection information saved to 'deployment-info.txt'


✓ Deployment completed successfully!

```


---


## 🔌 Update MCP Configuration


After deployment, update your `.mcp.json` file with the new connection string:


```json

{

  "mcpServers": {

    "postgres-enterprise": {

      "command": "C:\\Users\\kusha\\postgresql-mcp\\.venv\\Scripts\\python.exe",

      "args": ["C:\\Users\\kusha\\postgresql-mcp\\mcp_server_enterprise.py"],

      "env": {

        "DATABASE_URL": "postgresql://rakuser:XXXXXX@20.XXX.7X.XX:5432/librarydatabase"

      }

    }

  }

}

```


**Location:** `C:\Users\kusha\.mcp.json`


---


## 🧪 Testing the Deployment


### Test 1: Direct PostgreSQL Connection


```bash

# Using psql

psql "postgresql://rakuser:XXXXXX@20.XXX.7X.XX:5432/librarydatabase"


# Or with an environment variable (bash syntax; in PowerShell use $env:PGPASSWORD)

export PGPASSWORD="XXXXXX"

psql -h 20.XXX.7X.XX -U rakuser -d librarydatabase

```


### Test 2: Python Connection Test


```bash

cd C:/Users/kusha/postgresql-mcp

python test_azure_connection.py

```
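
If `test_azure_connection.py` is not present in your copy of the project, a minimal sketch of such a test might look like this (assuming `psycopg2-binary` is installed; substitute the real values from `deployment-info.txt`):

```python
# Minimal connection test sketch -- placeholder credentials, not the real ones
import sys

import psycopg2  # pip install psycopg2-binary

DATABASE_URL = "postgresql://rakuser:<password>@<public-ip>:5432/librarydatabase"

try:
    conn = psycopg2.connect(DATABASE_URL, connect_timeout=10)
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print("Connected:", cur.fetchone()[0])
    conn.close()
except psycopg2.OperationalError as exc:
    # The container may still be starting -- wait a minute and retry
    print("Connection failed:", exc)
    sys.exit(1)
```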


### Test 3: MCP Server Test


```bash

# Restart Claude Code to load new configuration

# Then in Claude Code:

# Ask: "Show me all tables in my database"

```


---


## 🗑️ Cleanup / Delete Resources


To delete all Azure resources created by the script:


```bash

# Delete entire resource group

az group delete --name rg-postgres-mcp --yes --no-wait


# Or individually:

az container delete --resource-group rg-postgres-mcp --name aci-postgres-library --yes

az acr delete --resource-group rg-postgres-mcp --name acrpostgresmcp1702123456 --yes

az group delete --name rg-postgres-mcp --yes

```


---


## ⚠️ Troubleshooting


### Issue 1: "Docker command not found"

**Solution:**

- Install Docker Desktop: https://www.docker.com/products/docker-desktop

- Ensure Docker is running (check system tray)


### Issue 2: "az command not found"

**Solution:**

- Install Azure CLI: https://docs.microsoft.com/cli/azure/install-azure-cli

- Restart terminal after installation


### Issue 3: "Not logged in to Azure"

**Solution:**

```bash

az login

# Follow browser authentication

```


### Issue 4: "ACR name already exists"

**Solution:**

- The script uses timestamps to create unique names

- If issue persists, manually change `ACR_NAME` in the script


### Issue 5: "Connection test failed"

**Possible causes:**

- PostgreSQL container is still starting (wait 1-2 minutes)

- Firewall blocking port 5432

- Check container logs:

  ```bash

  az container logs --resource-group rg-postgres-mcp --name aci-postgres-library

  ```


### Issue 6: "Container creation failed"

**Solution:**

```bash

# Check container status

az container show --resource-group rg-postgres-mcp --name aci-postgres-library


# View container logs

az container logs --resource-group rg-postgres-mcp --name aci-postgres-library


# Delete and retry

az container delete --resource-group rg-postgres-mcp --name aci-postgres-library --yes

# Re-run deployment script

```


---


## 💰 Azure Costs


**Estimated Monthly Cost (as of 2024):**


- **Azure Container Instance:**

  - 1 vCPU: ~$30/month

  - 1.5 GB Memory: ~$5/month

  - **Total ACI:** ~$35/month


- **Azure Container Registry (Basic):**

  - Storage: ~$5/month

  - **Total ACR:** ~$5/month


**Total Estimated Cost: ~$40/month**


**Cost Optimization Tips:**

1. Stop container when not in use:

   ```bash

   az container stop --resource-group rg-postgres-mcp --name aci-postgres-library

   ```

2. Use Azure Free Tier credits (first 30 days)

3. Delete resources when not needed


---


## 🔒 Security Recommendations


1. **Change Default Password**

   - Edit `DB_PASSWORD` in script before deployment

   - Use strong passwords (16+ characters, mixed case, numbers, symbols)


2. **Restrict Network Access**

   - Add firewall rules to limit IP addresses

   - Use Azure Virtual Network for private access


3. **Enable SSL/TLS**

   - PostgreSQL in container should enforce SSL

   - Add `?sslmode=require` to connection string


4. **Use Azure Key Vault**

   - Store DATABASE_URL in Key Vault (see the sketch after this list)

   - Reference it in your MCP configuration


5. **Regular Backups**

   - Use MCP server's backup tools

   - Schedule automated backups


6. **Monitor Access**

   - Enable Azure Monitor

   - Set up alerts for suspicious activity
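
For recommendation 4, here is a minimal sketch of reading the connection string from Key Vault at startup (the vault name `kv-postgres-mcp` is a hypothetical example; assumes the `azure-identity` and `azure-keyvault-secrets` packages are installed):

```python
# Sketch only -- vault and secret names below are assumptions
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault_url = "https://kv-postgres-mcp.vault.azure.net"  # hypothetical vault
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

# Fetch the connection string instead of hard-coding it
database_url = client.get_secret("DATABASE-URL").value
print("Loaded DATABASE_URL from Key Vault")
```

With this pattern, the password never needs to appear in the deployment script or in `.mcp.json`.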


---


## 📚 Additional Resources


- **Azure Container Instances:** https://docs.microsoft.com/azure/container-instances/

- **Azure Container Registry:** https://docs.microsoft.com/azure/container-registry/

- **PostgreSQL Docker Image:** https://hub.docker.com/_/postgres

- **MCP Server Documentation:** `MCP_README.md`

- **Setup Guide:** `SETUP_GUIDE.md`


---


## 🆘 Support


If you encounter issues:


1. Check `deployment-info.txt` for connection details

2. Review container logs: `az container logs --resource-group rg-postgres-mcp --name aci-postgres-library`

3. Test connection with `psql` directly

4. Verify MCP server configuration in `.mcp.json`

5. Check Claude Code logs: `~/.claude/debug/`


---


## ✅ Deployment Checklist


- [ ] Docker Desktop installed and running

- [ ] Azure CLI installed

- [ ] Logged in to Azure (`az login`)

- [ ] Dockerfile exists in project directory

- [ ] Customized configuration variables (password, etc.)

- [ ] Run deployment script

- [ ] Wait for completion (5-10 minutes)

- [ ] Save `deployment-info.txt`

- [ ] Update `.mcp.json` with new connection string

- [ ] Restart Claude Code

- [ ] Test database connection

- [ ] Test MCP server functionality


---


**Deployment Date:** $(Get-Date)

**Script Version:** 1.0

**Last Updated:** December 2024


Supercharge Your SQL Server with AI: A Complete Guide to the VECTOR Data Type in SQL Server 2025

How to build intelligent search, recommendations, and semantic matching directly in your database

Introduction

SQL Server 2025 has arrived with a game-changing feature that bridges traditional databases with modern AI capabilities: the VECTOR data type. If you've been wondering how to implement semantic search, build recommendation engines, or create intelligent customer segmentation without complex external systems, this guide is for you.

In this hands-on tutorial, we'll explore six real-world use cases using the VECTOR data type, complete with working code examples you can implement today.

What is the VECTOR Data Type?

The VECTOR data type allows you to store high-dimensional numerical arrays (embeddings) directly in SQL Server. These embeddings represent the "meaning" of text, images, or other data in a format that AI models can understand and compare.

Think of it this way: Instead of searching for exact text matches, you can now search for similar meanings. A user searching for "I forgot my password" will find the FAQ entry "How do I reset my password?" even though the words are completely different.

Why This Matters

Traditional SQL excels at exact matches, ranges, and structured queries. But modern applications need:

  • Semantic search: Understanding intent, not just keywords
  • Recommendations: "Customers who bought this also bought..."
  • Content discovery: "You might also like..."
  • Intelligent segmentation: Grouping by behavior patterns, not just demographics

The VECTOR data type brings these AI capabilities directly into your database, eliminating the need for separate vector databases or complex integrations.

Use Case #1: E-Commerce Product Recommendations

Let's start with every retailer's dream: showing customers relevant products they're likely to buy.

-- Create products table with vector embeddings
CREATE TABLE Products (
    ProductId INT PRIMARY KEY IDENTITY(1,1),
    ProductName NVARCHAR(200) NOT NULL,
    Category NVARCHAR(100),
    Price DECIMAL(10,2),
    Description NVARCHAR(MAX),
    EmbeddingVector VECTOR(5) NOT NULL  -- Real world: use 768-1536 dimensions
);

-- Insert sample products
INSERT INTO Products (ProductName, Category, Price, Description, EmbeddingVector)
VALUES
    ('Laptop Pro 15', 'Electronics', 1299.99, 'High-performance laptop', 
     '[0.8, 0.2, 0.1, 0.9, 0.3]'),
    ('Laptop Ultra 13', 'Electronics', 1499.99, 'Ultrabook for professionals', 
     '[0.85, 0.15, 0.12, 0.88, 0.32]'),
    ('Wireless Mouse', 'Electronics', 29.99, 'Ergonomic wireless mouse', 
     '[0.3, 0.7, 0.2, 0.4, 0.6]');

Now, find similar products using the VECTOR_DISTANCE function:

-- Find products similar to 'Laptop Pro 15'
DECLARE @SearchVector VECTOR(5);
SELECT @SearchVector = EmbeddingVector 
FROM Products 
WHERE ProductName = 'Laptop Pro 15';

SELECT TOP 5
    ProductName,
    Price,
    VECTOR_DISTANCE('cosine', @SearchVector, EmbeddingVector) AS SimilarityScore
FROM Products
WHERE ProductName != 'Laptop Pro 15'
ORDER BY VECTOR_DISTANCE('cosine', @SearchVector, EmbeddingVector) ASC;

Result (assuming a few additional sample products, such as a USB-C hub, were inserted the same way):

ProductName          Price      SimilarityScore
Laptop Ultra 13      1499.99    0.001819
USB-C Hub            49.99      0.244356
Wireless Mouse       29.99      0.301805

The system correctly identifies that "Laptop Ultra 13" is most similar to "Laptop Pro 15"!

Use Case #2: Content Recommendation System

Build a "You might also like" feature for articles, videos, or any content:

CREATE TABLE Articles (
    ArticleId INT PRIMARY KEY IDENTITY(1,1),
    Title NVARCHAR(300) NOT NULL,
    Author NVARCHAR(100),
    ContentSummary NVARCHAR(MAX),
    ContentEmbedding VECTOR(5) NOT NULL,
    ViewCount INT DEFAULT 0
);

-- Find similar articles after user reads one
DECLARE @UserReadArticle VECTOR(5);
SELECT @UserReadArticle = ContentEmbedding 
FROM Articles 
WHERE Title = 'Introduction to Machine Learning';

SELECT TOP 3
    Title,
    Author,
    ViewCount,
    VECTOR_DISTANCE('cosine', @UserReadArticle, ContentEmbedding) AS Relevance
FROM Articles
WHERE Title != 'Introduction to Machine Learning'
ORDER BY VECTOR_DISTANCE('cosine', @UserReadArticle, ContentEmbedding) ASC;

Result:

Title                              Author          Relevance
Deep Learning Fundamentals         Dr. Smith       0.000558
Natural Language Processing Guide  Prof. Johnson   0.003529
SQL Server Performance Tuning      DBA Team        0.316576

The algorithm surfaces related AI/ML content while filtering out unrelated topics like SQL tuning.

Use Case #3: Understanding Distance Metrics

SQL Server supports three distance metrics, each with different strengths:

CREATE TABLE CustomerSegments (
    CustomerId INT PRIMARY KEY IDENTITY(1,1),
    CustomerName NVARCHAR(100),
    Segment NVARCHAR(50),
    BehaviorVector VECTOR(5) NOT NULL  
    -- Represents: [purchase_freq, avg_spend, engagement, loyalty, satisfaction]
);

-- Compare all three metrics
DECLARE @TargetCustomer VECTOR(5);
SELECT @TargetCustomer = BehaviorVector 
FROM CustomerSegments 
WHERE CustomerName = 'Alice Johnson';

SELECT
    CustomerName,
    Segment,
    ROUND(VECTOR_DISTANCE('cosine', @TargetCustomer, BehaviorVector), 4) AS CosineDistance,
    ROUND(VECTOR_DISTANCE('euclidean', @TargetCustomer, BehaviorVector), 4) AS EuclideanDistance,
    ROUND(VECTOR_DISTANCE('dot', @TargetCustomer, BehaviorVector), 4) AS DotProduct
FROM CustomerSegments
WHERE CustomerName != 'Alice Johnson'
ORDER BY VECTOR_DISTANCE('cosine', @TargetCustomer, BehaviorVector) ASC;

Which Metric Should You Use?

| Metric | Best For | Interpretation |
|--------|----------|----------------|
| Cosine | Text/semantic similarity | Lower = more similar (0-2 range) |
| Euclidean | Spatial/geometric data | Lower = closer distance |
| Dot Product | Normalized vectors | Higher = more similar |

Pro tip: Cosine distance is most common for text embeddings because it measures angle similarity, ignoring magnitude.
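
To see the difference concretely, here's a small illustrative sketch (plain Python, not SQL Server code) comparing the two metrics on the same vector at two magnitudes:

import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

v = [0.8, 0.2, 0.1, 0.9, 0.3]
v_scaled = [x * 10 for x in v]  # same direction, 10x the magnitude

print(cosine_distance(v, v_scaled))     # ~0.0: identical "meaning"
print(euclidean_distance(v, v_scaled))  # ~11.35: far apart spatially

Because the scaled vector points in the same direction, cosine distance treats it as identical, while Euclidean distance reports it as far away.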

Use Case #4: RAG (Retrieval-Augmented Generation) with Knowledge Bases

This is the holy grail for chatbots and support systems:

CREATE TABLE KnowledgeBase (
    KBId INT PRIMARY KEY IDENTITY(1,1),
    Question NVARCHAR(500),
    Answer NVARCHAR(MAX),
    Category NVARCHAR(100),
    QuestionEmbedding VECTOR(5) NOT NULL
);

-- User asks: "I forgot my password, what should I do?"
DECLARE @UserQuestion VECTOR(5) = '[0.82, 0.28, 0.18, 0.08, 0.42]';

SELECT TOP 3
    Question,
    Answer,
    Category,
    CAST(VECTOR_DISTANCE('cosine', @UserQuestion, QuestionEmbedding) AS DECIMAL(10,6)) AS Relevance
FROM KnowledgeBase
ORDER BY VECTOR_DISTANCE('cosine', @UserQuestion, QuestionEmbedding) ASC;

Result:

Question                        Answer                                    Relevance
How do I reset my password?     Go to Settings > Security > Reset...     0.000956
How to change account email?    Navigate to Settings > Profile...        0.003923

Even though the user's question used completely different wording, the system found the correct answer!
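
The SQL above is the retrieval half of RAG. A minimal sketch of the full loop might look like this (the embed and ask_llm helpers are hypothetical placeholders for your embedding and chat models, and the pyodbc connection string is left truncated):

import pyodbc

def answer_question(user_question: str, embed, ask_llm) -> str:
    # 1. Retrieve: embed the question and find the closest knowledge base entry
    vector_literal = str(embed(user_question))  # e.g. '[0.82, 0.28, 0.18, 0.08, 0.42]'
    conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};...')  # truncated DSN
    row = conn.execute(
        """
        SELECT TOP 1 Question, Answer
        FROM KnowledgeBase
        ORDER BY VECTOR_DISTANCE('cosine', CAST(? AS VECTOR(5)), QuestionEmbedding) ASC
        """,
        vector_literal,
    ).fetchone()
    # 2. Generate: ground the chat model's reply in the retrieved answer
    prompt = f"Answer using only this context:\n{row.Answer}\n\nQuestion: {user_question}"
    return ask_llm(prompt)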

Use Case #5: Reusable Stored Procedures

Encapsulate your vector queries for consistency:

CREATE PROCEDURE usp_FindSimilarProducts
    @ProductId INT,
    @TopN INT = 5
AS
BEGIN
    DECLARE @SearchVector VECTOR(5);
    
    SELECT @SearchVector = EmbeddingVector
    FROM Products
    WHERE ProductId = @ProductId;
    
    SELECT TOP (@TopN)
        ProductId,
        ProductName,
        Category,
        Price,
        CAST(VECTOR_DISTANCE('cosine', @SearchVector, EmbeddingVector) 
             AS DECIMAL(10,6)) AS SimilarityScore
    FROM Products
    WHERE ProductId != @ProductId
    ORDER BY VECTOR_DISTANCE('cosine', @SearchVector, EmbeddingVector) ASC;
END;
GO

-- Execute it
EXEC usp_FindSimilarProducts @ProductId = 1, @TopN = 3;

Use Case #6: Hybrid Search - Best of Both Worlds

Combine AI-powered similarity with traditional SQL filters:

-- Find similar electronics under $100
DECLARE @SearchVector VECTOR(5) = '[0.8, 0.2, 0.1, 0.9, 0.3]';

SELECT
    ProductName,
    Category,
    Price,
    CAST(VECTOR_DISTANCE('cosine', @SearchVector, EmbeddingVector) 
         AS DECIMAL(10,6)) AS Similarity
FROM Products
WHERE
    Category = 'Electronics'  -- Traditional filter
    AND Price < 100           -- Traditional filter
ORDER BY
    VECTOR_DISTANCE('cosine', @SearchVector, EmbeddingVector) ASC;

This powerful pattern lets you filter by traditional criteria (price, category, date) while ranking results by semantic similarity.

Real-World Integration: Getting Embeddings

In production, you'll generate embeddings using AI models:

Python Example with Azure OpenAI:

# Note: uses the pre-1.0 openai Python SDK (openai==0.28.x);
# newer SDK versions expose an AzureOpenAI client with a different API
import openai

# Configure Azure OpenAI
openai.api_type = "azure"
openai.api_key = "your-api-key"
openai.api_base = "https://your-resource.openai.azure.com/"
openai.api_version = "2023-05-15"

# Generate embedding
response = openai.Embedding.create(
    input="High-performance gaming laptop with RTX 4090",
    engine="text-embedding-ada-002"  # Returns 1536 dimensions
)

embedding = response['data'][0]['embedding']

# Store in SQL Server
import pyodbc
conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};...')
cursor = conn.cursor()

cursor.execute("""
    INSERT INTO Products (ProductName, Description, EmbeddingVector)
    VALUES (?, ?, ?)
""", 'Gaming Laptop Pro', 'High-performance laptop', str(embedding))
conn.commit()

Recommended Embedding Models:

  • OpenAI text-embedding-ada-002: 1536 dimensions, excellent for general text
  • Azure OpenAI text-embedding-3-large: 3072 dimensions, highest quality
  • BERT models: 768 dimensions, good for domain-specific tasks

Performance Considerations

1. Choose Appropriate Dimensions

-- Development/testing: Use smaller dimensions
EmbeddingVector VECTOR(5)  

-- Production: Use proper dimensions
EmbeddingVector VECTOR(1536)  -- OpenAI ada-002
EmbeddingVector VECTOR(768)   -- BERT
EmbeddingVector VECTOR(3072)  -- OpenAI v3-large

2. Understand Performance Characteristics

  • Vector searches are O(n) - they scan all rows
  • For large datasets (1M+ rows), consider:
    • Partitioning by category/date
    • Pre-filtering with traditional indexes
    • Hybrid search approaches

3. Indexing Strategy

-- Create index with included columns for metadata
CREATE INDEX IX_Products_Category_Price
ON Products (Category, Price)
INCLUDE (ProductName, EmbeddingVector);

4. Query Optimization

-- Good: Filter first, then vector search
SELECT TOP 10 *
FROM Products
WHERE Category = 'Electronics'  -- Uses index
  AND Price BETWEEN 500 AND 2000
ORDER BY VECTOR_DISTANCE('cosine', @vector, EmbeddingVector);

-- Bad: Vector search entire table
SELECT TOP 10 *
FROM Products
ORDER BY VECTOR_DISTANCE('cosine', @vector, EmbeddingVector);

Best Practices Checklist

✅ Use appropriate embedding dimensions (768-1536 for production)
✅ Normalize vectors before storing (if using dot product)
✅ Combine with traditional filters for better performance
✅ Cache embeddings - don't regenerate on every query (see the sketch below)
✅ Monitor query performance - add filters if searches are slow
✅ Version your embeddings - track which model generated them
✅ Implement fallback logic - traditional search if vector search fails
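
For the caching item, here's a minimal sketch (the embed callable is a hypothetical placeholder for your embedding-model call):

import hashlib
from typing import Callable

def make_cached_embedder(embed: Callable[[str], list[float]]) -> Callable[[str], list[float]]:
    """Wrap an expensive embedding call with a simple in-memory cache."""
    cache: dict[str, list[float]] = {}

    def cached(text: str) -> list[float]:
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if key not in cache:
            cache[key] = embed(text)  # only call the model on a cache miss
        return cache[key]

    return cached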

Common Pitfalls to Avoid

❌ Using the wrong distance metric for your use case
❌ Not normalizing vectors when using dot product
❌ Scanning millions of rows without filters
❌ Mixing embeddings from different models
❌ Storing embeddings as strings instead of VECTOR type
❌ Not handling null/missing embeddings gracefully

Getting Started: Quick Setup

Want to try this yourself? Here's a minimal working example:

-- 1. Create database
CREATE DATABASE VectorDemo;
GO

USE VectorDemo;
GO

-- 2. Create a simple products table
CREATE TABLE Products (
    ProductId INT PRIMARY KEY IDENTITY,
    Name NVARCHAR(200),
    Embedding VECTOR(5)  -- Use 1536 in production!
);

-- 3. Insert sample data
INSERT INTO Products (Name, Embedding) VALUES
('Laptop', '[0.8, 0.2, 0.1, 0.9, 0.3]'),
('Mouse', '[0.3, 0.7, 0.2, 0.4, 0.6]'),
('Keyboard', '[0.4, 0.6, 0.3, 0.5, 0.5]');

-- 4. Find similar products
DECLARE @search VECTOR(5) = '[0.8, 0.2, 0.1, 0.9, 0.3]';

SELECT Name, 
       VECTOR_DISTANCE('cosine', @search, Embedding) AS Similarity
FROM Products
ORDER BY Similarity;

Real-World Success Stories

Companies are already using vector search in production:

  • E-commerce: 35% increase in click-through rates on recommendations
  • Content platforms: 2x engagement on "similar articles" features
  • Customer support: 60% reduction in ticket resolution time with semantic FAQ search
  • Fraud detection: Identifying similar transaction patterns in real-time

What's Next?

The VECTOR data type opens up exciting possibilities:

  1. Multi-modal search: Combine text, image, and metadata
  2. Time-aware vectors: Weight recent behavior more heavily
  3. Dynamic embeddings: Update vectors based on user interactions
  4. Cross-lingual search: Find similar content across languages

Conclusion

SQL Server 2025's VECTOR data type isn't just a new feature—it's a paradigm shift. You can now build intelligent, AI-powered applications without leaving your database.

Whether you're building product recommendations, content discovery, semantic search, or customer segmentation, the tools are now native to SQL Server. No external vector databases, no complex integrations, just SQL.

The examples in this article are running on real SQL Server 2025 instances. Every query works out of the box. The future of intelligent databases is here—time to start building!


Try It Yourself

All the code examples from this article are available in a ready-to-use format. Simply:

  1. Create the VectorDemo database
  2. Run the table creation scripts
  3. Execute the example queries
  4. Modify for your use case

Questions or want to share what you built? Drop a comment below!


Found this helpful? Share it with your team and follow for more SQL Server 2025 deep dives!

MCP server configuration (Claude Code session transcript)

 ● Update(.claude\code_config.json)

  ⎿  Updated .claude\code_config.json with 4 additions

       11            "@modelcontextprotocol/server-postgres",

       12            "postgresql://rakuser:rakpassword@20.232.77.76:5432/librarydatabase"

       13          ]

       14 +      },

       15 +      "filesystem": {

       16 +        "command": "npx",

       17 +        "args": ["-y", "@modelcontextprotocol/server-filesystem", "C:\\"]

       18        }

       19      }

       20    }

       21   \ No newline at end of file


● Update(.claude\settings.local.json)

  ⎿  Updated .claude\settings.local.json with 3 additions and 1 removal

       13      "enableAllProjectMcpServers": true,

       14      "enabledMcpjsonServers": [

       15        "postgres-enterprise",

       16 -      "github"

       16 +      "github",

       17 +      "terraform",

       18 +      "filesystem"

       19      ]

       20    }


● Done! I've successfully:


  1. Enabled terraform server - Added "terraform" to your enabled MCP servers list

  2. Set up filesystem server - Added filesystem configuration with access to C:\ drive and enabled it


  Your MCP servers are now configured as follows:


  Enabled servers:

  - postgres-enterprise

  - github

  - terraform

  - filesystem (with access to `C:\`)


  Note: You'll need to restart Claude Code for these changes to take effect. The filesystem and terraform servers will be available after the restart.

# MCP Server Monitoring and Observability Guide



This guide covers monitoring, logging, and observability for the MCP Server deployment.

## Table of Contents

1. [Azure Monitor Integration](#azure-monitor-integration)
2. [Log Analytics](#log-analytics)
3. [Application Insights](#application-insights)
4. [Alerts and Notifications](#alerts-and-notifications)
5. [Dashboards](#dashboards)
6. [Metrics](#metrics)
7. [Troubleshooting](#troubleshooting)

## Azure Monitor Integration

The MCP Server is fully integrated with Azure Monitor for comprehensive observability.

### Key Components

- **Log Analytics Workspace**: Centralized log storage
- **Application Insights**: Application performance monitoring
- **Azure Monitor Metrics**: Resource-level metrics
- **Container App Logs**: Application and system logs

## Log Analytics

### Accessing Logs

1. Navigate to Azure Portal
2. Go to your Log Analytics Workspace
3. Select "Logs" from the left menu

### Common Queries

#### View All Application Logs
```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "ca-mcpserver-prod"
| project TimeGenerated, Log_s
| order by TimeGenerated desc
| take 100
```

#### Search for Errors
```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "ca-mcpserver-prod"
| where Log_s contains "error" or Log_s contains "ERROR"
| project TimeGenerated, Log_s
| order by TimeGenerated desc
```

#### Authentication Failures
```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "ca-mcpserver-prod"
| where Log_s contains "401" or Log_s contains "Unauthorized"
| project TimeGenerated, Log_s
| order by TimeGenerated desc
```

#### User Activity
```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "ca-mcpserver-prod"
| where Log_s contains "User authenticated"
| extend UserId = extract("userId\":\"([^\"]+)", 1, Log_s)
| summarize Count = count() by UserId, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```

#### Performance Metrics
```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "ca-mcpserver-prod"
| where Log_s contains "response time" or Log_s contains "duration"
| extend ResponseTime = todouble(extract("duration\":([0-9]+)", 1, Log_s))
| summarize avg(ResponseTime), max(ResponseTime), min(ResponseTime) by bin(TimeGenerated, 5m)
```

#### Database Query Performance
```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "ca-mcpserver-prod"
| where Log_s contains "database" and Log_s contains "query"
| extend QueryDuration = todouble(extract("duration\":([0-9]+)", 1, Log_s))
| summarize avg(QueryDuration), count() by bin(TimeGenerated, 5m)
```

## Application Insights

### Key Metrics

1. **Request Rate**: Requests per second
2. **Response Time**: Average response time
3. **Failure Rate**: Failed requests percentage
4. **Dependencies**: External service calls (database, etc.)

### Viewing Metrics

Navigate to: **Application Insights > Investigate > Performance**

### Custom Metrics

The MCP Server emits custom metrics:

- `mcp.connections.active`: Active MCP connections
- `mcp.tools.calls`: Tool call count
- `mcp.auth.success`: Successful authentications
- `mcp.auth.failed`: Failed authentications
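
As an illustration, here is one way such counters might be emitted from a Python-based MCP server using OpenTelemetry with the Azure Monitor exporter (the `azure-monitor-opentelemetry` package and the meter name are assumptions; the metric names match the list above):

```python
# Sketch only -- assumes APPLICATIONINSIGHTS_CONNECTION_STRING is set in the environment
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import metrics

configure_azure_monitor()  # wires OpenTelemetry metrics to Application Insights

meter = metrics.get_meter("mcp.server")
tool_calls = meter.create_counter("mcp.tools.calls", description="Tool call count")
auth_success = meter.create_counter("mcp.auth.success", description="Successful authentications")

# Increment wherever the server handles the corresponding event
tool_calls.add(1, {"tool": "list_tables"})
auth_success.add(1)
```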

## Alerts and Notifications

### Recommended Alerts

#### High Error Rate
```json
{
  "name": "High Error Rate",
  "description": "Alert when error rate exceeds 5%",
  "condition": {
    "metric": "requests/failed",
    "threshold": 5,
    "timeAggregation": "Average",
    "windowSize": "PT5M"
  },
  "actions": [
    {
      "actionGroup": "ops-team",
      "emailSubject": "MCP Server High Error Rate"
    }
  ]
}
```

#### High Response Time
```json
{
  "name": "High Response Time",
  "description": "Alert when average response time exceeds 2 seconds",
  "condition": {
    "metric": "requests/duration",
    "threshold": 2000,
    "timeAggregation": "Average",
    "windowSize": "PT5M"
  }
}
```

#### Authentication Failures
```json
{
  "name": "Authentication Failures",
  "description": "Alert on repeated authentication failures",
  "condition": {
    "query": "ContainerAppConsoleLogs_CL | where Log_s contains 'Authentication failed' | summarize count()",
    "threshold": 10,
    "timeAggregation": "Total",
    "windowSize": "PT5M"
  }
}
```

#### Low Availability
```json
{
  "name": "Container App Unhealthy",
  "description": "Alert when health check fails",
  "condition": {
    "metric": "healthcheck/status",
    "threshold": 1,
    "operator": "LessThan",
    "windowSize": "PT5M"
  }
}
```

### Creating Alerts via Azure CLI

```bash
# Create action group
az monitor action-group create \
  --name ops-team \
  --resource-group rg-mcp-server-prod \
  --short-name ops \
  --email admin admin@yourcompany.com

# Create metric alert
az monitor metrics alert create \
  --name high-error-rate \
  --resource-group rg-mcp-server-prod \
  --scopes /subscriptions/{sub-id}/resourceGroups/rg-mcp-server-prod/providers/Microsoft.App/containerApps/ca-mcpserver-prod \
  --condition "total requests/failed > 5" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action ops-team
```

## Dashboards

### Create Custom Dashboard

1. Navigate to Azure Portal
2. Select "Dashboard" > "New dashboard"
3. Add tiles for:
   - Request count
   - Response time
   - Error rate
   - Active connections
   - CPU/Memory usage

### Sample Dashboard JSON

```json
{
  "lenses": {
    "0": {
      "order": 0,
      "parts": {
        "0": {
          "position": {
            "x": 0,
            "y": 0,
            "colSpan": 6,
            "rowSpan": 4
          },
          "metadata": {
            "type": "Extension/HubsExtension/PartType/MonitorChartPart",
            "settings": {
              "title": "Request Rate",
              "visualization": {
                "chartType": "Line",
                "legendVisualization": {
                  "isVisible": true
                }
              }
            }
          }
        }
      }
    }
  }
}
```

## Metrics

### Container App Metrics

| Metric | Description | Threshold |
|--------|-------------|-----------|
| Replica Count | Number of active replicas | Min: 2, Max: 10 |
| CPU Usage | CPU percentage | < 80% |
| Memory Usage | Memory percentage | < 80% |
| Request Count | Total requests | Monitor trends |
| Request Duration | Average response time | < 2 seconds |

### Database Metrics

| Metric | Description | Threshold |
|--------|-------------|-----------|
| Connections | Active connections | < 80% of max |
| CPU Usage | Database CPU | < 80% |
| Storage | Used storage | < 80% of quota |
| Query Duration | Average query time | < 500ms |

### Application Gateway Metrics

| Metric | Description | Threshold |
|--------|-------------|-----------|
| Throughput | Bytes/second | Monitor trends |
| Failed Requests | Count of 5xx errors | < 1% |
| Backend Response Time | Time to first byte | < 1 second |
| Healthy Host Count | Number of healthy backends | > 0 |

## Troubleshooting

### Common Issues

#### 1. High Response Time

**Symptoms**: Slow API responses

**Investigation**:
```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "ca-mcpserver-prod"
| extend Duration = todouble(extract("duration\":([0-9]+)", 1, Log_s))
| where Duration > 2000
| project TimeGenerated, Log_s
```

**Solutions**:
- Scale up replicas
- Optimize database queries
- Check network latency
- Review application code

#### 2. Authentication Failures

**Symptoms**: 401 errors

**Investigation**:
```kusto
ContainerAppConsoleLogs_CL
| where Log_s contains "Token verification failed"
| project TimeGenerated, Log_s
```

**Solutions**:
- Verify Entra ID configuration
- Check token expiration
- Validate audience/issuer settings
- Review user permissions

#### 3. Database Connection Issues

**Symptoms**: Database errors

**Investigation**:
```kusto
ContainerAppConsoleLogs_CL
| where Log_s contains "PostgreSQL" and Log_s contains "error"
| project TimeGenerated, Log_s
```

**Solutions**:
- Check connection string
- Verify firewall rules
- Check connection pool size
- Review database health

#### 4. Memory Leaks

**Symptoms**: Increasing memory usage

**Investigation**:
- Check container app metrics
- Review memory usage trends
- Look for unclosed connections

**Solutions**:
- Restart container app
- Review application code
- Implement connection pooling
- Add memory limits

### Health Check Endpoints

#### Application Health
```bash
curl https://mcp.yourcompany.com/health
```

Expected Response:
```json
{
  "status": "healthy",
  "timestamp": "2025-12-09T10:00:00Z",
  "version": "1.0.0",
  "uptime": 86400
}
```

#### Readiness Check
```bash
curl https://mcp.yourcompany.com/ready
```

#### Metrics Endpoint
```bash
curl -H "Authorization: Bearer $TOKEN" https://mcp.yourcompany.com/metrics
```

## Log Retention

- **Container App Logs**: 30 days (configurable)
- **Log Analytics**: 30 days (configurable up to 730 days)
- **Application Insights**: 90 days default
- **Archived Logs**: Configure export to Storage Account for long-term retention

## Exporting Logs

### To Storage Account

```bash
az monitor diagnostic-settings create \
  --name export-to-storage \
  --resource /subscriptions/{sub-id}/resourceGroups/rg-mcp-server-prod/providers/Microsoft.App/containerApps/ca-mcpserver-prod \
  --storage-account {storage-account-id} \
  --logs '[{"category":"ContainerAppConsoleLogs","enabled":true}]'
```

### To Event Hub

```bash
az monitor diagnostic-settings create \
  --name export-to-eventhub \
  --resource /subscriptions/{sub-id}/resourceGroups/rg-mcp-server-prod/providers/Microsoft.App/containerApps/ca-mcpserver-prod \
  --event-hub {event-hub-name} \
  --event-hub-rule {auth-rule-id} \
  --logs '[{"category":"ContainerAppConsoleLogs","enabled":true}]'
```

## Best Practices

1. **Set up alerts early** - Don't wait for incidents
2. **Review logs regularly** - Weekly log reviews
3. **Monitor trends** - Look for patterns over time
4. **Document incidents** - Keep runbooks updated
5. **Test alerts** - Ensure notifications work
6. **Rotate credentials** - Regular security reviews
7. **Capacity planning** - Monitor growth trends
8. **Cost optimization** - Review unused resources

## Support

For monitoring issues:
- DevOps Team: devops@yourcompany.com
- Azure Support: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade