Budget-Management-Backend-API

Deployment Guide - Budget Management API

Table of Contents

  1. Overview
  2. Deployment Strategies
  3. Blue-Green Deployment
  4. Canary Deployment
  5. Rolling Deployment
  6. Jenkins Pipeline
  7. Kubernetes Configuration
  8. Monitoring and Observability
  9. Rollback Procedures
  10. Troubleshooting

Overview

This guide provides comprehensive instructions for deploying the Budget Management API to production. The application supports three deployment strategies:

  • Rolling: gradually replaces old pods with new ones
  • Blue-Green: runs two identical environments and switches traffic between them
  • Canary: routes a small share of traffic to the new version before full rollout

Deployment Strategies

Comparison

Strategy     Downtime   Resource Usage   Rollback Speed   Risk Level   Use Case
Rolling      Minimal    Low              Medium           Medium       Regular updates
Blue-Green   Zero       High (2x)        Instant          Low          Critical releases
Canary       Zero       Medium           Fast             Very Low     Testing in production

When to Use Each Strategy

Rolling Deployment

Use for routine, low-risk updates where a brief mix of old and new versions is acceptable and resource overhead must stay low.

Blue-Green Deployment

Use for critical releases that need zero downtime and an instant rollback path, accepting the cost of running two full environments.

Canary Deployment

Use for high-risk changes you want to validate against a small slice of real production traffic before full rollout.

Blue-Green Deployment

Architecture

Blue-Green deployment maintains two identical production environments:

  • Blue: the environment currently serving production traffic
  • Green: the idle environment where the new version is deployed

Traffic is switched from Blue to Green only after validation passes.

Kubernetes Manifests

# Backend Blue Deployment
kubernetes/backend-deployment-blue.yaml

# Backend Green Deployment
kubernetes/backend-deployment-green.yaml

# Frontend Blue Deployment
kubernetes/frontend-deployment-blue.yaml

# Frontend Green Deployment
kubernetes/frontend-deployment-green.yaml

# Services for Blue-Green
kubernetes/backend-service-blue-green.yaml
kubernetes/frontend-service-blue-green.yaml
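
The traffic switch works because the Service selects pods by a version label. A minimal sketch of what backend-service-blue-green.yaml might contain (names and ports are assumptions based on the commands below):

apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: production
spec:
  selector:
    app: backend
    version: blue   # patched to "green" to switch traffic
  ports:
    - port: 3000
      targetPort: 3000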

Deployment Process

  1. Deploy to Inactive Environment
    # If blue is active, deploy to green
    kubectl apply -f kubernetes/backend-deployment-green.yaml
    kubectl apply -f kubernetes/frontend-deployment-green.yaml
    
  2. Wait for Rollout
    kubectl rollout status deployment/backend-deployment-green -n production
    kubectl rollout status deployment/frontend-deployment-green -n production
    
  3. Validate New Version
    # Test the green environment using service-green endpoints
    curl http://backend-service-green:3000/health
    
  4. Switch Traffic
    # Update service selector to point to green
    kubectl patch service backend-service -n production -p '{"spec":{"selector":{"version":"green"}}}'
    kubectl patch service frontend-service -n production -p '{"spec":{"selector":{"version":"green"}}}'
    
  5. Monitor and Verify
    # Monitor logs and metrics
    kubectl logs -f deployment/backend-deployment-green -n production
    
  6. Scale Down Old Version
    # After successful validation
    kubectl scale deployment/backend-deployment-blue --replicas=0 -n production
    kubectl scale deployment/frontend-deployment-blue --replicas=0 -n production
    

Rollback

Instant rollback by switching service selector:

kubectl patch service backend-service -n production -p '{"spec":{"selector":{"version":"blue"}}}'
kubectl patch service frontend-service -n production -p '{"spec":{"selector":{"version":"blue"}}}'

Canary Deployment

Architecture

Canary deployment runs the new (canary) version alongside the stable version and gradually shifts traffic to it.

The traffic split is controlled by replica count: with 9 stable replicas and 1 canary replica behind the same Service, the canary receives roughly 10% of traffic.
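
This works because the Service selector matches only the app label, not the version label, so traffic is spread across stable and canary pods roughly in proportion to their replica counts. A minimal sketch of what backend-service-canary.yaml might contain (names assumed):

apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: production
spec:
  selector:
    app: backend   # no version label: stable and canary pods both match
  ports:
    - port: 3000
      targetPort: 3000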

Kubernetes Manifests

# Backend Stable Deployment (9 replicas = 90% traffic)
kubernetes/backend-deployment-canary-stable.yaml

# Backend Canary Deployment (1 replica = 10% traffic)
kubernetes/backend-deployment-canary.yaml

# Frontend Stable Deployment
kubernetes/frontend-deployment-canary-stable.yaml

# Frontend Canary Deployment
kubernetes/frontend-deployment-canary.yaml

# Services for Canary
kubernetes/backend-service-canary.yaml
kubernetes/frontend-service-canary.yaml

Deployment Process

  1. Deploy Canary Version
    kubectl apply -f kubernetes/backend-deployment-canary.yaml
    kubectl apply -f kubernetes/frontend-deployment-canary.yaml
    
  2. Scale Canary (10% traffic)
    kubectl scale deployment/backend-deployment-canary --replicas=1 -n production
    kubectl scale deployment/frontend-deployment-canary --replicas=1 -n production
    kubectl scale deployment/backend-deployment-stable --replicas=9 -n production
    kubectl scale deployment/frontend-deployment-stable --replicas=9 -n production
    
  3. Monitor Canary
    # Monitor canary metrics for 5-10 minutes
    kubectl logs -f deployment/backend-deployment-canary -n production
    
    # Check Prometheus metrics
    # Error rate, latency, success rate
    
  4. Gradually Increase Traffic
    # Increase to 50% traffic
    kubectl scale deployment/backend-deployment-canary --replicas=5 -n production
    kubectl scale deployment/backend-deployment-stable --replicas=5 -n production
    
  5. Full Promotion
    # Update stable deployment with canary image
    kubectl set image deployment/backend-deployment-stable \
      backend=your-registry/budget-management-backend:new-version
    
    kubectl scale deployment/backend-deployment-stable --replicas=10 -n production
    kubectl scale deployment/backend-deployment-canary --replicas=0 -n production
    

Canary Analysis Metrics

Monitor these key metrics during canary deployment:

  • Error rate (HTTP 5xx) versus stable
  • Latency (p95/p99) versus stable
  • Request success rate
  • Pod CPU and memory usage

Rollback

# Scale down canary immediately
kubectl scale deployment/backend-deployment-canary --replicas=0 -n production
kubectl scale deployment/frontend-deployment-canary --replicas=0 -n production

# Scale up stable to full capacity
kubectl scale deployment/backend-deployment-stable --replicas=10 -n production
kubectl scale deployment/frontend-deployment-stable --replicas=10 -n production

Rolling Deployment

Architecture

Rolling deployment gradually replaces old pods with new ones, keeping the service available throughout the update.
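
The rollout behaviour is governed by the Deployment's update strategy. A typical configuration fragment (values are assumptions; tune maxSurge and maxUnavailable to your availability needs):

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # create at most one extra pod during the update
      maxUnavailable: 0  # never drop below the desired replica count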

Deployment Process

  1. Update Deployment
    kubectl set image deployment/backend-deployment \
      backend=your-registry/budget-management-backend:new-version
    
  2. Monitor Rollout
    kubectl rollout status deployment/backend-deployment -n production
    
  3. Verify
    kubectl get pods -n production -w
    

Rollback

kubectl rollout undo deployment/backend-deployment -n production
kubectl rollout status deployment/backend-deployment -n production

Jenkins Pipeline

Using the Pipeline

The Jenkins pipeline supports all three deployment strategies, selected and tuned via build parameters.

Parameters

  • DEPLOYMENT_STRATEGY: rolling, blue-green, or canary
  • ENVIRONMENT: target environment (e.g. production)
  • CANARY_PERCENTAGE: share of traffic routed to the canary (canary strategy only)
  • RUN_SMOKE_TESTS: run smoke tests after deployment
  • AUTO_ROLLBACK: roll back automatically if post-deployment validation fails

Example Usage

  1. Blue-Green Deployment
    DEPLOYMENT_STRATEGY=blue-green
    ENVIRONMENT=production
    RUN_SMOKE_TESTS=true
    AUTO_ROLLBACK=true
    
  2. Canary Deployment (10% traffic)
    DEPLOYMENT_STRATEGY=canary
    ENVIRONMENT=production
    CANARY_PERCENTAGE=10
    RUN_SMOKE_TESTS=true
    AUTO_ROLLBACK=true
    
  3. Rolling Deployment
    DEPLOYMENT_STRATEGY=rolling
    ENVIRONMENT=production
    RUN_SMOKE_TESTS=true
    AUTO_ROLLBACK=true
    

Pipeline Stages

  1. Checkout: Clone repository
  2. Environment Setup: Configure Node.js
  3. Install Dependencies: npm ci
  4. Code Quality & Security: Linting, security audit
  5. Run Tests: Unit and integration tests
  6. Build: Build application
  7. Build Docker Images: Create and push Docker images
  8. Deploy: Execute chosen deployment strategy
  9. Smoke Tests: Validate deployment
  10. Health Check: Verify application health
  11. Performance Tests: Run performance validation

Notifications

The pipeline sends Slack notifications on:

  • Deployment success
  • Deployment failure
  • Automatic rollback

Configure in Jenkinsfile:

environment {
    SLACK_CHANNEL = '#deployments'
    SLACK_CREDENTIALS_ID = 'slack-webhook'
}

Kubernetes Configuration

Production-Ready Features

1. Health Checks

All deployments include:

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 5

2. Resource Management

resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

3. Horizontal Pod Autoscaler (HPA)

Automatically scales based on CPU/Memory:

# Apply HPA
kubectl apply -f kubernetes/hpa.yaml

# View HPA status
kubectl get hpa -n production

Configuration:
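
A minimal sketch of what kubernetes/hpa.yaml might contain (replica bounds and utilization targets are assumptions; tune them to your workload):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # assumed CPU target
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # assumed memory target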

4. Pod Disruption Budget (PDB)

Ensures minimum availability during disruptions:

kubectl apply -f kubernetes/pdb.yaml

Maintains a minimum of 2 pods available during:

  • Voluntary disruptions (e.g. kubectl drain)
  • Node maintenance and cluster upgrades
  • Cluster autoscaler scale-down events
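
A minimal sketch of what kubernetes/pdb.yaml might contain (the selector label is an assumption):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: backend-pdb
  namespace: production
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: backend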

5. Network Policies

Restricts network traffic for security:

kubectl apply -f kubernetes/network-policy.yaml
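
A sketch of the kind of policy kubernetes/network-policy.yaml might define, allowing only frontend pods to reach the backend (labels and ports are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 3000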

6. Security

All pods run with security best practices:

  • Non-root user
  • Read-only root filesystem
  • No privilege escalation
  • All Linux capabilities dropped
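
A typical securityContext implementing these settings, as a pod-template fragment (a sketch of common practice, not necessarily the exact manifest contents):

securityContext:                 # pod-level settings
  runAsNonRoot: true
  runAsUser: 1000
containers:
  - name: backend
    securityContext:             # container-level settings
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
          - ALL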

Monitoring and Observability

Prometheus Integration

ServiceMonitor configured for metrics scraping:

kubectl apply -f kubernetes/servicemonitor.yaml

Metrics endpoint: /metrics on port 3000 (backend) and 3001 (frontend)
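
A minimal sketch of what kubernetes/servicemonitor.yaml might contain (assumes the Prometheus Operator is installed, the Service carries the app: backend label, and it exposes a named port for metrics):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backend-servicemonitor
  namespace: production
spec:
  selector:
    matchLabels:
      app: backend
  endpoints:
    - port: http        # assumed name of the Service port
      path: /metrics
      interval: 30s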

Grafana Dashboards

Create dashboards for:

  • Request rate, error rate, and latency
  • Pod CPU and memory usage
  • Replica counts and HPA activity
  • Deployment history and rollbacks

Logging

Centralized logging with the ELK stack:

  • Elasticsearch: log storage and indexing
  • Logstash (or a collector such as Fluentd): log ingestion and processing
  • Kibana: log search and visualization

Alerts

Configure alerts for:

  • High error rate (5xx responses)
  • High latency (p95/p99)
  • Pod restarts and CrashLoopBackOff
  • CPU and memory saturation
  • Failed deployments
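
With the Prometheus Operator, alerts can be declared as a PrometheusRule. A sketch for the error-rate alert (the metric name http_requests_total is an assumption about the application's instrumentation):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: backend-alerts
  namespace: production
spec:
  groups:
    - name: backend.rules
      rules:
        - alert: HighErrorRate
          expr: |
            sum(rate(http_requests_total{status=~"5.."}[5m]))
              / sum(rate(http_requests_total[5m])) > 0.05
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: Backend 5xx error rate above 5% for 5 minutes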

Rollback Procedures

Automatic Rollback

Enabled via Jenkins parameter AUTO_ROLLBACK=true

Triggers on:

  • Failed smoke tests
  • Failed health checks
  • Failed performance validation

Manual Rollback

Blue-Green

# Switch back to previous version
kubectl patch service backend-service -n production -p '{"spec":{"selector":{"version":"blue"}}}'
kubectl patch service frontend-service -n production -p '{"spec":{"selector":{"version":"blue"}}}'

Canary

# Remove canary traffic
kubectl scale deployment/backend-deployment-canary --replicas=0 -n production
kubectl scale deployment/backend-deployment-stable --replicas=10 -n production

Rolling

# Rollback to previous revision
kubectl rollout undo deployment/backend-deployment -n production

# Rollback to specific revision
kubectl rollout undo deployment/backend-deployment --to-revision=3 -n production

Rollback Validation

After rollback:

  1. Verify pods are running
  2. Check application health endpoints
  3. Monitor error rates
  4. Review application logs
  5. Validate functionality

Troubleshooting

Common Issues

1. ImagePullBackOff

Symptoms: Pods stuck in ImagePullBackOff state

Solution:

# Check image exists in registry
docker pull your-registry/budget-management-backend:tag

# Verify registry credentials
kubectl get secret docker-registry-credentials -n production

# Check pod events
kubectl describe pod <pod-name> -n production

2. CrashLoopBackOff

Symptoms: Pods constantly restarting

Solution:

# Check pod logs
kubectl logs <pod-name> -n production --previous

# Check resource limits
kubectl describe pod <pod-name> -n production

# Verify environment variables
kubectl get configmap budget-management-config -n production -o yaml

3. Service Not Responding

Symptoms: Service endpoints returning errors

Solution:

# Check service endpoints
kubectl get endpoints backend-service -n production

# Verify pod readiness
kubectl get pods -n production -l app=backend

# Check pod logs
kubectl logs -l app=backend -n production --tail=100

4. Deployment Stuck

Symptoms: Deployment not progressing

Solution:

# Check rollout status
kubectl rollout status deployment/backend-deployment -n production

# View rollout history
kubectl rollout history deployment/backend-deployment -n production

# Check events
kubectl get events -n production --sort-by='.lastTimestamp'

Debug Commands

# Get detailed pod information
kubectl describe pod <pod-name> -n production

# Execute command in pod
kubectl exec -it <pod-name> -n production -- /bin/sh

# View pod logs (live)
kubectl logs -f <pod-name> -n production

# View logs from all pods in deployment
kubectl logs -f deployment/backend-deployment -n production

# Check resource usage
kubectl top pods -n production
kubectl top nodes

# View HPA metrics
kubectl get hpa -n production -w

# Check network connectivity
kubectl run -it --rm debug --image=busybox --restart=Never -- sh

Performance Issues

  1. High Response Time
    • Check HPA scaling
    • Review resource limits
    • Analyze database queries
    • Check external service dependencies
  2. Memory Leaks
    • Monitor memory trends
    • Analyze heap dumps
    • Review application code
    • Adjust memory limits
  3. CPU Throttling
    • Increase CPU limits
    • Optimize application code
    • Review CPU-intensive operations
    • Consider vertical scaling

Best Practices

  1. Always test deployments in staging first
  2. Monitor key metrics during deployment
  3. Have a rollback plan ready
  4. Use gradual rollout for high-risk changes
  5. Maintain deployment documentation
  6. Schedule deployment windows during low-traffic periods
  7. Communicate deployments to stakeholders
  8. Perform post-deployment validation
  9. Document lessons learned
  10. Regularly review and update deployment procedures

Support

For deployment issues or questions:


Last updated: 2025-11-26