# Deployment Guide

This guide provides comprehensive instructions for deploying the Budget Management API to production. The application supports three deployment strategies:
| Strategy | Downtime | Resource Usage | Rollback Speed | Risk Level | Use Case |
|---|---|---|---|---|---|
| Rolling | Minimal | Low | Medium | Medium | Regular updates |
| Blue-Green | Zero | High (2x) | Instant | Low | Critical releases |
| Canary | Zero | Medium | Fast | Very Low | Testing in production |
## Blue-Green Deployment

Blue-Green deployment maintains two identical production environments: Blue (the currently live version) and Green (the new release candidate). Traffic is switched from Blue to Green only after validation passes.
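The cutover works by repointing the Service's label selector. A minimal sketch of such a Service (names, labels, and the port are inferred from the commands in this guide, not copied from the actual manifests):

```yaml
# Hypothetical Service routing traffic to whichever color is live.
# Switching traffic = patching spec.selector.version (see below).
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: production
spec:
  selector:
    app: backend
    version: blue   # patch to "green" to cut traffic over
  ports:
    - port: 3000
      targetPort: 3000
```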
The Blue-Green manifests:

```bash
# Backend Blue Deployment
kubernetes/backend-deployment-blue.yaml

# Backend Green Deployment
kubernetes/backend-deployment-green.yaml

# Frontend Blue Deployment
kubernetes/frontend-deployment-blue.yaml

# Frontend Green Deployment
kubernetes/frontend-deployment-green.yaml

# Services for Blue-Green
kubernetes/backend-service-blue-green.yaml
kubernetes/frontend-service-blue-green.yaml
```
### Deployment Process

```bash
# If blue is active, deploy to green
kubectl apply -f kubernetes/backend-deployment-green.yaml
kubectl apply -f kubernetes/frontend-deployment-green.yaml

kubectl rollout status deployment/backend-deployment-green -n production
kubectl rollout status deployment/frontend-deployment-green -n production
```
```bash
# Test the green environment using service-green endpoints
curl http://backend-service-green:3000/health
```
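If you want to gate the cutover on health rather than a single manual check, a simple polling loop works. A sketch, assuming the green Service is reachable from where this runs (e.g. inside the cluster):

```bash
# Poll the green backend until it reports healthy, or give up after 30 tries.
for i in $(seq 1 30); do
  if curl -fsS http://backend-service-green:3000/health > /dev/null; then
    echo "green is healthy"
    break
  fi
  echo "waiting for green ($i/30)..."
  sleep 5
done
```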
```bash
# Update service selectors to point to green
kubectl patch service backend-service -n production -p '{"spec":{"selector":{"version":"green"}}}'
kubectl patch service frontend-service -n production -p '{"spec":{"selector":{"version":"green"}}}'

# Monitor logs and metrics
kubectl logs -f deployment/backend-deployment-green -n production

# After successful validation, scale down blue
kubectl scale deployment/backend-deployment-blue --replicas=0 -n production
kubectl scale deployment/frontend-deployment-blue --replicas=0 -n production
```
### Rollback

Roll back instantly by switching the service selectors back to blue:

```bash
kubectl patch service backend-service -n production -p '{"spec":{"selector":{"version":"blue"}}}'
kubectl patch service frontend-service -n production -p '{"spec":{"selector":{"version":"blue"}}}'
```
## Canary Deployment

Canary deployment gradually shifts traffic from the stable version to the canary version. Both versions' pods sit behind the same Service, so the traffic split is controlled by replica count: 9 stable replicas and 1 canary replica yield roughly a 90/10 split.
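A minimal sketch of the mechanism (the `track` label and other names are illustrative assumptions, not taken from the manifests): the Service selects only on `app`, so it load-balances across stable and canary pods in proportion to replica count.

```yaml
# Hypothetical: stable and canary pods both carry app: backend,
# so the Service spreads traffic across them by replica count.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: production
spec:
  selector:
    app: backend        # matches stable AND canary pods
  ports:
    - port: 3000
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment-canary
  namespace: production
spec:
  replicas: 1           # 1 of 10 total pods, roughly 10% of traffic
  selector:
    matchLabels:
      app: backend
      track: canary     # "track" is an illustrative label convention
  template:
    metadata:
      labels:
        app: backend
        track: canary
    spec:
      containers:
        - name: backend
          image: your-registry/tictactoe-backend:new-version
```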
The Canary manifests:

```bash
# Backend Stable Deployment (9 replicas = 90% traffic)
kubernetes/backend-deployment-canary-stable.yaml

# Backend Canary Deployment (1 replica = 10% traffic)
kubernetes/backend-deployment-canary.yaml

# Frontend Stable Deployment
kubernetes/frontend-deployment-canary-stable.yaml

# Frontend Canary Deployment
kubernetes/frontend-deployment-canary.yaml

# Services for Canary
kubernetes/backend-service-canary.yaml
kubernetes/frontend-service-canary.yaml
```
### Deployment Process

```bash
# Deploy the canary at 10% traffic
kubectl apply -f kubernetes/backend-deployment-canary.yaml
kubectl apply -f kubernetes/frontend-deployment-canary.yaml

kubectl scale deployment/backend-deployment-canary --replicas=1 -n production
kubectl scale deployment/frontend-deployment-canary --replicas=1 -n production
kubectl scale deployment/backend-deployment-stable --replicas=9 -n production
kubectl scale deployment/frontend-deployment-stable --replicas=9 -n production
```
```bash
# Monitor canary metrics for 5-10 minutes
kubectl logs -f deployment/backend-deployment-canary -n production

# Check Prometheus metrics:
# error rate, latency, success rate
```
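If Prometheus is scraping the pods (see the ServiceMonitor section below), you can compare canary error rates ad hoc. A sketch, assuming an `http_requests_total` counter with `status` and `track` labels; both the metric name and labels are assumptions about the app's instrumentation:

```bash
# Hypothetical ad-hoc check against the Prometheus HTTP API.
# Adjust the Prometheus address and metric/label names to match reality.
PROM=http://prometheus.monitoring:9090

# 5-minute error ratio for canary pods
curl -sG "$PROM/api/v1/query" --data-urlencode \
  'query=sum(rate(http_requests_total{track="canary",status=~"5.."}[5m]))
       / sum(rate(http_requests_total{track="canary"}[5m]))'
```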
```bash
# Increase to 50% traffic
kubectl scale deployment/backend-deployment-canary --replicas=5 -n production
kubectl scale deployment/backend-deployment-stable --replicas=5 -n production
```
```bash
# Promote: update the stable deployment with the canary image
kubectl set image deployment/backend-deployment-stable \
  backend=your-registry/tictactoe-backend:new-version -n production

kubectl scale deployment/backend-deployment-stable --replicas=10 -n production
kubectl scale deployment/backend-deployment-canary --replicas=0 -n production
```
Monitor these key metrics throughout the canary phase: error rate, latency, and success rate, comparing canary pods against stable. If the canary degrades, roll it back immediately:
```bash
# Scale down canary immediately
kubectl scale deployment/backend-deployment-canary --replicas=0 -n production
kubectl scale deployment/frontend-deployment-canary --replicas=0 -n production

# Scale up stable to full capacity
kubectl scale deployment/backend-deployment-stable --replicas=10 -n production
kubectl scale deployment/frontend-deployment-stable --replicas=10 -n production
```
## Rolling Deployment

Rolling deployment gradually replaces old pods with new ones. The pace of the rollout is controlled by `maxSurge` and `maxUnavailable` in the Deployment's update strategy.

```bash
# Update the image to trigger a rolling update
kubectl set image deployment/backend-deployment \
  backend=your-registry/tictactoe-backend:new-version -n production

# Watch rollout progress
kubectl rollout status deployment/backend-deployment -n production
kubectl get pods -n production -w
```
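The strategy block lives in the Deployment spec. A minimal sketch of its shape (the values are illustrative, not taken from the manifests):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 extra pod above the desired count
      maxUnavailable: 0   # never drop below the desired count
```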
If the new version misbehaves, undo the rollout:

```bash
kubectl rollout undo deployment/backend-deployment -n production
kubectl rollout status deployment/backend-deployment -n production
```
## Jenkins Pipeline

The Jenkins pipeline supports all three deployment strategies via build parameters: `DEPLOYMENT_STRATEGY` (`rolling`, `blue-green`, or `canary`) and `ENVIRONMENT` (`staging` or `production`).

```bash
# Blue-Green release
DEPLOYMENT_STRATEGY=blue-green
ENVIRONMENT=production
RUN_SMOKE_TESTS=true
AUTO_ROLLBACK=true

# Canary release
DEPLOYMENT_STRATEGY=canary
ENVIRONMENT=production
CANARY_PERCENTAGE=10
RUN_SMOKE_TESTS=true
AUTO_ROLLBACK=true

# Rolling release
DEPLOYMENT_STRATEGY=rolling
ENVIRONMENT=production
RUN_SMOKE_TESTS=true
AUTO_ROLLBACK=true
```
### Slack Notifications

The pipeline sends Slack notifications on deployment events (e.g. success, failure, and rollback). Configure the channel in the Jenkinsfile:
```groovy
environment {
    SLACK_CHANNEL = '#deployments'
    SLACK_CREDENTIALS_ID = 'slack-webhook'
}
```
## Health Checks and Resource Limits

All deployments include liveness and readiness probes plus resource requests and limits:
```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 5
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```
## Horizontal Pod Autoscaler

The HPA automatically scales replicas based on CPU and memory utilization:

```bash
# Apply HPA
kubectl apply -f kubernetes/hpa.yaml

# View HPA status
kubectl get hpa -n production
```
The full configuration lives in `kubernetes/hpa.yaml`.
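A minimal sketch of an HPA of this shape (replica bounds and utilization targets are illustrative assumptions, not values from the manifest):

```yaml
# Illustrative only; actual values live in kubernetes/hpa.yaml.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```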
## Pod Disruption Budget

The PDB ensures minimum availability during disruptions:

```bash
kubectl apply -f kubernetes/pdb.yaml
```

It keeps at least 2 pods available during voluntary disruptions such as node drains and cluster upgrades.
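A minimal sketch of such a PDB (the `minAvailable: 2` figure comes from the guarantee above; the selector label is an assumption):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: backend-pdb
  namespace: production
spec:
  minAvailable: 2     # matches the availability guarantee above
  selector:
    matchLabels:
      app: backend    # assumed pod label
```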
## Network Policy

Network policies restrict pod-to-pod traffic for security:

```bash
kubectl apply -f kubernetes/network-policy.yaml
```
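As an illustration of the kind of rule such a policy typically contains (labels and the port are assumptions based on this guide, not the actual manifest), the sketch below admits ingress to the backend only from frontend pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend          # assumed backend pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # assumed frontend pod label
      ports:
        - protocol: TCP
          port: 3000
```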
## Pod Security

All pods run with security best practices applied through their security context; see the deployment manifests for the exact settings.
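Typical hardening settings of this kind (illustrative, not copied from the manifests):

```yaml
securityContext:
  runAsNonRoot: true             # refuse to start as root
  runAsUser: 1000
  readOnlyRootFilesystem: true   # container filesystem is immutable
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]                # drop all Linux capabilities
```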
## Monitoring and Observability

A ServiceMonitor is configured for Prometheus metrics scraping:

```bash
kubectl apply -f kubernetes/servicemonitor.yaml
```

The metrics endpoint is `/metrics`, exposed on ports 3000 and 3001.
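A minimal sketch of a ServiceMonitor of this shape (the Service label and port name are assumptions; the path comes from the endpoint above):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backend-servicemonitor
  namespace: production
spec:
  selector:
    matchLabels:
      app: backend    # assumed Service label
  endpoints:
    - port: http      # must match the Service's named port
      path: /metrics
      interval: 30s
```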
Create dashboards for the key signals tracked during deployments, e.g. error rate, latency, success rate, and pod resource usage.
Centralized logging is handled by an ELK stack. Configure alerts for the same signals, e.g. elevated error rate, high latency, and crash-looping pods.
## Rollback Procedures

### Automatic Rollback

Automatic rollback is enabled via the Jenkins parameter `AUTO_ROLLBACK=true` and triggers on post-deployment failures such as failed smoke tests or failing health checks.
### Manual Rollback

Blue-Green: switch the service selectors back to the previous version:

```bash
kubectl patch service backend-service -n production -p '{"spec":{"selector":{"version":"blue"}}}'
kubectl patch service frontend-service -n production -p '{"spec":{"selector":{"version":"blue"}}}'
```
Canary: remove canary traffic and restore stable capacity:

```bash
kubectl scale deployment/backend-deployment-canary --replicas=0 -n production
kubectl scale deployment/backend-deployment-stable --replicas=10 -n production
```
Rolling: undo the rollout:

```bash
# Rollback to previous revision
kubectl rollout undo deployment/backend-deployment -n production

# Rollback to a specific revision
kubectl rollout undo deployment/backend-deployment --to-revision=3 -n production
```
After a rollback, verify the service is healthy and investigate the root cause before redeploying.
## Troubleshooting

### Image Pull Failures

Symptoms: pods stuck in `ImagePullBackOff`.

Solution:

```bash
# Check image exists in registry
docker pull your-registry/tictactoe-backend:tag

# Verify registry credentials
kubectl get secret docker-registry-credentials -n production

# Check pod events
kubectl describe pod <pod-name> -n production
```
### Crash Loops

Symptoms: pods constantly restarting (`CrashLoopBackOff`).

Solution:

```bash
# Check logs from the previous (crashed) container
kubectl logs <pod-name> -n production --previous

# Check resource limits and events
kubectl describe pod <pod-name> -n production

# Verify environment variables
kubectl get configmap tictactoe-config -n production -o yaml
```
### Service Errors

Symptoms: service endpoints returning errors.

Solution:

```bash
# Check service endpoints
kubectl get endpoints backend-service -n production

# Verify pod readiness
kubectl get pods -n production -l app=backend

# Check pod logs
kubectl logs -l app=backend -n production --tail=100
```
### Stuck Rollouts

Symptoms: deployment not progressing.

Solution:

```bash
# Check rollout status
kubectl rollout status deployment/backend-deployment -n production

# View rollout history
kubectl rollout history deployment/backend-deployment -n production

# Check recent events
kubectl get events -n production --sort-by='.lastTimestamp'
```
### Useful Debugging Commands

```bash
# Get detailed pod information
kubectl describe pod <pod-name> -n production

# Execute a command in a pod
kubectl exec -it <pod-name> -n production -- /bin/sh

# View pod logs (live)
kubectl logs -f <pod-name> -n production

# View logs from all pods in a deployment
kubectl logs -f deployment/backend-deployment -n production

# Check resource usage
kubectl top pods -n production
kubectl top nodes

# View HPA metrics
kubectl get hpa -n production -w

# Start a throwaway pod for network debugging
kubectl run -it --rm debug --image=busybox --restart=Never -- sh
```
For deployment issues or questions, work through the troubleshooting steps above or contact the team that owns this service.
Last updated: 2025-11-26