We found 3 articles tagged with "prometheus"
Auto Scaling Based on ELB Monitoring Metrics
By default, Kubernetes scales a workload based on resource usage metrics such as CPU and memory. However, this mechanism cannot reflect real-time load during traffic bursts, because collected CPU and memory usage lags behind the actual load balancer traffic. For services that require fast auto scaling (such as flash sales and social media), scaling on resource usage rules may not be triggered in time and cannot meet actual service needs. In such cases, auto scaling based on ELB QPS data responds to service demand more promptly.
Auto Scaling Based on ELB Monitoring Metrics with KEDA
This article demonstrates how to implement auto scaling using KEDA (Kubernetes Event-driven Autoscaling) with ELB monitoring metrics.
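As a rough sketch of the approach, KEDA can query ELB QPS data that has been exposed to Prometheus and scale a workload when the per-replica rate crosses a threshold. The Deployment name (`my-workload`), Prometheus address, and metric name (`elb_request_count_total`) below are hypothetical placeholders, not values from the article:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: elb-qps-scaler
spec:
  scaleTargetRef:
    name: my-workload                  # hypothetical target Deployment
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        # Assumed in-cluster Prometheus endpoint
        serverAddress: http://prometheus.monitoring:9090
        # Hypothetical PromQL query for ELB requests per second
        query: sum(rate(elb_request_count_total[1m]))
        # Scale out when QPS per replica exceeds 100
        threshold: "100"
```

KEDA translates this trigger into an HPA under the hood, so no separate metrics adapter is required for the Prometheus source.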
Auto Scaling Based on ELB Monitoring Metrics with Prometheus Adapter
This article explains how to implement auto scaling with the Prometheus Adapter using ELB monitoring metrics, allowing the Horizontal Pod Autoscaler (HPA) to use custom metrics sourced from Prometheus.
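The idea can be sketched in two config fragments: a Prometheus Adapter rule that exposes an ELB metric through the Kubernetes metrics API, and an HPA that targets it. The metric name (`elb_request_count_total`), exposed name (`elb_qps`), and workload name are illustrative assumptions, not values from the article:

```yaml
# Prometheus Adapter rule (excerpt of the adapter's config):
# exposes rate of a hypothetical ELB request counter as "elb_qps"
rules:
  - seriesQuery: 'elb_request_count_total'
    resources:
      overrides:
        namespace: {resource: "namespace"}
    name:
      as: "elb_qps"
    metricsQuery: 'sum(rate(<<.Series>>[1m])) by (<<.GroupBy>>)'
---
# HPA consuming the adapter-provided metric
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: elb-qps-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-workload        # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: elb_qps
        target:
          type: AverageValue
          averageValue: "100"   # target average QPS per replica
```

Unlike the KEDA approach, this route requires deploying and configuring the Prometheus Adapter so the HPA controller can read the custom metric through the standard metrics APIs.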