We found 5 articles tagged with "cloudeye"
Auto Scaling Based on ELB Monitoring Metrics
By default, Kubernetes scales a workload based on resource usage metrics such as CPU and memory. However, this mechanism cannot reflect real-time load during traffic bursts, because CPU and memory usage lag behind the traffic the load balancer is actually receiving. For services that require fast auto scaling (such as flash sales and social media), scaling on these metrics may not happen quickly enough to meet actual demand. In such cases, auto scaling based on ELB QPS (queries per second) data responds to service load more promptly.
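The core of QPS-based scaling is the same formula the Horizontal Pod Autoscaler applies to any metric. A minimal sketch (the function name and the example target of 100 QPS per pod are illustrative assumptions, not values from the articles below):

```python
import math

def desired_replicas(current_replicas: int,
                     avg_qps_per_pod: float,
                     target_qps_per_pod: float) -> int:
    # HPA scaling formula:
    # desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
    return math.ceil(current_replicas * avg_qps_per_pod / target_qps_per_pod)

# 3 replicas each seeing 400 QPS against a 100 QPS-per-pod target
# -> scale out to 12 replicas.
print(desired_replicas(3, 400, 100))
```

Because QPS reacts to a traffic burst immediately, this formula scales out as soon as the burst arrives, rather than waiting for CPU or memory usage to climb.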
Auto Scaling Based on ELB Monitoring Metrics with KEDA
This article demonstrates how to implement auto scaling using KEDA (Kubernetes Event-driven Autoscaling) with ELB monitoring metrics.
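To give a sense of what the KEDA approach looks like, here is a minimal ScaledObject sketch. It assumes the ELB QPS metric has already been exported to a Prometheus server; the workload name, metric name, label, and threshold are all placeholders, not the article's actual configuration:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: elb-qps-scaler
spec:
  scaleTargetRef:
    name: my-workload            # Deployment to scale (placeholder name)
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(elb_qps{listener="my-listener"})   # placeholder query
        threshold: "100"         # target QPS per replica
```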
Auto Scaling Based on ELB Monitoring Metrics with Prometheus Adapter
This article explains how to implement auto scaling with the Prometheus Adapter using ELB monitoring metrics, allowing the Horizontal Pod Autoscaler (HPA) to use custom metrics sourced from Prometheus.
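With the Prometheus Adapter in place, the HPA itself consumes the ELB metric like any other external metric. A hedged sketch of such an HPA, where the metric name `elb_qps`, the workload name, and the target value are illustrative assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: elb-qps-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-workload            # placeholder
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: elb_qps          # name exposed by the Prometheus Adapter (placeholder)
        target:
          type: AverageValue
          averageValue: "100"    # target QPS per pod
```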
Configuring Message Accumulation Monitoring
Unprocessed messages accumulate when clients consume messages more slowly than the server sends them. To catch this early, you can configure alarm rules so that you are notified when the number of accumulated messages in a consumer group exceeds a threshold.
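The quantity such an alarm rule monitors is the per-partition gap between the latest offset on the server and the offset the consumer group has committed. A small sketch (function names and the threshold are illustrative, not the monitoring service's API):

```python
def accumulated_messages(log_end_offsets: dict, committed_offsets: dict) -> int:
    """Total backlog for a consumer group: for each partition,
    latest server offset minus the group's committed offset."""
    return sum(
        max(log_end_offsets[p] - committed_offsets.get(p, 0), 0)
        for p in log_end_offsets
    )

def should_alarm(backlog: int, threshold: int) -> bool:
    # Mirrors the alarm rule: notify when the backlog exceeds the threshold.
    return backlog > threshold

backlog = accumulated_messages({0: 150, 1: 200}, {0: 100, 1: 180})
print(backlog, should_alarm(backlog, 50))   # 70 True
```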
Resource Group Monitoring
Cloud Eye provides resource group and alarm functions. How can you effectively group and monitor resources, and receive alarm notifications for the resources in each group?