This document describes how to set up user-defined metrics for horizontal Pod autoscaling (HPA) in Google Distributed Cloud.
This page is for administrators, architects, and operators who optimize systems architecture and resources to ensure the lowest total cost of ownership for their company or business unit, and who plan capacity and infrastructure needs. To learn more about common roles and example tasks referenced in Google Cloud content, see Common GKE user roles and tasks.
Deploy Prometheus and the Metrics Adapter
In this section, you deploy Prometheus to scrape user-defined metrics, and prometheus-adapter to serve the Kubernetes Custom Metrics API with Prometheus as the backend.
Save the following deployment manifests to a file named custom-metrics-adapter.yaml.
Contents of the manifest file for Prometheus and the Metrics Adapter
# Copyright 2018 Google Inc
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ServiceAccount
metadata:
name: stackdriver-prometheus
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: stackdriver-prometheus
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- nodes
- services
- endpoints
- pods
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: stackdriver-prometheus
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: stackdriver-prometheus
subjects:
- kind: ServiceAccount
name: stackdriver-prometheus
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
name: stackdriver-prometheus-app
namespace: kube-system
labels:
app: stackdriver-prometheus-app
spec:
clusterIP: "None"
ports:
- name: http
port: 9090
protocol: TCP
targetPort: 9090
sessionAffinity: ClientIP
selector:
app: stackdriver-prometheus-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: stackdriver-prometheus-app
namespace: kube-system
labels:
app: stackdriver-prometheus-app
spec:
replicas: 1
selector:
matchLabels:
app: stackdriver-prometheus-app
template:
metadata:
labels:
app: stackdriver-prometheus-app
spec:
serviceAccount: stackdriver-prometheus
containers:
- name: prometheus-server
image: prom/prometheus:v2.45.0
args:
- "--config.file=/etc/prometheus/config/prometheus.yaml"
- "--storage.tsdb.path=/data"
- "--storage.tsdb.retention.time=2h"
ports:
- name: prometheus
containerPort: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
periodSeconds: 5
timeoutSeconds: 3
# Allow up to 10m on startup for data recovery
failureThreshold: 120
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 6
resources:
requests:
cpu: 250m
memory: 500Mi
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus/config
- name: stackdriver-prometheus-app-data
mountPath: /data
volumes:
- name: config-volume
configMap:
name: stackdriver-prometheus-app
- name: stackdriver-prometheus-app-data
emptyDir: {}
terminationGracePeriodSeconds: 300
nodeSelector:
kubernetes.io/os: linux
---
apiVersion: v1
data:
prometheus.yaml: |
global:
scrape_interval: 1m
rule_files:
- /etc/config/rules.yaml
- /etc/config/alerts.yaml
scrape_configs:
- job_name: prometheus-io-endpoints
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scrape
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: (https?)
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_scheme
target_label: __scheme__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_service_annotation_prometheus_io_port
target_label: __address__
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: pod
- action: keep
regex: (.+)
source_labels:
- __meta_kubernetes_endpoint_port_name
- job_name: prometheus-io-services
kubernetes_sd_configs:
- role: service
metrics_path: /probe
params:
module:
- http_2xx
relabel_configs:
- action: replace
source_labels:
- __address__
target_label: __param_target
- action: replace
replacement: blackbox
target_label: __address__
- action: keep
regex: true
source_labels:
- __meta_kubernetes_service_annotation_prometheus_io_probe
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: pod
- job_name: prometheus-io-pods
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: true
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_scrape
- action: replace
regex: (.+)
source_labels:
- __meta_kubernetes_pod_annotation_prometheus_io_path
target_label: __metrics_path__
- action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
source_labels:
- __address__
- __meta_kubernetes_pod_annotation_prometheus_io_port
target_label: __address__
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_name
target_label: pod
kind: ConfigMap
metadata:
name: stackdriver-prometheus-app
namespace: kube-system
---
# The main section of the custom metrics adapter.
kind: ServiceAccount
apiVersion: v1
metadata:
name: custom-metrics-apiserver
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: custom-metrics:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: custom-metrics-apiserver
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-metrics-server-resources
rules:
- apiGroups:
- custom.metrics.k8s.io
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-metrics-resource-reader
rules:
- apiGroups:
- ""
resources:
- nodes
- namespaces
- pods
- services
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: custom-metrics-resource-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
name: custom-metrics-apiserver
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: custom-metrics-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: custom-metrics-apiserver
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: adapter-config
namespace: kube-system
data:
config.yaml: |
rules:
default: false
    # filter all metrics
- seriesQuery: '{pod=~".+"}'
seriesFilters: []
resources:
# resource name is mapped as it is. ex. namespace -> namespace
template: <<.Resource>>
name:
matches: ^(.*)$
as: ""
# Aggregate metric on resource level
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: custom-metrics-apiserver
name: custom-metrics-apiserver
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: custom-metrics-apiserver
template:
metadata:
labels:
app: custom-metrics-apiserver
name: custom-metrics-apiserver
spec:
serviceAccountName: custom-metrics-apiserver
containers:
- name: custom-metrics-apiserver
resources:
requests:
cpu: 15m
memory: 20Mi
limits:
cpu: 100m
memory: 150Mi
image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.0
args:
- /adapter
- --cert-dir=/var/run/serving-cert
- --secure-port=6443
- --prometheus-url=http://stackdriver-prometheus-app.kube-system.svc:9090/
- --metrics-relist-interval=1m
- --config=/etc/adapter/config.yaml
ports:
- containerPort: 6443
volumeMounts:
- name: serving-cert
mountPath: /var/run/serving-cert
- mountPath: /etc/adapter/
name: config
readOnly: true
nodeSelector:
kubernetes.io/os: linux
volumes:
- name: serving-cert
emptyDir:
medium: Memory
- name: config
configMap:
name: adapter-config
---
apiVersion: v1
kind: Service
metadata:
name: custom-metrics-apiserver
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 6443
selector:
app: custom-metrics-apiserver
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1beta1.custom.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: kube-system
group: custom.metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1beta2.custom.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: kube-system
group: custom.metrics.k8s.io
version: v1beta2
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: hpa-controller-custom-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: custom-metrics-server-resources
subjects:
- kind: ServiceAccount
name: horizontal-pod-autoscaler
namespace: kube-system
Create the Deployment and the Service:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG apply -f custom-metrics-adapter.yaml
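Before moving on, you can optionally confirm that the adapter is up and serving the Custom Metrics API. The following commands are a minimal check, reusing the labels and the USER_CLUSTER_KUBECONFIG placeholder from the manifests above; an error or empty response usually means the adapter Pod isn't ready yet.
# Check that the Prometheus and adapter Pods are running.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get pods -n kube-system -l app=stackdriver-prometheus-app
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get pods -n kube-system -l app=custom-metrics-apiserver

# List the resources served by the Custom Metrics API registered above.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get --raw "/apis/custom.metrics.k8s.io/v1beta1"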
The next step is to annotate the user application for metrics collection.
Annotate a user application for metrics collection
To annotate a user application so that it is scraped and its logs are sent to Cloud Monitoring, add the corresponding annotations to the metadata of the Service, Pod, and Endpoints.
metadata:
  name: "example-monitoring"
  namespace: "default"
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: ""  # Overrides the metrics path (the default is "/metrics").
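If the application is already deployed, one way to add these annotations without editing its manifest is kubectl annotate. This is a sketch using a hypothetical Service named my-service; the annotation keys are the ones shown above.
# Enable scraping on a hypothetical, already-deployed Service.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG annotate service my-service prometheus.io/scrape="true"

# Only needed if the application serves metrics somewhere other than /metrics.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG annotate service my-service prometheus.io/path="/custom-metrics"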
Deploy a sample user application
In this section, you deploy a sample application with logs and Prometheus-compatible metrics.
Save the following Service and Deployment manifests to a file named my-app.yaml. Notice that the Service has the annotation prometheus.io/scrape: "true":
kind: Service
apiVersion: v1
metadata:
  name: "example-monitoring"
  namespace: "default"
  annotations:
    prometheus.io/scrape: "true"
spec:
  selector:
    app: "example-monitoring"
  ports:
  - name: http
    port: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "example-monitoring"
  namespace: "default"
  labels:
    app: "example-monitoring"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "example-monitoring"
  template:
    metadata:
      labels:
        app: "example-monitoring"
    spec:
      containers:
      - image: gcr.io/google-samples/prometheus-dummy-exporter:v0.2.0
        name: prometheus-example-exporter
        command:
        - ./prometheus-dummy-exporter
        args:
        - --metric-name=example_monitoring_up
        - --metric-value=1
        - --port=9090
        resources:
          requests:
            cpu: 100m
Create the Deployment and the Service:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG apply -f my-app.yaml
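To check that the sample application is exposing its metric, you can port-forward to the Deployment and inspect the /metrics endpoint directly. This is a quick sketch assuming the default namespace and port 9090 from the manifest above; the metric only appears in the Custom Metrics API after Prometheus has scraped it (the scrape_interval configured earlier is 1m).
# Forward a local port to the exporter and look for the metric.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG port-forward deployment/example-monitoring 9090:9090 &
sleep 2  # give the port-forward a moment to establish
curl -s http://localhost:9090/metrics | grep example_monitoring_up

# After about a minute, the metric should also be queryable through the adapter.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/example_monitoring_up"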
Use the custom metrics in the HPA
Deploy the HPA object to use the metric exposed in the previous step; save the manifest to a file and apply it, as shown after the example. To learn more about the different types of custom metrics, see Autoscaling on multiple metrics and custom metrics.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: example-monitoring-hpa
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: example-monitoring
minReplicas: 1
maxReplicas: 5
metrics:
- type: Pods
pods:
metric:
name: example_monitoring_up
target:
type: AverageValue
averageValue: 20
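Apply the manifest to the user cluster. The file name example-monitoring-hpa.yaml is an arbitrary choice for this sketch.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG apply -f example-monitoring-hpa.yaml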
A metric of type Pods has a default metric selector for the labels of the target Pods; this is how kube-controller-manager works. In this example, the example_monitoring_up metric is queried with a selector of
{matchLabels: {app: example-monitoring}}, because those labels are available on the target Pods. Any other selector you specify is added to the list. To avoid the default selector, remove all labels from the target Pod or use a metric of type Object.
Verify that the user-defined application metrics are used by the HPA
Verify that the user-defined application metrics are used by the HPA:
kubectl --kubeconfig=USER_CLUSTER_KUBECONFIG describe hpa example-monitoring-hpa
The output is similar to the following:
Name:              example-monitoring-hpa
Namespace:         default
Labels:            <none>
Annotations:       autoscaling.alpha.kubernetes.io/conditions:
                     [{"type":"AbleToScale","status":"True","lastTransitionTime":"2023-08-23T22:07:24Z","reason":"ReadyForNewScale","message":"recommended size...
                   autoscaling.alpha.kubernetes.io/current-metrics:
                     [{"type":"Pods","pods":{"metricName":"example_monitoring_up","currentAverageValue":"1"}}]
                   autoscaling.alpha.kubernetes.io/metrics:
                     [{"type":"Pods","pods":{"metricName":"example_monitoring_up","targetAverageValue":"20"}}]
CreationTimestamp: Wed, 23 Aug 2023 22:07:09 +0000
Reference:         Deployment/example-monitoring
Min replicas:      1
Max replicas:      5
Deployment pods:   1 current / 1 desired
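To observe the HPA reconciling over time, you can also watch its status. Because the sample exporter reports a constant value of 1 against a target average of 20, the replica count is expected to stay at the minimum.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get hpa example-monitoring-hpa --watch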
Costs
Using custom metrics for HPA doesn't incur additional Cloud Monitoring charges. The Pods that enable custom metrics consume additional CPU and memory based on the number of metrics collected.