Making Services More Resilient with Istio | Weekend Giveaway
Posted by 分布式实验室
$ sudo yum install -y git
$ git clone https://github.com/mgxian/istio-lab
Cloning into 'istio-lab'...
remote: Enumerating objects: 252, done.
remote: Counting objects: 100% (252/252), done.
remote: Compressing objects: 100% (177/177), done.
remote: Total 779 (delta 157), reused 166 (delta 74), pack-reused 527
Receiving objects: 100% (779/779), 283.37 KiB | 243.00 KiB/s, done.
Resolving deltas: 100% (451/451), done.
$ cd istio-lab
$ kubectl label namespace default istio-injection=enabled
namespace/default labeled
$ kubectl apply -f service/go/service-go.yaml
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
service-go-v1-7cc5c6f574-lrp2h 2/2 Running 0 76s
service-go-v2-7656dcc478-svn5c 2/2 Running 0 76s
Round robin (ROUND_ROBIN): requests are forwarded to healthy backend instances in turn. This is the default algorithm.
Least connections (LEAST_CONN): requests are forwarded to the healthy backend instance with the fewest active requests. The active request count here is maintained by Istio itself: it is the number of requests Istio has sent to the instance that are still awaiting a response. Because the instance may also be serving other clients that do not pass through Istio, this count is not necessarily the instance's true number of active requests.
Random (RANDOM): requests are forwarded to a randomly chosen healthy backend instance.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
trafficPolicy:
  loadBalancer:
    simple: LEAST_CONN
  portLevelSettings:
  - port:
      number: 80
    loadBalancer:
      simple: RANDOM
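The second snippet above is a trafficPolicy fragment: it sets LEAST_CONN as the service-wide policy and overrides it with RANDOM for port 80 only. To sanity-check how a policy actually spreads traffic, a minimal sketch (assuming the round-robin DestinationRule has been applied and the dns-test pod from kubernetes/dns-test.yaml is running) is to fire a few requests and count which version answers:

$ for i in $(seq 1 10); do kubectl exec dns-test -c dns-test -- curl -s http://service-go/env; echo; done | sort | uniq -c

With ROUND_ROBIN the two versions should each serve roughly half of the calls.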
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-lb-test
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
$ kubectl apply -f kubernetes/dns-test.yaml
$ kubectl apply -f istio/route/virtual-service-go.yaml
$ kubectl exec dns-test -c dns-test -- curl -s -H "X-lb-test: 1" http://service-go/env
{"message":"go v1"}
$ kubectl exec dns-test -c dns-test -- curl -s -H "X-lb-test: 1" http://service-go/env
{"message":"go v2"}
$ kubectl apply -f istio/resilience/destination-rule-go-lb-hash.yaml
$ kubectl exec dns-test -c dns-test -- curl -s -H "X-lb-test: 1" http://service-go/env
{"message":"go v2"}
$ kubectl exec dns-test -c dns-test -- curl -s -H "X-lb-test: 2" http://service-go/env
{"message":"go v2"}
$ kubectl exec dns-test -c dns-test -- curl -s -H "X-lb-test: 3" http://service-go/env
{"message":"go v1"}
$ kubectl delete -f kubernetes/dns-test.yaml
$ kubectl delete -f istio/route/virtual-service-go.yaml
$ kubectl delete -f istio/resilience/destination-rule-go-lb-hash.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 10
        connectTimeout: 30ms
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
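This caps Envoy at 10 TCP connections to service-go, each with a 30 ms connect timeout. One way to observe the limit in action (a sketch that reuses the Envoy admin endpoint queried later in this article, and assumes a pod named fortio with an istio-proxy sidecar) is to check the upstream connection counters:

$ kubectl exec fortio -c istio-proxy -- curl -s localhost:15000/stats | grep service-go | grep upstream_cx

A growing upstream_cx_overflow value indicates connections rejected because maxConnections was exceeded.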
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 10
        http1MaxPendingRequests: 5
        maxRequestsPerConnection: 2
        maxRetries: 3
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
$ kubectl apply -f kubernetes/fortio.yaml
$ kubectl apply -f istio/route/virtual-service-go.yaml
$ kubectl apply -f istio/resilience/destination-rule-go-pool-http.yaml
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://service-go/env
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
date: Wed, 16 Jan 2019 10:12:35 GMT
content-length: 19
x-envoy-upstream-service-time: 4
server: envoy
{"message":"go v2"}
# 10 concurrent connections
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 10 -qps 0 -n 100 -loglevel Error http://service-go/env
09:40:38 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 100 calls: http://service-go/env
Aggregated Function Time : count 100 avg 0.01652562 +/- 0.013 min 0.002576677 max 0.064653438 sum 1.65256199
# target 50% 0.0119375
# target 75% 0.018
# target 90% 0.035
# target 99% 0.06
# target 99.9% 0.0641881
Sockets used: 15 (for perfect keepalive, would be 10)
Code 200 : 95 (95.0 %)
Code 503 : 5 (5.0 %)
All done 100 calls (plus 0 warmup) 16.526 ms avg, 563.4 qps
# 20 concurrent connections
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 20 -qps 0 -n 200 -loglevel Error http://service-go/env
09:41:32 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 200 calls: http://service-go/env
Aggregated Function Time : count 200 avg 0.023987068 +/- 0.01622 min 0.001995258 max 0.067905383 sum 4.79741353
# target 50% 0.0194286
# target 75% 0.0357692
# target 90% 0.05
# target 99% 0.0626351
# target 99.9% 0.0673784
Sockets used: 43 (for perfect keepalive, would be 20)
Code 200 : 177 (88.5 %)
Code 503 : 23 (11.5 %)
All done 200 calls (plus 0 warmup) 23.987 ms avg, 711.9 qps
# 30 concurrent connections
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 30 -qps 0 -n 300 -loglevel Error http://service-go/env
09:42:05 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 300 calls: http://service-go/env
Aggregated Function Time : count 300 avg 0.034233818 +/- 0.02268 min 0.002354402 max 0.114700368 sum 10.2701455
# target 50% 0.0285417
# target 75% 0.0446667
# target 90% 0.0686957
# target 99% 0.1
# target 99.9% 0.11323
Sockets used: 137 (for perfect keepalive, would be 30)
Code 200 : 192 (64.0 %)
Code 503 : 108 (36.0 %)
All done 300 calls (plus 0 warmup) 34.234 ms avg, 702.1 qps
$ kubectl delete -f kubernetes/fortio.yaml
$ kubectl delete -f istio/route/virtual-service-go.yaml
$ kubectl delete -f istio/resilience/destination-rule-go-pool-http.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 3
      interval: 10s
      baseEjectionTime: 30s
      maxEjectionPercent: 10
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
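To confirm that outlier detection is actually ejecting unhealthy instances, a quick sketch (again querying the Envoy admin port of a sidecar, here assumed to be the fortio pod's istio-proxy container):

$ kubectl exec fortio -c istio-proxy -- curl -s localhost:15000/stats | grep service-go | grep outlier_detection

A non-zero ejections_active counter means a backend is currently removed from the load-balancing pool.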
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 10
      http:
        http2MaxRequests: 10
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutiveErrors: 3
      interval: 3s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
$ kubectl apply -f kubernetes/fortio.yaml
$ kubectl apply -f istio/route/virtual-service-go.yaml
$ kubectl apply -f istio/resilience/destination-rule-go-cb.yaml
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://service-go/env
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
date: Wed, 16 Jan 2019 10:22:35 GMT
content-length: 19
x-envoy-upstream-service-time: 3
server: envoy
{"message":"go v2"}
# 20 concurrent connections
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 20 -qps 0 -n 200 -loglevel Error http://service-go/env
10:25:21 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 200 calls: http://service-go/env
Aggregated Function Time : count 200 avg 0.023687933 +/- 0.01781 min 0.002302379 max 0.082312522 sum 4.73758658
# target 50% 0.0175385
# target 75% 0.029375
# target 90% 0.0533333
# target 99% 0.0766667
# target 99.9% 0.08185
Sockets used: 22 (for perfect keepalive, would be 20)
Code 200 : 198 (99.0 %)
Code 503 : 2 (1.0 %)
All done 200 calls (plus 0 warmup) 23.688 ms avg, 631.3 qps
# 30 concurrent connections
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 30 -qps 0 -n 300 -loglevel Error http://service-go/env
10:25:49 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 300 calls: http://service-go/env
Aggregated Function Time : count 300 avg 0.055940327 +/- 0.04215 min 0.001836339 max 0.207798702 sum 16.782098
# target 50% 0.0394737
# target 75% 0.0776471
# target 90% 0.123333
# target 99% 0.18
# target 99.9% 0.205459
Sockets used: 94 (for perfect keepalive, would be 30)
Code 200 : 236 (78.7 %)
Code 503 : 64 (21.3 %)
All done 300 calls (plus 0 warmup) 55.940 ms avg, 486.3 qps
# 40 concurrent connections
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 40 -qps 0 -n 400 -loglevel Error http://service-go/env
10:26:17 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 400 calls: http://service-go/env
Aggregated Function Time : count 400 avg 0.034048003 +/- 0.02541 min 0.001808212 max 0.144268023 sum 13.6192011
# target 50% 0.028587
# target 75% 0.0415789
# target 90% 0.0588889
# target 99% 0.132
# target 99.9% 0.143414
Sockets used: 203 (for perfect keepalive, would be 40)
Code 200 : 225 (56.2 %)
Code 503 : 175 (43.8 %)
All done 400 calls (plus 0 warmup) 34.048 ms avg, 951.0 qps
# Check istio-proxy stats
$ kubectl exec fortio -c istio-proxy -- curl -s localhost:15000/stats | grep service-go | grep pending
cluster.outbound|80|v1|service-go.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|80|v1|service-go.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|80|v1|service-go.default.svc.cluster.local.upstream_rq_pending_overflow: 0
cluster.outbound|80|v1|service-go.default.svc.cluster.local.upstream_rq_pending_total: 0
cluster.outbound|80|v2|service-go.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|80|v2|service-go.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|80|v2|service-go.default.svc.cluster.local.upstream_rq_pending_overflow: 0
cluster.outbound|80|v2|service-go.default.svc.cluster.local.upstream_rq_pending_total: 0
cluster.outbound|80||service-go.default.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|80||service-go.default.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|80||service-go.default.svc.cluster.local.upstream_rq_pending_overflow: 551
cluster.outbound|80||service-go.default.svc.cluster.local.upstream_rq_pending_total: 1282
$ kubectl delete -f kubernetes/fortio.yaml
$ kubectl delete -f istio/route/virtual-service-go.yaml
$ kubectl delete -f istio/resilience/destination-rule-go-cb.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-node
spec:
  hosts:
  - service-node
  http:
  - route:
    - destination:
        host: service-node
    timeout: 500ms
$ kubectl apply -f service/node/service-node.yaml
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
service-go-v1-7cc5c6f574-lrp2h 2/2 Running 0 4m
service-go-v2-7656dcc478-svn5c 2/2 Running 0 4m
service-node-v1-d44b9bf7b-ppn26 2/2 Running 0 24s
service-node-v2-86545d9796-rgmb7 2/2 Running 0 24s
$ kubectl apply -f kubernetes/fortio.yaml
$ kubectl apply -f istio/resilience/virtual-service-node-timeout.yaml
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://service-node/env
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
content-length: 77
date: Wed, 16 Jan 2019 10:33:57 GMT
x-envoy-upstream-service-time: 18
server: envoy
{"message":"node v1","upstream":[{"message":"go v1","response_time":"0.01"}]}
# 10 concurrent connections
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 10 -qps 0 -n 100 -loglevel Error http://service-node/env
11:08:24 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 100 calls: http://service-node/env
Aggregated Function Time : count 100 avg 0.19270902 +/- 0.1403 min 0.009657651 max 0.506141264 sum 19.2709017
# target 50% 0.173333
# target 75% 0.3
# target 90% 0.421429
# target 99% 0.505118
# target 99.9% 0.506039
Sockets used: 15 (for perfect keepalive, would be 10)
Code 200 : 94 (94.0 %)
Code 504 : 6 (6.0 %)
All done 100 calls (plus 0 warmup) 192.709 ms avg, 45.4 qps
# 20 concurrent connections
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 20 -qps 0 -n 200 -loglevel Error http://service-node/env
11:08:47 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 200 calls: http://service-node/env
Aggregated Function Time : count 200 avg 0.44961158 +/- 0.122 min 0.006904922 max 0.524347684 sum 89.9223153
# target 50% 0.50864
# target 75% 0.516494
# target 90% 0.521206
# target 99% 0.524034
# target 99.9% 0.524316
Sockets used: 163 (for perfect keepalive, would be 20)
Code 200 : 46 (23.0 %)
Code 504 : 154 (77.0 %)
All done 200 calls (plus 0 warmup) 449.612 ms avg, 39.2 qps
$ kubectl delete -f kubernetes/fortio.yaml
$ kubectl delete -f service/node/service-node.yaml
$ kubectl delete -f istio/resilience/virtual-service-node-timeout.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-node
spec:
  hosts:
  - service-node
  http:
  - route:
    - destination:
        host: service-node
    retries:
      attempts: 3
      perTryTimeout: 2s
$ kubectl apply -f kubernetes/fortio.yaml
$ kubectl apply -f kubernetes/httpbin.yaml
$ kubectl get pod -l app=httpbin
NAME READY STATUS RESTARTS AGE
httpbin-b67975b8f-vmbtv 2/2 Running 0 49s
$ kubectl apply -f istio/route/virtual-service-httpbin.yaml
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://httpbin:8000/status/200
HTTP/1.1 200 OK
server: envoy
date: Wed, 16 Jan 2019 14:03:00 GMT
content-type: text/html; charset=utf-8
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 0
x-envoy-upstream-service-time: 33
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 10 -qps 0 -n 100 -loglevel Error http://httpbin:8000/status/200%2C200%2C200%2C200%2C500
14:18:37 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 100 calls: http://httpbin:8000/status/200%2C200%2C200%2C200%2C500
Aggregated Function Time : count 100 avg 0.24802899 +/- 0.06426 min 0.016759858 max 0.390472066 sum 24.8028985
# target 50% 0.252941
# target 75% 0.289706
# target 90% 0.326667
# target 99% 0.376981
# target 99.9% 0.389123
Sockets used: 30 (for perfect keepalive, would be 10)
Code 200 : 78 (78.0 %)
Code 500 : 22 (22.0 %)
All done 100 calls (plus 0 warmup) 248.029 ms avg, 38.5 qps
$ kubectl apply -f istio/resilience/virtual-service-httpbin-retry.yaml
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://httpbin:8000/status/200
HTTP/1.1 200 OK
server: envoy
date: Wed, 16 Jan 2019 14:19:14 GMT
content-type: text/html; charset=utf-8
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 0
x-envoy-upstream-service-time: 5
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 10 -qps 0 -n 100 -loglevel Error http://httpbin:8000/status/200%2C200%2C200%2C200%2C500
14:19:32 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 100 calls: http://httpbin:8000/status/200%2C200%2C200%2C200%2C500
Aggregated Function Time : count 100 avg 0.23708609 +/- 0.1323 min 0.017537636 max 0.793965189 sum 23.7086086
# target 50% 0.226471
# target 75% 0.275
# target 90% 0.383333
# target 99% 0.7
# target 99.9% 0.784569
Sockets used: 13 (for perfect keepalive, would be 10)
Code 200 : 97 (97.0 %)
Code 500 : 3 (3.0 %)
All done 100 calls (plus 0 warmup) 237.086 ms avg, 35.5 qps
$ kubectl delete -f kubernetes/fortio.yaml
$ kubectl delete -f kubernetes/httpbin.yaml
$ kubectl delete -f istio/resilience/virtual-service-httpbin-retry.yaml
QuotaSpec defines the quota instance name and how much quota each request consumes.
QuotaSpecBinding binds a QuotaSpec to one or more services; rate limiting only takes effect for services that are bound.
The quota instance defines how Mixer measures and distinguishes requests for rate limiting, i.e. the dimensions along which request data is collected.
The memquota/redisquota adapter holds the memquota/redisquota configuration and, based on the dimensions defined by the quota instance, defines one or more quota limits.
The rule defines when the quota instance should be dispatched to the memquota/redisquota adapter for processing.
apiVersion: "config.istio.io/v1alpha2"
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service.name | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
---
apiVersion: "config.istio.io/v1alpha2"
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 500
    validDuration: 1s
    overrides:
    - dimensions:
        destination: service-go
      maxAmount: 50
      validDuration: 1s
    - dimensions:
        destination: service-node
        source: "10.28.11.20"
      maxAmount: 50
      validDuration: 1s
    - dimensions:
        destination: service-node
      maxAmount: 20
      validDuration: 1s
    - dimensions:
        destination: service-python
      maxAmount: 2
      validDuration: 5s
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: service-go
    namespace: default
  - name: service-node
    namespace: default
  - name: service-python
    namespace: default
source takes the value of the request's x-forwarded-for header; if the header is missing, source falls back to "unknown".
destination takes the value of the app label on the target service; if that label is missing, it falls back to the target's service name (destination.service.name), and if that is also unavailable, destination falls back to "unknown".
destinationVersion takes the value of the version label on the target service; if the label is missing, destinationVersion falls back to "unknown".
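Because source is derived from the x-forwarded-for header, a caller can be steered onto a specific per-source override simply by setting that header, which is what the load tests further down do. A minimal single-request sketch (assuming the fortio pod is deployed):

$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl -H "x-forwarded-for: 10.28.11.20" http://service-node/env

With the overrides shown earlier, such a request is counted against the 50-requests-per-second bucket for service-node instead of the default 20.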
apiVersion: "config.istio.io/v1alpha2"
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.workload.name | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
---
apiVersion: "config.istio.io/v1alpha2"
kind: redisquota
metadata:
  name: handler
  namespace: istio-system
spec:
  redisServerUrl: redis-ratelimit.istio-system:6379
  connectionPoolSize: 10
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 500
    validDuration: 1s
    bucketDuration: 500ms
    rateLimitAlgorithm: ROLLING_WINDOW
    overrides:
    - dimensions:
        destination: service-go
      maxAmount: 50
    - dimensions:
        destination: service-node
        source: "10.28.11.20"
      maxAmount: 50
    - dimensions:
        destination: service-node
      maxAmount: 20
    - dimensions:
        destination: service-python
      maxAmount: 2
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: handler.redisquota
    instances:
    - requestcount.quota
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: service-go
    namespace: default
  - name: service-node
    namespace: default
  - name: service-python
    namespace: default
The FIXED_WINDOW algorithm can let through bursts of up to twice the configured request rate, because requests arriving at the tail of one window and the head of the next are counted against different windows.
The ROLLING_WINDOW algorithm is more accurate, at the cost of extra Redis resources.
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  match: match(request.headers["cookie"], "user=*") == false
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota
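Assuming this conditional rule is applied, requests that carry a cookie beginning with user= skip the quota check entirely, while anonymous requests are still rate limited. A quick way to try it (the cookie value user=test is made up for illustration):

$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl -H "cookie: user=test" http://service-go/env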
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - service: '*'
$ kubectl apply -f service/node/service-node.yaml
$ kubectl apply -f service/lua/service-lua.yaml
$ kubectl apply -f service/python/service-python.yaml
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
service-go-v1-7cc5c6f574-488rs 2/2 Running 0 15m
service-go-v2-7656dcc478-bfq5x 2/2 Running 0 15m
service-lua-v1-5c9bcb7778-d7qwp 2/2 Running 0 3m12s
service-lua-v2-75cb5cdf8-g9vht 2/2 Running 0 3m12s
service-node-v1-d44b9bf7b-z7vbr 2/2 Running 0 3m11s
service-node-v2-86545d9796-rgtxw 2/2 Running 0 3m10s
service-python-v1-79fc5849fd-xgfkn 2/2 Running 0 3m9s
service-python-v2-7b6864b96b-5w6cj 2/2 Running 0 3m15s
$ kubectl apply -f kubernetes/fortio.yaml
$ kubectl apply -f istio/resilience/quota-mem-ratelimit.yaml
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://service-go/env
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
date: Wed, 16 Jan 2019 15:33:02 GMT
content-length: 19
x-envoy-upstream-service-time: 226
server: envoy
{"message":"go v1"}
# 30 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 30 -n 300 -loglevel Error http://service-go/env
15:33:36 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 30 queries per second, 2->2 procs, for 300 calls: http://service-go/env
Aggregated Function Time : count 300 avg 0.0086544419 +/- 0.005944 min 0.002929143 max 0.065596074 sum 2.59633258
# target 50% 0.007375
# target 75% 0.00938095
# target 90% 0.0115
# target 99% 0.0325
# target 99.9% 0.0647567
Sockets used: 4 (for perfect keepalive, would be 4)
Code 200 : 300 (100.0 %)
All done 300 calls (plus 0 warmup) 8.654 ms avg, 30.0 qps
# 50 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 50 -n 500 -loglevel Error http://service-go/env
15:34:17 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 50 queries per second, 2->2 procs, for 500 calls: http://service-go/env
Aggregated Function Time : count 500 avg 0.0086848862 +/- 0.005076 min 0.00307391 max 0.05419281 sum 4.34244311
# target 50% 0.0075
# target 75% 0.00959459
# target 90% 0.0132857
# target 99% 0.03
# target 99.9% 0.0531446
Sockets used: 4 (for perfect keepalive, would be 4)
Code 200 : 500 (100.0 %)
All done 500 calls (plus 0 warmup) 8.685 ms avg, 50.0 qps
# 60 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 60 -n 600 -loglevel Error http://service-go/env
15:35:28 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 60 queries per second, 2->2 procs, for 600 calls: http://service-go/env
Aggregated Function Time : count 600 avg 0.0090870522 +/- 0.008314 min 0.002537502 max 0.169680378 sum 5.45223134
# target 50% 0.00748529
# target 75% 0.0101538
# target 90% 0.0153548
# target 99% 0.029375
# target 99.9% 0.163872
Sockets used: 23 (for perfect keepalive, would be 4)
Code 200 : 580 (96.7 %)
Code 429 : 20 (3.3 %)
All done 600 calls (plus 0 warmup) 9.087 ms avg, 59.9 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://service-node/env
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
content-length: 77
date: Wed, 16 Jan 2019 15:36:13 GMT
x-envoy-upstream-service-time: 1187
server: envoy
{"message":"node v2","upstream":[{"message":"go v1","response_time":"0.51"}]}
# 20 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 20 -n 200 -loglevel Error http://service-node/env
15:37:51 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 20 queries per second, 2->2 procs, for 200 calls: http://service-node/env
Aggregated Sleep Time : count 196 avg -0.21285915 +/- 1.055 min -4.8433788589999995 max 0.190438028 sum -41.7203939
# range, mid point, percentile, count
>= -4.84338 <= -0.001 , -2.42219 , 18.37, 36
> 0.003 <= 0.004 , 0.0035 , 20.41, 4
> 0.011 <= 0.013 , 0.012 , 20.92, 1
> 0.015 <= 0.017 , 0.016 , 21.43, 1
> 0.069 <= 0.079 , 0.074 , 21.94, 1
> 0.089 <= 0.099 , 0.094 , 24.49, 5
> 0.099 <= 0.119 , 0.109 , 28.57, 8
> 0.119 <= 0.139 , 0.129 , 33.67, 10
> 0.139 <= 0.159 , 0.149 , 38.27, 9
> 0.159 <= 0.179 , 0.169 , 68.37, 59
> 0.179 <= 0.190438 , 0.184719 , 100.00, 62
# target 50% 0.166797
WARNING 18.37% of sleep were falling behind
Aggregated Function Time : count 200 avg 0.07655831 +/- 0.3601 min 0.007514854 max 5.046878744 sum 15.311662
# target 50% 0.0258696
# target 75% 0.045
# target 90% 0.104
# target 99% 0.55
# target 99.9% 5.0375
Sockets used: 4 (for perfect keepalive, would be 4)
Code 200 : 200 (100.0 %)
All done 200 calls (plus 0 warmup) 76.558 ms avg, 18.1 qps
# 30 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 30 -n 300 -loglevel Error http://service-node/env
15:38:36 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 30 queries per second, 2->2 procs, for 300 calls: http://service-node/env
Aggregated Sleep Time : count 296 avg 0.035638851 +/- 0.1206 min -0.420611573 max 0.132597685 sum 10.5491
# range, mid point, percentile, count
>= -0.420612 <= -0.001 , -0.210806 , 24.66, 73
> -0.001 <= 0 , -0.0005 , 25.00, 1
...
# target 50% 0.0934
WARNING 24.66% of sleep were falling behind
Aggregated Function Time : count 300 avg 0.06131494 +/- 0.08193 min 0.001977589 max 0.42055696 sum 18.3944819
# target 50% 0.03
# target 75% 0.0628571
# target 90% 0.175
# target 99% 0.4
# target 99.9% 0.418501
Sockets used: 55 (for perfect keepalive, would be 4)
Code 200 : 249 (83.0 %)
Code 429 : 51 (17.0 %)
All done 300 calls (plus 0 warmup) 61.315 ms avg, 29.9 qps
# 30 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 30 -n 300 -loglevel Error -H "x-forwarded-for: 10.28.11.20" http://service-node/env
15:40:34 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 30 queries per second, 2->2 procs, for 300 calls: http://service-node/env
Aggregated Sleep Time : count 296 avg -1.4901022 +/- 1.952 min -6.08576837 max 0.123485559 sum -441.070241
# range, mid point, percentile, count
>= -6.08577 <= -0.001 , -3.04338 , 69.59, 206
...
# target 50% -1.72254
WARNING 69.59% of sleep were falling behind
Aggregated Function Time : count 300 avg 0.1177745 +/- 0.4236 min 0.008494289 max 5.14910151 sum 35.332351
# target 50% 0.0346875
# target 75% 0.0985714
# target 90% 0.25
# target 99% 0.55
# target 99.9% 5.12674
Sockets used: 4 (for perfect keepalive, would be 4)
Code 200 : 300 (100.0 %)
All done 300 calls (plus 0 warmup) 117.775 ms avg, 24.7 qps
# 50 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 50 -n 500 -loglevel Error -H "x-forwarded-for: 10.28.11.20" http://service-node/env
15:45:31 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 50 queries per second, 2->2 procs, for 500 calls: http://service-node/env
Aggregated Sleep Time : count 496 avg 0.0015264793 +/- 0.1077 min -0.382731569 max 0.078526418 sum 0.757133711
# range, mid point, percentile, count
>= -0.382732 <= -0.001 , -0.191866 , 25.40, 126
> -0.001 <= 0 , -0.0005 , 25.60, 1
...
> 0.069 <= 0.0785264 , 0.0737632 , 100.00, 34
# target 50% 0.0566056
WARNING 25.40% of sleep were falling behind
Aggregated Function Time : count 500 avg 0.039103632 +/- 0.05723 min 0.001972061 max 0.450959277 sum 19.5518159
# target 50% 0.0175385
# target 75% 0.0323529
# target 90% 0.0975
# target 99% 0.3
# target 99.9% 0.450719
Sockets used: 7 (for perfect keepalive, would be 4)
Code 200 : 497 (99.4 %)
Code 429 : 3 (0.6 %)
All done 500 calls (plus 0 warmup) 39.104 ms avg, 48.4 qps
# 60 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 60 -n 600 -loglevel Error -H "x-forwarded-for: 10.28.11.20" http://service-node/env
15:50:24 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 60 queries per second, 2->2 procs, for 600 calls: http://service-node/env
Aggregated Sleep Time : count 596 avg -0.081667759 +/- 0.1592 min -0.626635518 max 0.064876123 sum -48.6739846
# range, mid point, percentile, count
>= -0.626636 <= -0.001 , -0.313818 , 51.01, 304
> 0 <= 0.001 , 0.0005 , 51.34, 2
...
> 0.059 <= 0.0648761 , 0.0619381 , 100.00, 14
# target 50% -0.0133888
WARNING 51.01% of sleep were falling behind
Aggregated Function Time : count 600 avg 0.04532505 +/- 0.04985 min 0.001904423 max 0.304644243 sum 27.1950299
# target 50% 0.0208163
# target 75% 0.07
# target 90% 0.1025
# target 99% 0.233333
# target 99.9% 0.303251
Sockets used: 19 (for perfect keepalive, would be 4)
Code 200 : 585 (97.5 %)
Code 429 : 15 (2.5 %)
All done 600 calls (plus 0 warmup) 45.325 ms avg, 59.9 qps
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://service-python/env
HTTP/1.1 200 OK
content-type: application/json
content-length: 178
server: envoy
date: Wed, 16 Jan 2019 15:47:30 GMT
x-envoy-upstream-service-time: 366
{"message":"python v2","upstream":[{"message":"lua v2","response_time":0.19},{"message":"node v2","response_time":0.18,"upstream":[{"message":"go v1","response_time":"0.02"}]}]}
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -qps 1 -n 10 -loglevel Error http://service-python/env
15:48:02 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 1 queries per second, 2->2 procs, for 10 calls: http://service-python/env
Aggregated Function Time : count 10 avg 0.45553668 +/- 0.5547 min 0.003725253 max 1.4107851249999999 sum 4.55536678
# target 50% 0.18
# target 75% 1.06846
# target 90% 1.27386
# target 99% 1.39709
# target 99.9% 1.40942
Sockets used: 6 (for perfect keepalive, would be 4)
Code 200 : 5 (50.0 %)
Code 429 : 5 (50.0 %)
All done 10 calls (plus 0 warmup) 455.537 ms avg, 0.6 qps
Istio implements rate limiting through quotas, but the limits are not perfectly accurate; some deviation is possible, so keep this in mind when relying on them.
$ kubectl delete -f kubernetes/fortio.yaml
$ kubectl delete -f istio/resilience/quota-mem-ratelimit.yaml
$ kubectl delete -f service/node/service-node.yaml
$ kubectl delete -f service/lua/service-lua.yaml
$ kubectl delete -f service/python/service-python.yaml