2020 New Year Benchmark of Popular RPC Frameworks

Posted by Go语言中文网


Fundamentally, sending a letter by carrier pigeon is also a kind of RPC call, just a slow and unreliable one. Modern remote procedure calls are generally implemented directly on top of TCP or HTTP. Exposing a service over HTTP is straightforward: a RESTful interface provides a generic API, and client calls are simple. RPC implemented directly on TCP performs better and suits high-throughput, low-latency scenarios.

Recently, 百晓生 provided an up-to-date 2020 performance comparison of RPC frameworks. I have cleaned it up and published it here; the test code can be found at rpcx-benchmark[1].

Earlier benchmark comparisons:

  • Benchmark of popular RPC frameworks, 2018 Spring edition [2]
  • Performance comparison of distributed RPC frameworks [3]

As with any ranking, there will be objections. They generally fall into a few categories:

  • Conflict of interest: since I am the author of rpcx, would rpcx get special treatment? No: the test code is fully open source, so anyone can review it.
  • The tests are not comprehensive: true; no single benchmark satisfies every scenario. Some workloads are CPU-bound, some I/O-bound, and some memory-bound, and one benchmark cannot cover them all. This test targets a single scenario: serializing and deserializing a fairly large protobuf message to simulate business processing. (The code has a delay parameter for simulating slow services, but judging from this run's results that scenario was not covered. CPU-bound workloads could be added by simulating CPU consumption, for example by computing factorials or by mining.)
  • The results are not comprehensive: this run collected only concurrency and latency; server and client CPU/memory usage were not measured.
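A CPU-bound handler of the kind mentioned above could be simulated along these lines. This is a hypothetical helper, not part of the benchmark code; a `math/big` factorial stands in for "mining":

```go
package main

import (
	"fmt"
	"math/big"
	"time"
)

// factorial burns CPU deterministically: n! computed with big.Int
// so that large n stays exact instead of overflowing int64.
func factorial(n int64) *big.Int {
	result := big.NewInt(1)
	for i := int64(2); i <= n; i++ {
		result.Mul(result, big.NewInt(i))
	}
	return result
}

// simulateCPULoad models a CPU-bound RPC handler by computing n!
// and reporting how long the computation took.
func simulateCPULoad(n int64) time.Duration {
	start := time.Now()
	factorial(n)
	return time.Since(start)
}

func main() {
	fmt.Println(factorial(10)) // 3628800
	fmt.Println("busy for:", simulateCPULoad(20000))
}
```

Tuning `n` lets you dial in roughly how much CPU time each simulated request consumes.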

This test measures only framework performance. When evaluating a framework, you should also consider its features, ease of use, community activity, and so on.

That is the background for the tests.

Test Environment

  • CPU: two Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz processors, 20 physical cores in total, 40 logical cores with hyper-threading enabled
  • Memory: 32 GB
  • Disk: SSD

Test Data

Payload

Every framework transports the same protobuf object, serialized on the client side and filled with some test data. The serialized payload is 581 bytes; each framework then adds its own framing overhead of varying size.

On receiving the data, the server deserializes it, sets a few fields, serializes the result, and returns it to the client.

As you can see, the business processing consists mainly of serializing and deserializing this object, and those operations are identical for every framework.

syntax = "proto2";

package proto;

option optimize_for = SPEED;

message BenchmarkMessage {
  required string field1 = 1;
  optional string field9 = 9;
  optional string field18 = 18;
  optional bool field80 = 80 [default=false];
  optional bool field81 = 81 [default=true];
  required int32 field2 = 2;
  required int32 field3 = 3;
  optional int32 field280 = 280;
  optional int32 field6 = 6 [default=0];
  optional int64 field22 = 22;
  optional string field4 = 4;
  repeated fixed64 field5 = 5;
  optional bool field59 = 59 [default=false];
  optional string field7 = 7;
  optional int32 field16 = 16;
  optional int32 field130 = 130 [default=0];
  optional bool field12 = 12 [default=true];
  optional bool field17 = 17 [default=true];
  optional bool field13 = 13 [default=true];
  optional bool field14 = 14 [default=true];
  optional int32 field104 = 104 [default=0];
  optional int32 field100 = 100 [default=0];
  optional int32 field101 = 101 [default=0];
  optional string field102 = 102;
  optional string field103 = 103;
  optional int32 field29 = 29 [default=0];
  optional bool field30 = 30 [default=false];
  optional int32 field60 = 60 [default=-1];
  optional int32 field271 = 271 [default=-1];
  optional int32 field272 = 272 [default=-1];
  optional int32 field150 = 150;
  optional int32 field23 = 23 [default=0];
  optional bool field24 = 24 [default=false];
  optional int32 field25 = 25 [default=0];
  optional bool field78 = 78;
  optional int32 field67 = 67 [default=0];
  optional int32 field68 = 68;
  optional int32 field128 = 128 [default=0];
  optional string field129 = 129 [default="xxxxxxxxxxxxxxxxxxxxx"];
  optional int32 field131 = 131 [default=0];
}

Test Results

Throughput comparison

Latency comparison

Full test data

dubbo

Server:

ulimit -n 1000000
nohup java -jar dubbo-bench-provider-2.7.5.jar zookeeper://10.222.77.227:2181 &

Client:

ulimit -n 1000000
nohup java -jar dubbo-bench-consumer-2.7.5.jar 100 10000000 zookeeper://10.222.77.227:2181 > result.log 2>&1 &

c=100, n=10000000

throughput  (TPS)    : 10590
mean: 1.323284
median: 1.000000
max: 3105.000000
min: 0.000000
99P: 2.000000

c=1000, n=10000000

throughput  (TPS)    : 9577
mean: 14.678306
median: 12.000000
max: 3249.000000
min: 0.000000
99P: 14.000000

Go standard library

c=100,n=10000000

took 64733.274973 ms for 10000000 requests
2020/01/18 15:27:09 sent requests : 10000000
2020/01/18 15:27:09 received requests : 10000000
2020/01/18 15:27:09 received requests_OK : 10000000
2020/01/18 15:27:09 throughput (TPS) : 154480
2020/01/18 15:27:09 mean: 611249 ns, median: 511101 ns, max: 8589768 ns, min: 75924 ns, p99.9: 2464773 ns
2020/01/18 15:27:09 mean: 0 ms, median: 0 ms, max: 8 ms, min: 0 ms, p99: 2 ms

c=1000,n=10000000

took 55479.341019 ms for 10000000 requests
2020/01/18 15:30:50 sent requests : 10000000
2020/01/18 15:30:50 received requests : 10000000
2020/01/18 15:30:50 received requests_OK : 10000000
2020/01/18 15:30:50 throughput (TPS) : 180247
2020/01/18 15:30:50 mean: 5206044 ns, median: 5163272 ns, max: 29196701 ns, min: 86510 ns, p99.9: 13032492 ns
2020/01/18 15:30:50 mean: 5 ms, median: 5 ms, max: 29 ms, min: 0 ms, p99: 13 ms

grpc

c=100,n=10000000

took 91803 ms for 10000000 requests
2020/01/18 15:45:00 grpc_mclient.go:109: INFO : sent requests : 10000000
2020/01/18 15:45:00 grpc_mclient.go:110: INFO : received requests : 10000000
2020/01/18 15:45:00 grpc_mclient.go:111: INFO : received requests_OK : 10000000
2020/01/18 15:45:00 grpc_mclient.go:112: INFO : throughput (TPS) : 108928
2020/01/18 15:45:00 grpc_mclient.go:113: INFO : mean: 857796 ns, median: 672012 ns, max: 43451467 ns, min: 125892 ns, p99: 6719222 ns
2020/01/18 15:45:00 grpc_mclient.go:114: INFO : mean: 0 ms, median: 0 ms, max: 43 ms, min: 0 ms, p99: 6 ms

c=1000,n=10000000

took 75736 ms for 10000000 requests
2020/01/18 15:46:44 grpc_mclient.go:109: INFO : sent requests : 10000000
2020/01/18 15:46:44 grpc_mclient.go:110: INFO : received requests : 10000000
2020/01/18 15:46:44 grpc_mclient.go:111: INFO : received requests_OK : 10000000
2020/01/18 15:46:44 grpc_mclient.go:112: INFO : throughput (TPS) : 132037
2020/01/18 15:46:44 grpc_mclient.go:113: INFO : mean: 7148157 ns, median: 5975982 ns, max: 133300722 ns, min: 137300 ns, p99: 58021151 ns
2020/01/18 15:46:44 grpc_mclient.go:114: INFO : mean: 7 ms, median: 5 ms, max: 133 ms, min: 0 ms, p99: 58 ms

rpcx

c=100,n=10000000

took 99082 ms for 10000000 requests
2020/01/18 16:02:14 rpcx_mclient.go:121: INFO : sent requests : 10000000
2020/01/18 16:02:14 rpcx_mclient.go:122: INFO : received requests : 10000000
2020/01/18 16:02:14 rpcx_mclient.go:123: INFO : received requests_OK : 10000000
2020/01/18 16:02:14 rpcx_mclient.go:124: INFO : throughput (TPS) : 100926
2020/01/18 16:02:14 rpcx_mclient.go:125: INFO : mean: 954794 ns, median: 776113 ns, max: 13546797 ns, min: 80234 ns, p99: 4047876 ns
2020/01/18 16:02:14 rpcx_mclient.go:126: INFO : mean: 0 ms, median: 0 ms, max: 13 ms, min: 0 ms, p99: 4 ms

c=1000,n=10000000

took 58275 ms for 10000000 requests
2020/01/18 16:03:45 rpcx_mclient.go:121: INFO : sent requests : 10000000
2020/01/18 16:03:45 rpcx_mclient.go:122: INFO : received requests : 10000000
2020/01/18 16:03:45 rpcx_mclient.go:123: INFO : received requests_OK : 10000000
2020/01/18 16:03:45 rpcx_mclient.go:124: INFO : throughput (TPS) : 171600
2020/01/18 16:03:45 rpcx_mclient.go:125: INFO : mean: 5474568 ns, median: 5369917 ns, max: 33885454 ns, min: 78216 ns, p99: 16806552 ns
2020/01/18 16:03:45 rpcx_mclient.go:126: INFO : mean: 5 ms, median: 5 ms, max: 33 ms, min: 0 ms, p99: 16 ms

async-rpcx

c=100,n=10000000

took 54582 ms for 10000000 requests
2020/01/18 16:10:14 rpcx_mclient.go:145: INFO : sent requests : 10000000
2020/01/18 16:10:14 rpcx_mclient.go:146: INFO : received requests : 10000000
2020/01/18 16:10:14 rpcx_mclient.go:147: INFO : received requests_OK : 10000000
2020/01/18 16:10:14 rpcx_mclient.go:148: INFO : throughput (TPS) : 183210
2020/01/18 16:10:14 rpcx_mclient.go:149: INFO : mean: 235075061 ns, median: 238628422 ns, max: 713519584 ns, min: 1853214 ns, p99: 591043364 ns
2020/01/18 16:10:14 rpcx_mclient.go:150: INFO : mean: 235 ms, median: 238 ms, max: 713 ms, min: 1 ms, p99: 591 ms

c=1000,n=10000000

took 55263 ms for 10000000 requests
2020/01/18 16:07:21 rpcx_mclient.go:145: INFO : sent requests : 10000000
2020/01/18 16:07:21 rpcx_mclient.go:146: INFO : received requests : 10000000
2020/01/18 16:07:21 rpcx_mclient.go:147: INFO : received requests_OK : 10000000
2020/01/18 16:07:21 rpcx_mclient.go:148: INFO : throughput (TPS) : 180952
2020/01/18 16:07:21 rpcx_mclient.go:149: INFO : mean: 843985176 ns, median: 825286868 ns, max: 1645398520 ns, min: 15688776 ns, p99: 1532695207 ns
2020/01/18 16:07:21 rpcx_mclient.go:150: INFO : mean: 843 ms, median: 825 ms, max: 1645 ms, min: 15 ms, p99: 1532 ms

thrift

c=100,n=10000000

java -cp thrift-1.0-SNAPSHOT.jar com.colobu.thrift.AppClient 10.41.15.226 100 10000000
sent requests : 10000000
received requests : 10000000
received requests_OK : 10000000
throughput (TPS) : 18798
mean: 0.674689
median: 1.000000
max: 19.000000
min: 0.000000
99P: 4.000000

c=1000,n=10000000

java -cp thrift-1.0-SNAPSHOT.jar com.colobu.thrift.AppClient 10.41.15.226 1000 10000000
sent requests : 10000000
received requests : 10000000
received requests_OK : 10000000
throughput (TPS) : 19151
mean: 7.158192
median: 7.000000
max: 181.000000
min: 0.000000
99P: 19.000000

tarsgo

c=100,n=10000000

tarsgo_mclient.go:96: INFO : took 146657 ms for 10000000 requests
2020/01/18 16:40:37 tarsgo_mclient.go:113: INFO : sent requests : 10000000
2020/01/18 16:40:37 tarsgo_mclient.go:114: INFO : received requests : 10000000
2020/01/18 16:40:37 tarsgo_mclient.go:115: INFO : received requests_OK : 0
2020/01/18 16:40:37 tarsgo_mclient.go:116: INFO : throughput (TPS) : 68186
2020/01/18 16:40:37 tarsgo_mclient.go:117: INFO : mean: 1463218 ns, median: 897658 ns, max: 833883541 ns, min: 40970 ns, p99: 24386698 ns
2020/01/18 16:40:37 tarsgo_mclient.go:118: INFO : mean: 1 ms, median: 0 ms, max: 833 ms, min: 0 ms, p99: 24 ms

c=1000,n=10000000

./tars_client -config benchmark.conf
2020/01/18 16:42:19 tarsgo_mclient.go:42: INFO : concurrency: 1000
requests per client: 10000
2020/01/18 16:42:19 tarsgo_mclient.go:47: INFO : message size: 581 bytes
2020/01/18 16:44:33 tarsgo_mclient.go:96: INFO : took 133438 ms for 10000000 requests
2020/01/18 16:44:40 tarsgo_mclient.go:113: INFO : sent requests : 10000000
2020/01/18 16:44:40 tarsgo_mclient.go:114: INFO : received requests : 10000000
2020/01/18 16:44:40 tarsgo_mclient.go:115: INFO : received requests_OK : 0
2020/01/18 16:44:40 tarsgo_mclient.go:116: INFO : throughput (TPS) : 74941
2020/01/18 16:44:40 tarsgo_mclient.go:117: INFO : mean: 13189350 ns, median: 255591 ns, max: 1020977156 ns, min: 41305 ns, p99: 808153981 ns
2020/01/18 16:44:40 tarsgo_mclient.go:118: INFO : mean: 13 ms, median: 0 ms, max: 1020 ms, min: 0 ms, p99: 808 ms

hprose

c=100,n=10000000

info took 85780 ms for 10000000 requests
2020/01/18 16:53:47 info sent requests : 10000000
2020/01/18 16:53:47 info received requests : 10000000
2020/01/18 16:53:47 info received requests_OK : 10000000
2020/01/18 16:53:47 info throughput (TPS) : 116577
2020/01/18 16:53:47 info mean: 855396 ns, median: 611286 ns, max: 43307064 ns, min: 74772 ns, p99: 8799176 ns
2020/01/18 16:53:47 info mean: 0 ms, median: 0 ms, max: 43 ms, min: 0 ms, p99: 8 ms

c=1000,n=10000000

info took 55890 ms for 10000000 requests
2020/01/18 16:55:23 info sent requests : 10000000
2020/01/18 16:55:23 info received requests : 10000000
2020/01/18 16:55:23 info received requests_OK : 10000000
2020/01/18 16:55:23 info throughput (TPS) : 178922
2020/01/18 16:55:23 info mean: 5574522 ns, median: 5450258 ns, max: 112631418 ns, min: 86944 ns, p99: 15302910 ns
2020/01/18 16:55:23 info mean: 5 ms, median: 5 ms, max: 112 ms, min: 0 ms, p99: 15 ms

go-micro

c=100,n=1000000

./micro_client
2020/01/18 16:08:32 gomicro_client.go:43: INFO : 192.168.1.226:8972 1000000 100
2020/01/18 16:08:32 gomicro_client.go:47: INFO : concurrency: 100
requests per client: 10000

2020/01/18 16:08:32 gomicro_client.go:52: INFO : message size: 581 bytes

2020/01/18 16:10:30 gomicro_client.go:102: INFO : took 117938 ms for 1000000 requests
2020/01/18 16:10:31 gomicro_client.go:119: INFO : sent requests : 1000000
2020/01/18 16:10:31 gomicro_client.go:120: INFO : received requests : 1000000
2020/01/18 16:10:31 gomicro_client.go:121: INFO : received requests_OK : 0
2020/01/18 16:10:31 gomicro_client.go:122: INFO : throughput (TPS) : 8479
2020/01/18 16:10:31 gomicro_client.go:123: INFO : mean: 11462696 ns, median: 10945519 ns, max: 1015949740 ns, min: 10513193 ns, p99: 17612504 ns
2020/01/18 16:10:31 gomicro_client.go:124: INFO : mean: 11 ms, median: 10 ms, max: 1015 ms, min: 10 ms, p99: 17 ms

c=1000,n=1000000

2020/01/18 16:11:09 gomicro_client.go:43: INFO : 192.168.1.226:8972 1000000 1000
2020/01/18 16:11:09 gomicro_client.go:47: INFO : concurrency: 1000
requests per client: 1000

2020/01/18 16:11:09 gomicro_client.go:52: INFO : message size: 581 bytes

2020/01/18 16:12:01 gomicro_client.go:102: INFO : took 51922 ms for 1000000 requests
2020/01/18 16:12:01 gomicro_client.go:119: INFO : sent requests : 1000000
2020/01/18 16:12:01 gomicro_client.go:120: INFO : received requests : 1000000
2020/01/18 16:12:01 gomicro_client.go:121: INFO : received requests_OK : 0
2020/01/18 16:12:01 gomicro_client.go:122: INFO : throughput (TPS) : 19259
2020/01/18 16:12:01 gomicro_client.go:123: INFO : mean: 46115145 ns, median: 22666560 ns, max: 3060928934 ns, min: 10578529 ns, p99: 1050307726 ns
2020/01/18 16:12:01 gomicro_client.go:124: INFO : mean: 46 ms, median: 22 ms, max: 3060 ms, min: 10 ms, p99: 1050 ms

References

[1] rpcx-benchmark: https://github.com/rpcx-ecosystem/rpcx-benchmark

[2] Benchmark of popular RPC frameworks, 2018 Spring edition: https://colobu.com/2018/01/31/benchmark-2018-spring-of-popular-rpc-frameworks/

[3] Performance comparison of distributed RPC frameworks: https://colobu.com/2016/09/05/benchmarks-of-popular-rpc-frameworks/

