S3 upload error: Data read has a different length than the expected

Posted by 小白码上飞



Error messages

When uploading files to S3, we ran into a few kinds of errors.

The first one: Data read has a different length than the expected: dataLength=15932; expectedLength=19241;

This error means that, while uploading, the SDK found that the actual length of the data read from the file did not match the expected length.

The full stack trace:

com.amazonaws.SdkClientException: Data read has a different length than the expected: dataLength=15932; expectedLength=19241; includeSkipped=false; in.getClass()=class com.amazonaws.internal.ResettableInputStream; markedSupported=true; marked=0; resetSinceLastMarked=false; markCount=1; resetCount=0
        at com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:151)
        at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:109)
        at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
        at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
        at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
        at com.amazonaws.auth.AwsChunkedEncodingInputStream.setUpNextChunk(AwsChunkedEncodingInputStream.java:306)
        at com.amazonaws.auth.AwsChunkedEncodingInputStream.read(AwsChunkedEncodingInputStream.java:172)
        at org.apache.http.entity.InputStreamEntity.writeTo(InputStreamEntity.java:140)
        at com.amazonaws.http.RepeatableInputStreamRequestEntity.writeTo(RepeatableInputStreamRequestEntity.java:160)
        at org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:156)
        at org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:160)
        at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:238)
        at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doSendRequest(SdkHttpRequestExecutor.java:63)
        at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
        at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
        at org.apache.http.impl.client.InternalHttpClient.doExecute$original$mo6pBbRM(InternalHttpClient.java:185)
        at org.apache.http.impl.client.InternalHttpClient.doExecute$original$mo6pBbRM$accessor$0Mzlaxvy(InternalHttpClient.java)
        at org.apache.http.impl.client.InternalHttpClient$auxiliary$3bqvKzTe.call(Unknown Source)
        at org.apache.skywalking.apm.agent.core.plugin.interceptor.enhance.InstMethodsInter.intercept(InstMethodsInter.java:95)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
        at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1258)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1074)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:745)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:719)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:701)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:669)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:651)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:515)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4443)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4390)
        at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1774)
        at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1628)

The second one: Unable to calculate MD5 hash: /tmp/78c20e3adeb1202ade4ceb002cf4bd9e.png (No such file or directory)

This error means that before uploading a file, S3 computes an MD5 checksum of it, and during that step it found that the specified file does not exist.

This one has a much shorter stack trace:

com.amazonaws.SdkClientException: Unable to calculate MD5 hash: /tmp/78c20e3adeb1202ade4ceb002cf4bd9e.png (No such file or directory)
        at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1675)
        at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1628)

Guessing at the cause

My guess was that the first error happens because the file changes while S3 is uploading it. You can also see that in the error messages expectedLength is almost always larger than dataLength. Could it be that the file is being modified or rewritten during the upload? While it is being rewritten the file is incomplete, so the lengths do not match.

Checking the code

So I went through the code, and the upload logic roughly works like this (a sketch of it follows the list):

  1. Concatenate the file name with a timestamp and compute its MD5 value. Use that value as the S3 key (let's call it md5key).
  2. Return md5key right away and save it to the database. The actual upload is then done asynchronously on a thread pool:
    1. Fetch the attachment link passed in by the business side and save the file on the local server as md5key.jpg.
    2. Call the S3 service to upload md5key.jpg.
    3. Delete md5key.jpg from the server.
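
Here is a minimal sketch of that flawed flow, assuming the AWS SDK for Java v1 seen in the stack traces; the class, field, and path names (AttachmentUploader, the /tmp directory, the ".jpg" suffix) are my own assumptions, not the original code:

import com.amazonaws.services.s3.AmazonS3;
import org.apache.commons.codec.digest.DigestUtils;

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.ExecutorService;

public class AttachmentUploader {

    private final AmazonS3 s3;
    private final ExecutorService pool;
    private final String bucket;

    public AttachmentUploader(AmazonS3 s3, ExecutorService pool, String bucket) {
        this.s3 = s3;
        this.pool = pool;
        this.bucket = bucket;
    }

    // Returns the S3 key immediately; the actual upload runs asynchronously.
    public String upload(String attachmentUrl) {
        // Step 1: key = MD5(file name + timestamp); the attachment URL stands in
        // for the file name here. Identical links handled in the same millisecond
        // get the SAME key -- the root of the race described below.
        String md5Key = DigestUtils.md5Hex(attachmentUrl + System.currentTimeMillis());

        pool.submit(() -> {
            // Step 2.1: every task with the same key shares this local path.
            Path localFile = Paths.get("/tmp", md5Key + ".jpg");
            try (InputStream in = new URL(attachmentUrl).openStream()) {
                Files.copy(in, localFile, StandardCopyOption.REPLACE_EXISTING);
                // Step 2.2: upload the local file to S3.
                s3.putObject(bucket, md5Key, localFile.toFile());
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                // Step 2.3: delete the local file.
                try { Files.deleteIfExists(localFile); } catch (Exception ignored) { }
            }
        });
        return md5Key;
    }
}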

This is exactly where the problem is!

  1. If the business side passes in several identical attachment links (link A, link A, link A), and the md5key for all of them happens to be generated within the same millisecond, don't all three links end up with the same md5key?
  2. When the thread pool processes these three files, thread 1 writes the file to md5key.jpg and starts uploading. Meanwhile thread 2 also starts writing to md5key.jpg, so thread 1's upload finds that the file length no longer matches and the upload fails.
  3. After thread 2 finishes writing and uploading md5key.jpg, thread 3 starts writing. By the time thread 3 finishes writing and is about to upload, thread 2 happens to have just finished its upload and deleted md5key.jpg, so thread 3 finds the file gone and throws the second error: the file does not exist.

Checking the failed cases confirmed that this was indeed the cause. There really is a lot to keep in mind in concurrent scenarios.

Conclusion

  1. The error "Data read has a different length than the expected" very likely means that the file was overwritten by another writing thread while it was waiting to be uploaded. That is a good angle to start troubleshooting from (a minimal fix sketch follows this list).
  2. The error "No such file or directory" means exactly what it says: the file cannot be found. So ask yourself why the file is gone, and check whether the program has any logic that deletes the file.
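
As one possible fix, make the local file unique per upload task, so that concurrent tasks can never overwrite or delete each other's file even if they end up with the same S3 key. A minimal sketch of the async task body, reusing the same (assumed) names and imports as the sketch above:

// Replace the shared "/tmp/" + md5Key + ".jpg" path with a per-task temp file.
Path localFile = null;
try {
    localFile = Files.createTempFile("attachment-", ".jpg"); // unique path per call
    try (InputStream in = new URL(attachmentUrl).openStream()) {
        Files.copy(in, localFile, StandardCopyOption.REPLACE_EXISTING);
    }
    s3.putObject(bucket, md5Key, localFile.toFile());
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (localFile != null) {
        try { Files.deleteIfExists(localFile); } catch (Exception ignored) { }
    }
}

If identical links must also be stored as separate S3 objects, the key itself should be made unique as well, for example by mixing in a random UUID rather than relying on a millisecond timestamp.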
