java.io.IOException: request to write '' bytes exceeds size in header of '' bytes for entry ''

【Posted】: 2016-10-03 11:55:25 【Question】:

I am using GZIPOutputStream to create a tar.gz file, and I added extra logic so that if any exception is caught while compressing the file, my code retries up to three times.

When I throw an IOException to test my retry logic, the retry fails with the following exception: java.io.IOException: request to write '4096' bytes exceeds size in header of '2644' bytes for entry 'Alldbtypes'

The exception is raised on this line: org.apache.commons.io.IOUtils.copyLarge(inputStream, tarStream);

private class CompressionStream extends GZIPOutputStream {
    // Use compression levels from the Deflater class
    public CompressionStream(OutputStream out, int compressionLevel) throws IOException {
        super(out);
        def.setLevel(compressionLevel);
    }
}

public void createTAR() {
    boolean isSuccessful = false;
    int count = 0;
    int maxTries = 3;
    // Retry loop: keep trying until the archive is written or maxTries is reached
    while (!isSuccessful) {
        InputStream inputStream = null;
        FileOutputStream outputStream = null;
        CompressionStream compressionStream = null;
        OutputStream md5OutputStream = null;
        TarArchiveOutputStream tarStream = null;
        try {
            inputStream = new BufferedInputStream(new FileInputStream(rawfile));
            File stagingPath = new File("C:\\Workarea\\6d22b6a3-564f-42b4-be83-9e1573a718cd\\b88beb62-aa65-4ad5-b46c-4f2e9c892259.tar.gz");
            boolean isDeleted = false;
            if (stagingPath.exists()) {
                isDeleted = stagingPath.delete();
                if (stagingPath.exists()) {
                    try {
                        FileUtils.forceDelete(stagingPath);
                    } catch (IOException ex) {
                        // ignore
                    }
                }
            }
            outputStream = new FileOutputStream(stagingPath);
            if (isCompressionEnabled) {
                compressionStream = new CompressionStream(outputStream, getCompressionLevel(om));
            }
            final MessageDigest outputDigest = MessageDigest.getInstance("MD5");
            md5OutputStream = new DigestOutputStream(isCompressionEnabled ? compressionStream : outputStream, outputDigest);
            tarStream = new TarArchiveOutputStream(new BufferedOutputStream(md5OutputStream));
            tarStream.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU);
            tarStream.setBigNumberMode(TarArchiveOutputStream.BIGNUMBER_STAR);
            TarArchiveEntry entry = new TarArchiveEntry("Alldbtypes");
            entry.setSize(getOriginalSize());                // size declared in the tar header
            entry.setModTime(getLastModified().getMillis());
            tarStream.putArchiveEntry(entry);
            org.apache.commons.io.IOUtils.copyLarge(inputStream, tarStream); // exception thrown here
            inputStream.close();
            tarStream.closeArchiveEntry();
            tarStream.finish();
            tarStream.close();
            String digest = Hex.encodeHexString(outputDigest.digest());
            setChecksum(digest);
            setIngested(DateTime.now());
            setOriginalSize(FileUtils.sizeOf(stagingPath));
            isSuccessful = true;
        } catch (IOException e) {
            if (++count == maxTries) {
                throw new RuntimeException("Exception: " + e.getMessage(), e);
            }
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException("MD5 hash algo not installed.", e);
        } catch (Exception e) {
            throw new RuntimeException("Exception: " + e.getMessage(), e);
        } finally {
            org.apache.commons.io.IOUtils.closeQuietly(inputStream);
            try {
                tarStream.flush();
                tarStream.finish();
            } catch (IOException e) {
                e.printStackTrace();
            }
            org.apache.commons.io.IOUtils.closeQuietly(tarStream);
            org.apache.commons.io.IOUtils.closeQuietly(compressionStream);
            org.apache.commons.io.IOUtils.closeQuietly(md5OutputStream);
            org.apache.commons.io.IOUtils.closeQuietly(outputStream);
        }
    }
}
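For reference, the message comes from the size check that Apache Commons Compress performs on every write: TarArchiveOutputStream compares the bytes written against the size declared via entry.setSize() and throws this IOException as soon as a write would exceed it. Below is a minimal, self-contained sketch (the entry name and byte counts are chosen only to mirror the message above) that reproduces the same error:

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

public class TarHeaderSizeDemo {
    public static void main(String[] args) {
        try (TarArchiveOutputStream tar =
                 new TarArchiveOutputStream(new ByteArrayOutputStream())) {
            TarArchiveEntry entry = new TarArchiveEntry("Alldbtypes");
            entry.setSize(2644);        // header declares 2644 bytes for this entry
            tar.putArchiveEntry(entry);
            tar.write(new byte[4096]);  // writing 4096 bytes exceeds the declared size
            tar.closeArchiveEntry();
        } catch (IOException e) {
            // prints: request to write '4096' bytes exceeds size in header of '2644' bytes for entry 'Alldbtypes'
            System.out.println(e.getMessage());
        }
    }
}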


【Answer 1】:

Case solved. The exception java.io.IOException: request to write '4096' bytes exceeds size in header of '2644' bytes for entry 'Alldbtypes' is thrown when the size declared for the file being compressed is incorrect:

TarArchiveEntry entry = new TarArchiveEntry("Alldbtypes");
entry.setSize(getOriginalSize()); 

In my code, the value behind getOriginalSize() is updated again at the end of the method (setOriginalSize(FileUtils.sizeOf(stagingPath))), so on the retry the original size had changed: it now held the compressed file size instead of the raw file size, which is why this exception was thrown.
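One way to avoid the mismatch, sketched here under the assumption that rawfile is the uncompressed source java.io.File used in the code above, is to derive the entry size from the source file itself each time the entry is built, rather than from a field that is overwritten after compression. The helper name below is hypothetical and not part of the original code:

// Hypothetical helper: builds the tar entry from the uncompressed source file so the
// header size always matches the bytes that copyLarge() will actually write.
private static TarArchiveEntry entryForSource(File rawfile) {
    TarArchiveEntry entry = new TarArchiveEntry("Alldbtypes");
    entry.setSize(rawfile.length());          // size of the uncompressed input, read fresh on every attempt
    entry.setModTime(rawfile.lastModified()); // does not depend on mutable object state
    return entry;
}

Alternatively, the TarArchiveEntry(File, String) constructor populates the size and modification time from the file directly: new TarArchiveEntry(rawfile, "Alldbtypes").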

