MediaMuxer video file size reduction (re-compress, decrease resolution)

【Title】: MediaMuxer video file size reducing (re-compress, decrease resolution) 【Posted】: 2015-07-08 16:52:05 【Question】:

I'm looking for an efficient way to reduce the size of a video (as a File, for upload), and the obvious answer is: lower the resolution! (Full HD or 4K isn't needed; plain HD is enough for me.) I've tried many approaches that should work across many API levels (I need API 10+), and the best was android-ffmpeg-java, but... on a fairly current flagship device the whole process takes about length_of_video*4 seconds, and the library adds 9 MB to my app size... No! (12 MB down to 1 MB is a nice result, but there are still too many drawbacks.)

So I decided to do it the native Android way, with MediaMuxer and MediaCodec, which are available from API 18 and API 16 respectively (sorry, users of older devices; but those often have "low-resolution" cameras anyway). The method below almost works: MediaMuxer does not respect MediaFormat.KEY_WIDTH and MediaFormat.KEY_HEIGHT, so the resulting File is "re-compressed" and slightly smaller, but its resolution is the same as in the original video File...

So, the question: how can I use MediaMuxer and the related classes and methods to compress and rescale / change the resolution of a video?

public File getCompressedFile(String videoPath) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(videoPath);
    int trackCount = extractor.getTrackCount();

    String filePath = videoPath.substring(0, videoPath.lastIndexOf(File.separator));
    String[] splitByDot = videoPath.split("\\.");
    String ext = "";
    if (splitByDot != null && splitByDot.length > 1)
        ext = splitByDot[splitByDot.length - 1];
    String fileName = videoPath.substring(videoPath.lastIndexOf(File.separator) + 1,
                    videoPath.length());
    if (ext.length() > 0)
        fileName = fileName.replace("." + ext, "_out." + ext);
    else
        fileName = fileName.concat("_out");

    final File outFile = new File(filePath, fileName);
    if (!outFile.exists())
        outFile.createNewFile();

    MediaMuxer muxer = new MediaMuxer(outFile.getAbsolutePath(),
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
    for (int i = 0; i < trackCount; i++) {
        extractor.selectTrack(i);
        MediaFormat format = extractor.getTrackFormat(i);
        String mime = format.getString(MediaFormat.KEY_MIME);
        if (mime != null && mime.startsWith("video")) {
            int currWidth = format.getInteger(MediaFormat.KEY_WIDTH);
            int currHeight = format.getInteger(MediaFormat.KEY_HEIGHT);
            format.setInteger(MediaFormat.KEY_WIDTH, currWidth > currHeight ? 960 : 540);
            format.setInteger(MediaFormat.KEY_HEIGHT, currWidth > currHeight ? 540 : 960);
            //API19 MediaFormat.KEY_MAX_WIDTH and KEY_MAX_HEIGHT
            format.setInteger("max-width", format.getInteger(MediaFormat.KEY_WIDTH));
            format.setInteger("max-height", format.getInteger(MediaFormat.KEY_HEIGHT));
        }
        int dstIndex = muxer.addTrack(format);
        indexMap.put(i, dstIndex);
    }

    boolean sawEOS = false;
    int bufferSize = 256 * 1024;
    int offset = 100;
    ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    muxer.start();
    while (!sawEOS) {
        bufferInfo.offset = offset;
        bufferInfo.size = extractor.readSampleData(dstBuf, offset);
        if (bufferInfo.size < 0) {
            sawEOS = true;
            bufferInfo.size = 0;
        } else {
            bufferInfo.presentationTimeUs = extractor.getSampleTime();
            bufferInfo.flags = extractor.getSampleFlags();
            int trackIndex = extractor.getSampleTrackIndex();
            muxer.writeSampleData(indexMap.get(trackIndex), dstBuf,
                    bufferInfo);
            extractor.advance();
        }
    }

    muxer.stop();
    muxer.release();

    return outFile;
}
PS: there is lots of useful stuff about the muxer here; the code above is based on MediaMuxerTest.java, method cloneMediaUsingMuxer.

【Comments】:

【Answer 1】:

You can try Intel INDE Media for Mobile; tutorials are at https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials. It has a sample that shows how to use it to transcode (i.e. re-compress) video files.

You can set a smaller resolution and/or bitrate for the output to get a smaller file: https://github.com/INDExOS/media-for-mobile/blob/master/Android/samples/apps/src/com/intel/inde/mp/samples/ComposerTranscodeCoreActivity.java

【Comments】:

Checked this, but INDE does not support MPEG4, see marked answer here

【Answer 2】:

MediaMuxer doesn't take part in compressing or scaling the video. All it does is take the H.264 output from MediaCodec and wrap it in a .mp4 file wrapper.

Looking at your code, you're extracting NAL units with MediaExtractor and immediately re-wrapping them with MediaMuxer. That should be very fast and have no effect on the video itself, because you're just repackaging the H.264.

To scale the video, you need to decode it with a MediaCodec decoder, feeding it the NAL units from MediaExtractor, and then re-encode it with a MediaCodec encoder, passing the frames on to MediaMuxer.

You've found bigflake.com; see also Grafika. Neither is exactly what you're looking for, but all the various pieces are there.

It's best to decode to a Surface, not a ByteBuffer. That requires API 18, but for sanity's sake it's better to forget MediaCodec existed before then. You need API 18 for MediaMuxer anyway.
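
As a minimal sketch of the wiring described above (the target size and bitrate are assumptions; inputFormat stands for the extractor's video track format and decoderOutputSurface for a SurfaceTexture-backed Surface such as bigflake's OutputSurface, both placeholders here):

MediaFormat outFormat = MediaFormat.createVideoFormat("video/avc", 1280, 720); // assumed target size
outFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
outFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2 * 1024 * 1024);               // assumed bitrate
outFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
outFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 10);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(outFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface encoderInput = encoder.createInputSurface();   // frames rendered here get encoded at 1280x720

MediaCodec decoder = MediaCodec.createDecoderByType(inputFormat.getString(MediaFormat.KEY_MIME));
decoder.configure(inputFormat, decoderOutputSurface, null, 0);  // decoder renders into the SurfaceTexture
encoder.start();
decoder.start();
// Then pump: MediaExtractor -> decoder -> draw the SurfaceTexture onto encoderInput via GL
// (this is where the rescaling happens) -> drain the encoder -> MediaMuxer.writeSampleData(...)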

【Comments】:

Thanks, I've read through some of the docs and tests and got what I wanted. Your comments and answers in various Stack Overflow questions were very helpful! I've used one of the tests plus Surface with multiple encoders and decoders :)

【Answer 3】:

Based on bigflake.com/mediacodec/ (an excellent source of knowledge about the media classes) I tried several approaches, and in the end ExtractDecodeEditEncodeMuxTest turned out to be very helpful. That test isn't described in the articles on the bigflake site, but it can be found HERE, next to the other classes mentioned in the text.

So I copied most of the code from the ExtractDecodeEditEncodeMuxTest class mentioned above, and here it is: VideoResolutionChanger. It gives me a 2 MB HD video from a 16 MB full-HD one. Good! And fast! On my device the whole process takes a bit longer than the input video's duration, e.g. a 10-second input -> 11-12 seconds of processing. With ffmpeg-java it would be about 40 seconds or more (and the app would be 9 MB bigger).

Here we go:

VideoResolutionChanger:

@TargetApi(18)
public class VideoResolutionChanger 

    private static final int TIMEOUT_USEC = 10000;

    private static final String OUTPUT_VIDEO_MIME_TYPE = "video/avc";
    private static final int OUTPUT_VIDEO_BIT_RATE = 2048 * 1024;
    private static final int OUTPUT_VIDEO_FRAME_RATE = 30;
    private static final int OUTPUT_VIDEO_IFRAME_INTERVAL = 10;
    private static final int OUTPUT_VIDEO_COLOR_FORMAT =
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface;

    private static final String OUTPUT_AUDIO_MIME_TYPE = "audio/mp4a-latm";
    private static final int OUTPUT_AUDIO_CHANNEL_COUNT = 2;
    private static final int OUTPUT_AUDIO_BIT_RATE = 128 * 1024;
    private static final int OUTPUT_AUDIO_AAC_PROFILE =
            MediaCodecInfo.CodecProfileLevel.AACObjectHE;
    private static final int OUTPUT_AUDIO_SAMPLE_RATE_HZ = 44100;

    private int mWidth = 1280;
    private int mHeight = 720;
    private String mOutputFile, mInputFile;

    public String changeResolution(File f)
            throws Throwable 
        mInputFile=f.getAbsolutePath();

        String filePath = mInputFile.substring(0, mInputFile.lastIndexOf(File.separator));
        String[] splitByDot = mInputFile.split("\\.");
        String ext="";
        if(splitByDot!=null && splitByDot.length>1)
            ext = splitByDot[splitByDot.length-1];
        String fileName = mInputFile.substring(mInputFile.lastIndexOf(File.separator)+1,
                mInputFile.length());
        if(ext.length()>0)
            fileName=fileName.replace("."+ext, "_out.mp4");
        else
            fileName=fileName.concat("_out.mp4");

        final File outFile = new File(Environment.getExternalStorageDirectory(), fileName);
        if(!outFile.exists())
            outFile.createNewFile();

        mOutputFile=outFile.getAbsolutePath();

        ChangerWrapper.changeResolutionInSeparatedThread(this);

        return mOutputFile;
    

    private static class ChangerWrapper implements Runnable 

        private Throwable mThrowable;
        private VideoResolutionChanger mChanger;

        private ChangerWrapper(VideoResolutionChanger changer) 
            mChanger = changer;
        

        @Override
        public void run() 
            try 
                mChanger.prepareAndChangeResolution();
             catch (Throwable th) 
                mThrowable = th;
            
        

        public static void changeResolutionInSeparatedThread(VideoResolutionChanger changer)
                throws Throwable 
            ChangerWrapper wrapper = new ChangerWrapper(changer);
            Thread th = new Thread(wrapper, ChangerWrapper.class.getSimpleName());
            th.start();
            th.join();
            if (wrapper.mThrowable != null)
                throw wrapper.mThrowable;
        
    

    private void prepareAndChangeResolution() throws Exception 
        Exception exception = null;

        MediaCodecInfo videoCodecInfo = selectCodec(OUTPUT_VIDEO_MIME_TYPE);
        if (videoCodecInfo == null)
            return;
        MediaCodecInfo audioCodecInfo = selectCodec(OUTPUT_AUDIO_MIME_TYPE);
        if (audioCodecInfo == null)
            return;

        MediaExtractor videoExtractor = null;
        MediaExtractor audioExtractor = null;
        OutputSurface outputSurface = null;
        MediaCodec videoDecoder = null;
        MediaCodec audioDecoder = null;
        MediaCodec videoEncoder = null;
        MediaCodec audioEncoder = null;
        MediaMuxer muxer = null;
        InputSurface inputSurface = null;
        try 
            videoExtractor = createExtractor();
            int videoInputTrack = getAndSelectVideoTrackIndex(videoExtractor);
            MediaFormat inputFormat = videoExtractor.getTrackFormat(videoInputTrack);

            MediaMetadataRetriever m = new MediaMetadataRetriever();
            m.setDataSource(mInputFile);
            int inputWidth, inputHeight;
            try 
                inputWidth = Integer.parseInt(m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH));
                inputHeight = Integer.parseInt(m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT));
             catch (Exception e) 
                Bitmap thumbnail = m.getFrameAtTime();
                inputWidth = thumbnail.getWidth();
                inputHeight = thumbnail.getHeight();
                thumbnail.recycle();
            

            if(inputWidth>inputHeight)
                if(mWidth<mHeight)
                    int w = mWidth;
                    mWidth=mHeight;
                    mHeight=w;
                
            
            else
                if(mWidth>mHeight)
                    int w = mWidth;
                    mWidth=mHeight;
                    mHeight=w;
                
            

            MediaFormat outputVideoFormat =
                    MediaFormat.createVideoFormat(OUTPUT_VIDEO_MIME_TYPE, mWidth, mHeight);
            outputVideoFormat.setInteger(
                    MediaFormat.KEY_COLOR_FORMAT, OUTPUT_VIDEO_COLOR_FORMAT);
            outputVideoFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_VIDEO_BIT_RATE);
            outputVideoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, OUTPUT_VIDEO_FRAME_RATE);
            outputVideoFormat.setInteger(
                    MediaFormat.KEY_I_FRAME_INTERVAL, OUTPUT_VIDEO_IFRAME_INTERVAL);

            AtomicReference<Surface> inputSurfaceReference = new AtomicReference<Surface>();
            videoEncoder = createVideoEncoder(
                    videoCodecInfo, outputVideoFormat, inputSurfaceReference);
            inputSurface = new InputSurface(inputSurfaceReference.get());
            inputSurface.makeCurrent();

            outputSurface = new OutputSurface();
            videoDecoder = createVideoDecoder(inputFormat, outputSurface.getSurface());

            audioExtractor = createExtractor();
            int audioInputTrack = getAndSelectAudioTrackIndex(audioExtractor);
            MediaFormat inputAudioFormat = audioExtractor.getTrackFormat(audioInputTrack);
            MediaFormat outputAudioFormat =
                MediaFormat.createAudioFormat(inputAudioFormat.getString(MediaFormat.KEY_MIME),
                        inputAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
                        inputAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT));
            outputAudioFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_AUDIO_BIT_RATE);
            outputAudioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, OUTPUT_AUDIO_AAC_PROFILE);

            audioEncoder = createAudioEncoder(audioCodecInfo, outputAudioFormat);
            audioDecoder = createAudioDecoder(inputAudioFormat);

            muxer = new MediaMuxer(mOutputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

            changeResolution(videoExtractor, audioExtractor,
                    videoDecoder, videoEncoder,
                    audioDecoder, audioEncoder,
                    muxer, inputSurface, outputSurface);
         finally 
            try 
                if (videoExtractor != null)
                    videoExtractor.release();
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
            try 
                if (audioExtractor != null)
                    audioExtractor.release();
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
            try 
                if (videoDecoder != null) 
                    videoDecoder.stop();
                    videoDecoder.release();
                
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
            try 
                if (outputSurface != null) 
                    outputSurface.release();
                
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
            try 
                if (videoEncoder != null) 
                    videoEncoder.stop();
                    videoEncoder.release();
                
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
            try 
                if (audioDecoder != null) 
                    audioDecoder.stop();
                    audioDecoder.release();
                
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
            try 
                if (audioEncoder != null) 
                    audioEncoder.stop();
                    audioEncoder.release();
                
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
            try 
                if (muxer != null) 
                    muxer.stop();
                    muxer.release();
                
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
            try 
                if (inputSurface != null)
                    inputSurface.release();
             catch(Exception e) 
                if (exception == null)
                    exception = e;
            
        
        if (exception != null)
            throw exception;
    

    private MediaExtractor createExtractor() throws IOException 
        MediaExtractor extractor;
        extractor = new MediaExtractor();
        extractor.setDataSource(mInputFile);
        return extractor;
    

    private MediaCodec createVideoDecoder(MediaFormat inputFormat, Surface surface) throws IOException 
        MediaCodec decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat));
        decoder.configure(inputFormat, surface, null, 0);
        decoder.start();
        return decoder;
    

    private MediaCodec createVideoEncoder(MediaCodecInfo codecInfo, MediaFormat format,
            AtomicReference<Surface> surfaceReference) throws IOException 
        MediaCodec encoder = MediaCodec.createByCodecName(codecInfo.getName());
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        surfaceReference.set(encoder.createInputSurface());
        encoder.start();
        return encoder;
    

    private MediaCodec createAudioDecoder(MediaFormat inputFormat) throws IOException 
        MediaCodec decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat));
        decoder.configure(inputFormat, null, null, 0);
        decoder.start();
        return decoder;
    

    private MediaCodec createAudioEncoder(MediaCodecInfo codecInfo, MediaFormat format) throws IOException 
        MediaCodec encoder = MediaCodec.createByCodecName(codecInfo.getName());
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
        return encoder;
    

    private int getAndSelectVideoTrackIndex(MediaExtractor extractor) 
        for (int index = 0; index < extractor.getTrackCount(); ++index) 
            if (isVideoFormat(extractor.getTrackFormat(index))) 
                extractor.selectTrack(index);
                return index;
            
        
        return -1;
    
    private int getAndSelectAudioTrackIndex(MediaExtractor extractor) 
        for (int index = 0; index < extractor.getTrackCount(); ++index) 
            if (isAudioFormat(extractor.getTrackFormat(index))) 
                extractor.selectTrack(index);
                return index;
            
        
        return -1;
    

    private void changeResolution(MediaExtractor videoExtractor, MediaExtractor audioExtractor,
            MediaCodec videoDecoder, MediaCodec videoEncoder,
            MediaCodec audioDecoder, MediaCodec audioEncoder,
            MediaMuxer muxer,
            InputSurface inputSurface, OutputSurface outputSurface) 
        ByteBuffer[] videoDecoderInputBuffers = null;
        ByteBuffer[] videoDecoderOutputBuffers = null;
        ByteBuffer[] videoEncoderOutputBuffers = null;
        MediaCodec.BufferInfo videoDecoderOutputBufferInfo = null;
        MediaCodec.BufferInfo videoEncoderOutputBufferInfo = null;

        videoDecoderInputBuffers = videoDecoder.getInputBuffers();
        videoDecoderOutputBuffers = videoDecoder.getOutputBuffers();
        videoEncoderOutputBuffers = videoEncoder.getOutputBuffers();
        videoDecoderOutputBufferInfo = new MediaCodec.BufferInfo();
        videoEncoderOutputBufferInfo = new MediaCodec.BufferInfo();

        ByteBuffer[] audioDecoderInputBuffers = null;
        ByteBuffer[] audioDecoderOutputBuffers = null;
        ByteBuffer[] audioEncoderInputBuffers = null;
        ByteBuffer[] audioEncoderOutputBuffers = null;
        MediaCodec.BufferInfo audioDecoderOutputBufferInfo = null;
        MediaCodec.BufferInfo audioEncoderOutputBufferInfo = null;

        audioDecoderInputBuffers = audioDecoder.getInputBuffers();
        audioDecoderOutputBuffers =  audioDecoder.getOutputBuffers();
        audioEncoderInputBuffers = audioEncoder.getInputBuffers();
        audioEncoderOutputBuffers = audioEncoder.getOutputBuffers();
        audioDecoderOutputBufferInfo = new MediaCodec.BufferInfo();
        audioEncoderOutputBufferInfo = new MediaCodec.BufferInfo();

        MediaFormat decoderOutputVideoFormat = null;
        MediaFormat decoderOutputAudioFormat = null;
        MediaFormat encoderOutputVideoFormat = null;
        MediaFormat encoderOutputAudioFormat = null;
        int outputVideoTrack = -1;
        int outputAudioTrack = -1;

        boolean videoExtractorDone = false;
        boolean videoDecoderDone = false;
        boolean videoEncoderDone = false;

        boolean audioExtractorDone = false;
        boolean audioDecoderDone = false;
        boolean audioEncoderDone = false;

        int pendingAudioDecoderOutputBufferIndex = -1;
        boolean muxing = false;
        while ((!videoEncoderDone) || (!audioEncoderDone)) 
            while (!videoExtractorDone
                    && (encoderOutputVideoFormat == null || muxing)) 
                int decoderInputBufferIndex = videoDecoder.dequeueInputBuffer(TIMEOUT_USEC);
                if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
                    break;

                ByteBuffer decoderInputBuffer = videoDecoderInputBuffers[decoderInputBufferIndex];
                int size = videoExtractor.readSampleData(decoderInputBuffer, 0);
                long presentationTime = videoExtractor.getSampleTime();

                if (size >= 0) 
                    videoDecoder.queueInputBuffer(
                            decoderInputBufferIndex,
                            0,
                            size,
                            presentationTime,
                            videoExtractor.getSampleFlags());
                
                videoExtractorDone = !videoExtractor.advance();
                if (videoExtractorDone)
                    videoDecoder.queueInputBuffer(decoderInputBufferIndex,
                            0, 0, 0,  MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                break;
            

            while (!audioExtractorDone
                    && (encoderOutputAudioFormat == null || muxing)) 
                int decoderInputBufferIndex = audioDecoder.dequeueInputBuffer(TIMEOUT_USEC);
                if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
                    break;

                ByteBuffer decoderInputBuffer = audioDecoderInputBuffers[decoderInputBufferIndex];
                int size = audioExtractor.readSampleData(decoderInputBuffer, 0);
                long presentationTime = audioExtractor.getSampleTime();

                if (size >= 0)
                    audioDecoder.queueInputBuffer(decoderInputBufferIndex, 0, size,
                            presentationTime, audioExtractor.getSampleFlags());

                audioExtractorDone = !audioExtractor.advance();
                if (audioExtractorDone)
                    audioDecoder.queueInputBuffer(decoderInputBufferIndex, 0, 0,
                            0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);

                break;
            

            while (!videoDecoderDone
                    && (encoderOutputVideoFormat == null || muxing)) 
                int decoderOutputBufferIndex =
                        videoDecoder.dequeueOutputBuffer(
                                videoDecoderOutputBufferInfo, TIMEOUT_USEC);
                if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
                    break;

                if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) 
                    videoDecoderOutputBuffers = videoDecoder.getOutputBuffers();
                    break;
                
                if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) 
                    decoderOutputVideoFormat = videoDecoder.getOutputFormat();
                    break;
                

                ByteBuffer decoderOutputBuffer =
                        videoDecoderOutputBuffers[decoderOutputBufferIndex];
                if ((videoDecoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
                        != 0) 
                    videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false);
                    break;
                

                boolean render = videoDecoderOutputBufferInfo.size != 0;
                videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, render);
                if (render) 
                    outputSurface.awaitNewImage();
                    outputSurface.drawImage();
                    inputSurface.setPresentationTime(
                            videoDecoderOutputBufferInfo.presentationTimeUs * 1000);
                    inputSurface.swapBuffers();
                
                if ((videoDecoderOutputBufferInfo.flags
                        & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) 
                    videoDecoderDone = true;
                    videoEncoder.signalEndOfInputStream();
                
                break;
            

            while (!audioDecoderDone && pendingAudioDecoderOutputBufferIndex == -1
                    && (encoderOutputAudioFormat == null || muxing)) 
                int decoderOutputBufferIndex =
                        audioDecoder.dequeueOutputBuffer(
                                audioDecoderOutputBufferInfo, TIMEOUT_USEC);
                if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
                    break;

                if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) 
                    audioDecoderOutputBuffers = audioDecoder.getOutputBuffers();
                    break;
                
                if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) 
                    decoderOutputAudioFormat = audioDecoder.getOutputFormat();
                    break;
                
                ByteBuffer decoderOutputBuffer =
                        audioDecoderOutputBuffers[decoderOutputBufferIndex];
                if ((audioDecoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
                        != 0) 
                    audioDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false);
                    break;
                
                pendingAudioDecoderOutputBufferIndex = decoderOutputBufferIndex;
                break;
            

            while (pendingAudioDecoderOutputBufferIndex != -1) 
                int encoderInputBufferIndex = audioEncoder.dequeueInputBuffer(TIMEOUT_USEC);
                ByteBuffer encoderInputBuffer = audioEncoderInputBuffers[encoderInputBufferIndex];
                int size = audioDecoderOutputBufferInfo.size;
                long presentationTime = audioDecoderOutputBufferInfo.presentationTimeUs;

                if (size >= 0) 
                    ByteBuffer decoderOutputBuffer =
                            audioDecoderOutputBuffers[pendingAudioDecoderOutputBufferIndex]
                                    .duplicate();
                    decoderOutputBuffer.position(audioDecoderOutputBufferInfo.offset);
                    decoderOutputBuffer.limit(audioDecoderOutputBufferInfo.offset + size);
                    encoderInputBuffer.position(0);
                    encoderInputBuffer.put(decoderOutputBuffer);
                    audioEncoder.queueInputBuffer(
                            encoderInputBufferIndex,
                            0,
                            size,
                            presentationTime,
                            audioDecoderOutputBufferInfo.flags);
                
                audioDecoder.releaseOutputBuffer(pendingAudioDecoderOutputBufferIndex, false);
                pendingAudioDecoderOutputBufferIndex = -1;
                if ((audioDecoderOutputBufferInfo.flags
                        & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0)
                    audioDecoderDone = true;

                break;
            

            while (!videoEncoderDone
                    && (encoderOutputVideoFormat == null || muxing)) 
                int encoderOutputBufferIndex = videoEncoder.dequeueOutputBuffer(
                        videoEncoderOutputBufferInfo, TIMEOUT_USEC);
                if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)
                    break;
                if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) 
                    videoEncoderOutputBuffers = videoEncoder.getOutputBuffers();
                    break;
                
                if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) 
                    encoderOutputVideoFormat = videoEncoder.getOutputFormat();
                    break;
                

                ByteBuffer encoderOutputBuffer =
                        videoEncoderOutputBuffers[encoderOutputBufferIndex];
                if ((videoEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
                        != 0) 
                    videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
                    break;
                
                if (videoEncoderOutputBufferInfo.size != 0) 
                    muxer.writeSampleData(
                            outputVideoTrack, encoderOutputBuffer, videoEncoderOutputBufferInfo);
                
                if ((videoEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                        != 0) 
                    videoEncoderDone = true;
                
                videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
                break;
            

            while (!audioEncoderDone
                    && (encoderOutputAudioFormat == null || muxing)) 
                int encoderOutputBufferIndex = audioEncoder.dequeueOutputBuffer(
                        audioEncoderOutputBufferInfo, TIMEOUT_USEC);
                if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) 
                    break;
                
                if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) 
                    audioEncoderOutputBuffers = audioEncoder.getOutputBuffers();
                    break;
                
                if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) 
                    encoderOutputAudioFormat = audioEncoder.getOutputFormat();
                    break;
                

                ByteBuffer encoderOutputBuffer =
                        audioEncoderOutputBuffers[encoderOutputBufferIndex];
                if ((audioEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG)
                        != 0) 
                    audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);
                    break;
                
                if (audioEncoderOutputBufferInfo.size != 0)
                    muxer.writeSampleData(
                            outputAudioTrack, encoderOutputBuffer, audioEncoderOutputBufferInfo);
                if ((audioEncoderOutputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                        != 0)
                    audioEncoderDone = true;

                audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false);

                break;
            
            if (!muxing && (encoderOutputAudioFormat != null)
                    && (encoderOutputVideoFormat != null)) 
                outputVideoTrack = muxer.addTrack(encoderOutputVideoFormat);
                outputAudioTrack = muxer.addTrack(encoderOutputAudioFormat);
                muxer.start();
                muxing = true;
            
        
    

    private static boolean isVideoFormat(MediaFormat format) 
        return getMimeTypeFor(format).startsWith("video/");
    
    private static boolean isAudioFormat(MediaFormat format) 
        return getMimeTypeFor(format).startsWith("audio/");
    
    private static String getMimeTypeFor(MediaFormat format) 
        return format.getString(MediaFormat.KEY_MIME);
    

    private static MediaCodecInfo selectCodec(String mimeType) 
        int numCodecs = MediaCodecList.getCodecCount();
        for (int i = 0; i < numCodecs; i++) 
            MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
            if (!codecInfo.isEncoder()) 
                continue;
            
            String[] types = codecInfo.getSupportedTypes();
            for (int j = 0; j < types.length; j++) 
                if (types[j].equalsIgnoreCase(mimeType)) 
                    return codecInfo;
                
            
        
        return null;
    

It also needs InputSurface, OutputSurface and TextureRender, which live next to ExtractDecodeEditEncodeMuxTest (at the HERE link above). Put those three in the same package as VideoResolutionChanger and use it like this:

try {
    String pathToReEncodedFile =
        new VideoResolutionChanger().changeResolution(videoFilePath);
} catch(Throwable t) { /* smth wrong :( */ }

where videoFilePath can be obtained from a File with file.getAbsolutePath().

I know this isn't the cleanest and probably not the most efficient/effective way, but I've been looking for code like this for the past two days and found lots of threads, most of which redirected me to INDE, ffmpeg or jcodec, while the others had no proper answer. So I'm leaving it here; use it wisely!

Limitations:

- the use-it-like-this snippet above must not be started on the main (UI) looper thread, e.g. directly in an Activity; the best way is to create an IntentService and pass the input file path String in the Intent's extras Bundle, then run changeResolution directly inside onHandleIntent (a rough sketch of this follows below);
- API 18 and above (where MediaMuxer was introduced);
- API 18 of course needs WRITE_EXTERNAL_STORAGE; on API 19 and above it is "built in";
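
A rough sketch of that IntentService wiring (CompressVideoService and EXTRA_INPUT_PATH are made-up names for illustration; error handling and result delivery are left as TODOs):

import android.app.IntentService;
import android.content.Intent;
import java.io.File;

public class CompressVideoService extends IntentService {

    // Hypothetical extra key; use whatever constant fits your app.
    public static final String EXTRA_INPUT_PATH = "extra_input_path";

    public CompressVideoService() {
        super("CompressVideoService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        if (intent == null) return;
        String inputPath = intent.getStringExtra(EXTRA_INPUT_PATH);
        try {
            // onHandleIntent already runs on a worker thread, so the blocking
            // re-encode can be called directly here.
            String outPath = new VideoResolutionChanger()
                    .changeResolution(new File(inputPath));
            // TODO: hand outPath back to the app (broadcast, notification, ...).
        } catch (Throwable t) {
            // TODO: report the failure.
        }
    }
}

Start it with something like: context.startService(new Intent(context, CompressVideoService.class).putExtra(CompressVideoService.EXTRA_INPUT_PATH, file.getAbsolutePath()));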

@fadden thanks for your work and support! :)

【Comments】:

Hi @snachmsm, thanks for the amazing answer. I'm having trouble with this code: every time I get a "surface wait timed out" exception (no matter what timeout I set), and right after that exception I get "new frame available", but by then the code has already stopped running. Is there something I'm missing?
@Chetan maybe the answer at ***.com/questions/22457623/… will help you (sorry for the "slight" delay...)
Excellent working example, although I ran into something interesting: when touching anything in the audio codec parameters, e.g. changing the audio channels from 2 to 1, the encoded video plays at half speed (audio + video are in slow motion...). Can you explain that? (Actually I noticed it happens even without changing anything, because the original audio sample rate is 48 kHz, the encoded audio is 44.1 kHz, and the video is slowed down a bit (±10%).)
@snachmsm I also get the same "surface wait timed out" in the OutputSurface file, can you give me any advice?
I'm not working with this class at my job at the moment, but I know the colleague responsible for this part of the code was also planning to add this feature (progress); after some investigation he said it's hard and low priority, so not any time soon... sorry, I can't help you right now

【Answer 4】:

I won't get into the implementation and coding issues in this question. But we went through the same disaster, since ffmpeg increased our app size by at least 19 MB, and I was using this *** question to come up with a library without ffmpeg. Apparently the folks at linkedin have done it before. Check this article.

The project is called LiTr and is available on github. It uses Android's MediaCodec and MediaMuxer, so you can refer to its code for help with your own project if needed. This question was asked 4 years ago, but I hope this helps someone now.
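
For reference, the call shape in LiTr's README looks roughly like the sketch below (paraphrased from memory, so treat the exact transform(...) signature, the artifact coordinates and the placeholder argument names as assumptions and check the repository before using it):

// build.gradle (version is illustrative):
// implementation 'com.linkedin.android.litr:litr:<latest-version>'

MediaTransformer mediaTransformer = new MediaTransformer(getApplicationContext());
mediaTransformer.transform(requestId,            // your own id for this job
                           sourceVideoUri,       // input video Uri
                           targetVideoFilePath,  // output path
                           targetVideoFormat,    // MediaFormat with the smaller width/height/bitrate
                           targetAudioFormat,    // target audio MediaFormat (see the README for passthrough options)
                           transformationListener,
                           transformationOptions);
// and release when done: mediaTransformer.release();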

【Comments】:

【Answer 5】:

Use compile 'com.zolad:videoslimmer:1.0.0'

【Comments】:

【Answer 6】:

VideoResolutionChanger.kt

class VideoResolutionChanger 

private val TIMEOUT_USEC = 10000

private val OUTPUT_VIDEO_MIME_TYPE = "video/avc"
private val OUTPUT_VIDEO_BIT_RATE = 2048 * 1024
private val OUTPUT_VIDEO_FRAME_RATE = 60
private val OUTPUT_VIDEO_IFRAME_INTERVAL = 1
private val OUTPUT_VIDEO_COLOR_FORMAT = MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface

private val OUTPUT_AUDIO_MIME_TYPE = "audio/mp4a-latm"
private val OUTPUT_AUDIO_CHANNEL_COUNT = 2
private val OUTPUT_AUDIO_BIT_RATE = 128 * 1024
private val OUTPUT_AUDIO_AAC_PROFILE = MediaCodecInfo.CodecProfileLevel.AACObjectHE
private val OUTPUT_AUDIO_SAMPLE_RATE_HZ = 44100

private var mWidth = 1920
private var mHeight = 1080
private var mOutputFile : String? = null
private var mInputFile : String? = null

private var mTotalTime : Int = 0

@Throws(Throwable::class)

fun changeResolution(f: File): String? 

    mInputFile = f.absolutePath

    val filePath : String? = mInputFile!!.substring(0, mInputFile!!.lastIndexOf(File.separator))
    val splitByDot: Array<String> = mInputFile!!.split("\\.").toTypedArray()
    var ext = ""
    if (splitByDot.size > 1) ext = splitByDot[splitByDot.size - 1]
    var fileName: String = mInputFile!!.substring(
        mInputFile!!.lastIndexOf(File.separator) + 1,
        mInputFile!!.length
    )

    fileName = if (ext.length > 0) fileName.replace(".$ext", "_out.mp4") else fileName + "_out.mp4"

    mOutputFile = outFile.getAbsolutePath()

    ChangerWrapper.changeResolutionInSeparatedThread(this)

    return mOutputFile



private class ChangerWrapper private constructor(private val mChanger: VideoResolutionChanger) :
    Runnable 

    private var mThrowable : Throwable? = null

    override fun run() 

        try 

            mChanger.prepareAndChangeResolution()

         catch (th: Throwable) 

            mThrowable = th

        

    

    companion object 
        @Throws(Throwable::class)

        fun changeResolutionInSeparatedThread(changer: VideoResolutionChanger) 

            val wrapper = ChangerWrapper(changer)
            val th = Thread(wrapper, ChangerWrapper::class.java.simpleName)
            th.start()
            th.join()
            if (wrapper.mThrowable != null) throw wrapper.mThrowable!!

        

    



@Throws(Exception::class)
private fun prepareAndChangeResolution() 

    var exception: Exception? = null
    val videoCodecInfo = selectCodec(OUTPUT_VIDEO_MIME_TYPE) ?: return
    val audioCodecInfo = selectCodec(OUTPUT_AUDIO_MIME_TYPE) ?: return
    var videoExtractor : MediaExtractor? = null
    var audioExtractor : MediaExtractor? = null
    var outputSurface : OutputSurface? = null
    var videoDecoder : MediaCodec? = null
    var audioDecoder : MediaCodec? = null
    var videoEncoder : MediaCodec? = null
    var audioEncoder : MediaCodec? = null
    var muxer : MediaMuxer? = null
    var inputSurface : InputSurface? = null
    try 
        videoExtractor = createExtractor()
        val videoInputTrack = getAndSelectVideoTrackIndex(videoExtractor)
        val inputFormat = videoExtractor!!.getTrackFormat(videoInputTrack)
        val m = MediaMetadataRetriever()
        m.setDataSource(mInputFile)
        var inputWidth: Int
        var inputHeight: Int

        try 
            inputWidth =
                m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH)!!.toInt()
            inputHeight =
                m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT)!!.toInt()
            mTotalTime =
                m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION)!!.toInt() * 1000
         catch (e: Exception) 
            val thumbnail = m.frameAtTime
            inputWidth = thumbnail!!.width
            inputHeight = thumbnail.height
            thumbnail.recycle()
        
        if (inputWidth > inputHeight) 
            if (mWidth < mHeight) 
                val w = mWidth
                mWidth = mHeight
                mHeight = w
            
         else 
            if (mWidth > mHeight) 
                val w = mWidth
                mWidth = mHeight
                mHeight = w
            
        

        val outputVideoFormat =
            MediaFormat.createVideoFormat(OUTPUT_VIDEO_MIME_TYPE, mWidth, mHeight)

        outputVideoFormat.setInteger(
            MediaFormat.KEY_COLOR_FORMAT, OUTPUT_VIDEO_COLOR_FORMAT
        )

        outputVideoFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_VIDEO_BIT_RATE)

        outputVideoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, OUTPUT_VIDEO_FRAME_RATE)

        outputVideoFormat.setInteger(
            MediaFormat.KEY_I_FRAME_INTERVAL, OUTPUT_VIDEO_IFRAME_INTERVAL
        )

        val inputSurfaceReference: AtomicReference<Surface> = AtomicReference<Surface>()

        videoEncoder = createVideoEncoder(
            videoCodecInfo, outputVideoFormat, inputSurfaceReference
        )

        inputSurface = InputSurface(inputSurfaceReference.get())
        inputSurface.makeCurrent()

        outputSurface = OutputSurface()

        videoDecoder = createVideoDecoder(inputFormat, outputSurface!!.surface!!);

        audioExtractor = createExtractor()

        val audioInputTrack = getAndSelectAudioTrackIndex(audioExtractor)

        val inputAudioFormat = audioExtractor!!.getTrackFormat(audioInputTrack)

        val outputAudioFormat = MediaFormat.createAudioFormat(
            inputAudioFormat.getString(MediaFormat.KEY_MIME)!!,
            inputAudioFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
            inputAudioFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT)
        )
        outputAudioFormat.setInteger(MediaFormat.KEY_BIT_RATE, OUTPUT_AUDIO_BIT_RATE)
        outputAudioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, OUTPUT_AUDIO_AAC_PROFILE)

        audioEncoder = createAudioEncoder(audioCodecInfo, outputAudioFormat)
        audioDecoder = createAudioDecoder(inputAudioFormat)

        muxer = MediaMuxer(mOutputFile!!, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)

        changeResolution(
            videoExtractor, audioExtractor,
            videoDecoder, videoEncoder,
            audioDecoder, audioEncoder,
            muxer, inputSurface, outputSurface
        )

     finally 

        try 
            videoExtractor?.release()
         catch (e: Exception) 
            if (exception == null) exception = e
        

        try 
            audioExtractor?.release()
         catch (e: Exception) 
            if (exception == null) exception = e
        

        try 
            if (videoDecoder != null) 
                videoDecoder.stop()
                videoDecoder.release()
            
         catch (e: Exception) 
            if (exception == null) exception = e
        

        try 
            outputSurface?.release()
         catch (e: Exception) 
            if (exception == null) exception = e
        

        try 
            if (videoEncoder != null) 
                videoEncoder.stop()
                videoEncoder.release()
            
         catch (e: Exception) 
            if (exception == null) exception = e
        

        try 
            if (audioDecoder != null) 
                audioDecoder.stop()
                audioDecoder.release()
            
         catch (e: Exception) 
            if (exception == null) exception = e
        

        try 
            if (audioEncoder != null) 
                audioEncoder.stop()
                audioEncoder.release()
            
         catch (e: Exception) 
            if (exception == null) exception = e
        

        try 
            if (muxer != null) 
                muxer.stop()
                muxer.release()
            
         catch (e: Exception) 
            if (exception == null) exception = e
        

        try 
            inputSurface?.release()
         catch (e: Exception) 
            if (exception == null) exception = e
        

    

    if (exception != null) throw exception



@Throws(IOException::class)
private fun createExtractor(): MediaExtractor? 

    val extractor : MediaExtractor = MediaExtractor()

    mInputFile?.let  extractor.setDataSource(it) 

    return extractor



@Throws(IOException::class)
private fun createVideoDecoder(inputFormat: MediaFormat, surface: Surface): MediaCodec? 

    val decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat)!!)
    decoder.configure(inputFormat, surface, null, 0)
    decoder.start()

    return decoder



@Throws(IOException::class)
private fun createVideoEncoder(
    codecInfo: MediaCodecInfo, format: MediaFormat,
    surfaceReference: AtomicReference<Surface>
): MediaCodec? 

    val encoder = MediaCodec.createByCodecName(codecInfo.name)
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)

    surfaceReference.set(encoder.createInputSurface())

    encoder.start()

    return encoder



@Throws(IOException::class)
private fun createAudioDecoder(inputFormat: MediaFormat): MediaCodec? 

    val decoder = MediaCodec.createDecoderByType(getMimeTypeFor(inputFormat)!!)
    decoder.configure(inputFormat, null, null, 0)
    decoder.start()

    return decoder



@Throws(IOException::class)
private fun createAudioEncoder(codecInfo: MediaCodecInfo, format: MediaFormat): MediaCodec? 

    val encoder = MediaCodec.createByCodecName(codecInfo.name)
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    encoder.start()

    return encoder



private fun getAndSelectVideoTrackIndex(extractor: MediaExtractor?): Int 

    for (index in 0 until extractor!!.trackCount) 

        if (isVideoFormat(extractor.getTrackFormat(index))) 

            extractor.selectTrack(index)

            return index

        

    

    return -1



private fun getAndSelectAudioTrackIndex(extractor: MediaExtractor?): Int 

    for (index in 0 until extractor!!.trackCount) 

        if (isAudioFormat(extractor.getTrackFormat(index))) 

            extractor.selectTrack(index)

            return index

        

    

    return -1



private fun changeResolution(
    videoExtractor: MediaExtractor?, audioExtractor: MediaExtractor?,
    videoDecoder: MediaCodec?, videoEncoder: MediaCodec?,
    audioDecoder: MediaCodec?, audioEncoder: MediaCodec?,
    muxer: MediaMuxer,
    inputSurface: InputSurface?, outputSurface: OutputSurface?
) 

    var videoDecoderInputBuffers : Array<ByteBuffer?>? = null
    var videoDecoderOutputBuffers : Array<ByteBuffer?>? = null
    var videoEncoderOutputBuffers : Array<ByteBuffer?>? = null
    var videoDecoderOutputBufferInfo : MediaCodec.BufferInfo? = null
    var videoEncoderOutputBufferInfo : MediaCodec.BufferInfo? = null
    videoDecoderInputBuffers = videoDecoder!!.inputBuffers
    videoDecoderOutputBuffers = videoDecoder.outputBuffers
    videoEncoderOutputBuffers = videoEncoder!!.outputBuffers
    videoDecoderOutputBufferInfo = MediaCodec.BufferInfo()
    videoEncoderOutputBufferInfo = MediaCodec.BufferInfo()
    var audioDecoderInputBuffers : Array<ByteBuffer?>? = null
    var audioDecoderOutputBuffers : Array<ByteBuffer>? = null
    var audioEncoderInputBuffers : Array<ByteBuffer>? = null
    var audioEncoderOutputBuffers : Array<ByteBuffer?>? = null
    var audioDecoderOutputBufferInfo : MediaCodec.BufferInfo? = null
    var audioEncoderOutputBufferInfo : MediaCodec.BufferInfo? = null
    audioDecoderInputBuffers = audioDecoder!!.inputBuffers
    audioDecoderOutputBuffers = audioDecoder.outputBuffers
    audioEncoderInputBuffers = audioEncoder!!.inputBuffers
    audioEncoderOutputBuffers = audioEncoder.outputBuffers
    audioDecoderOutputBufferInfo = MediaCodec.BufferInfo()
    audioEncoderOutputBufferInfo = MediaCodec.BufferInfo()
    var encoderOutputVideoFormat : MediaFormat? = null
    var encoderOutputAudioFormat : MediaFormat? = null
    var outputVideoTrack = -1
    var outputAudioTrack = -1
    var videoExtractorDone = false
    var videoDecoderDone = false
    var videoEncoderDone = false
    var audioExtractorDone = false
    var audioDecoderDone = false
    var audioEncoderDone = false
    var pendingAudioDecoderOutputBufferIndex = -1
    var muxing = false

    while (!videoEncoderDone || !audioEncoderDone) 

        while (!videoExtractorDone
            && (encoderOutputVideoFormat == null || muxing)
        ) 

            val decoderInputBufferIndex = videoDecoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())

            if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER)  

                break

            

            val decoderInputBuffer: ByteBuffer? =
                videoDecoderInputBuffers[decoderInputBufferIndex]

            val size = decoderInputBuffer?.let  videoExtractor!!.readSampleData(it, 0) 

            val presentationTime = videoExtractor?.sampleTime

            if (presentationTime != null) 

                if (size != null) 

                    if (size >= 0) 

                        if (videoExtractor != null) 

                            videoDecoder.queueInputBuffer(
                                decoderInputBufferIndex,
                                0,
                                size,
                                presentationTime,
                                videoExtractor.sampleFlags
                            )

                        

                    

                

            

            if (videoExtractor != null) 

                videoExtractorDone = (!videoExtractor.advance() && size == -1)

            

            if (videoExtractorDone) 

                videoDecoder.queueInputBuffer(
                    decoderInputBufferIndex,
                    0,
                    0,
                    0,
                    MediaCodec.BUFFER_FLAG_END_OF_STREAM
                )

            

            break

        

        while (!audioExtractorDone
            && (encoderOutputAudioFormat == null || muxing)
        ) 

            val decoderInputBufferIndex = audioDecoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())

            if (decoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) 

                break

            

            val decoderInputBuffer: ByteBuffer? =
                audioDecoderInputBuffers[decoderInputBufferIndex]

            val size = decoderInputBuffer?.let  audioExtractor!!.readSampleData(it, 0) 

            val presentationTime = audioExtractor?.sampleTime

            if (presentationTime != null) 

                if (size != null) 

                    if (size >= 0) 

                        audioDecoder.queueInputBuffer(
                            decoderInputBufferIndex,
                            0,
                            size,
                            presentationTime,
                            audioExtractor.sampleFlags
                        )

                    

                

            

            if (audioExtractor != null)     
                audioExtractorDone = (!audioExtractor.advance() && size == -1)
            

            if (audioExtractorDone) 

                audioDecoder.queueInputBuffer(
                    decoderInputBufferIndex,
                    0,
                    0,
                    0,
                    MediaCodec.BUFFER_FLAG_END_OF_STREAM
                )

            

            break

        

        while (!videoDecoderDone
            && (encoderOutputVideoFormat == null || muxing)
        ) 

            val decoderOutputBufferIndex = videoDecoder.dequeueOutputBuffer(
                videoDecoderOutputBufferInfo, TIMEOUT_USEC.toLong()
            )

            if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) 

                break

            

            if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) 

                videoDecoderOutputBuffers = videoDecoder.outputBuffers

                break

            

            if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) 

                decoderOutputVideoFormat = videoDecoder.outputFormat

                break

            

            val decoderOutputBuffer: ByteBuffer? =
                videoDecoderOutputBuffers!![decoderOutputBufferIndex]


            if (videoDecoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
                != 0
            ) 

                videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false)

                break

            

            val render = videoDecoderOutputBufferInfo.size != 0

            videoDecoder.releaseOutputBuffer(decoderOutputBufferIndex, render)

            if (render) 

                if (outputSurface != null) 

                    outputSurface.awaitNewImage()
                    outputSurface.drawImage()

                

                if (inputSurface != null) 

                    inputSurface.setPresentationTime(
                        videoDecoderOutputBufferInfo.presentationTimeUs * 1000
                    )
                    inputSurface.swapBuffers()

                

            

            if ((videoDecoderOutputBufferInfo.flags
                        and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0
            ) 

                videoDecoderDone = true
                videoEncoder.signalEndOfInputStream()

            

            break

        

        while (!audioDecoderDone && pendingAudioDecoderOutputBufferIndex == -1 && (encoderOutputAudioFormat == null || muxing)) 

            val decoderOutputBufferIndex = audioDecoder.dequeueOutputBuffer(
                audioDecoderOutputBufferInfo, TIMEOUT_USEC.toLong()
            )

            if (decoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) 

                break

            

            if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) 

                audioDecoderOutputBuffers = audioDecoder.outputBuffers
                break

            

            if (decoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) 

                decoderOutputAudioFormat = audioDecoder.outputFormat
                break
            

            val decoderOutputBuffer: ByteBuffer =
                audioDecoderOutputBuffers!![decoderOutputBufferIndex]

            if (audioDecoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
                != 0
            ) 

                audioDecoder.releaseOutputBuffer(decoderOutputBufferIndex, false)

                break
            

            pendingAudioDecoderOutputBufferIndex = decoderOutputBufferIndex

            break

        

        while (pendingAudioDecoderOutputBufferIndex != -1) 

            val encoderInputBufferIndex = audioEncoder.dequeueInputBuffer(TIMEOUT_USEC.toLong())

            val encoderInputBuffer: ByteBuffer =
                audioEncoderInputBuffers[encoderInputBufferIndex]

            val size = audioDecoderOutputBufferInfo.size

            val presentationTime = audioDecoderOutputBufferInfo.presentationTimeUs

            if (size >= 0) 

                val decoderOutputBuffer: ByteBuffer =
                    audioDecoderOutputBuffers!![pendingAudioDecoderOutputBufferIndex]
                        .duplicate()

                decoderOutputBuffer.position(audioDecoderOutputBufferInfo.offset)
                decoderOutputBuffer.limit(audioDecoderOutputBufferInfo.offset + size)
                encoderInputBuffer.position(0)
                encoderInputBuffer.put(decoderOutputBuffer)

                audioEncoder.queueInputBuffer(
                    encoderInputBufferIndex,
                    0,
                    size,
                    presentationTime,
                    audioDecoderOutputBufferInfo.flags
                )

            

            audioDecoder.releaseOutputBuffer(pendingAudioDecoderOutputBufferIndex, false)
            pendingAudioDecoderOutputBufferIndex = -1

            if ((audioDecoderOutputBufferInfo.flags
                        and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0
            ) audioDecoderDone = true

            break

        

        while (!videoEncoderDone
            && (encoderOutputVideoFormat == null || muxing)
        ) 

            val encoderOutputBufferIndex = videoEncoder.dequeueOutputBuffer(
                videoEncoderOutputBufferInfo, TIMEOUT_USEC.toLong()
            )

            if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) break

            if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) 
                videoEncoderOutputBuffers = videoEncoder.outputBuffers
                break
            

            if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) 
                encoderOutputVideoFormat = videoEncoder.outputFormat
                break
            

            val encoderOutputBuffer: ByteBuffer? =
                videoEncoderOutputBuffers!![encoderOutputBufferIndex]

            if (videoEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
                != 0
            ) 
                videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)
                break
            

            if (videoEncoderOutputBufferInfo.size != 0) 
                if (encoderOutputBuffer != null) 
                    muxer.writeSampleData(
                        outputVideoTrack, encoderOutputBuffer, videoEncoderOutputBufferInfo
                    )
                
            

            if (videoEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM
                != 0
            ) 
                videoEncoderDone = true
            

            videoEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)

            break

        

        while (!audioEncoderDone
            && (encoderOutputAudioFormat == null || muxing)
        ) 

            val encoderOutputBufferIndex = audioEncoder.dequeueOutputBuffer(
                audioEncoderOutputBufferInfo, TIMEOUT_USEC.toLong()
            )

            if (encoderOutputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) 
                break
            

            if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) 
                audioEncoderOutputBuffers = audioEncoder.outputBuffers
                break
            

            if (encoderOutputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) 
                encoderOutputAudioFormat = audioEncoder.outputFormat
                break
            

            val encoderOutputBuffer: ByteBuffer? =
                audioEncoderOutputBuffers!![encoderOutputBufferIndex]

            if (audioEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG
                != 0
            ) 

                audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)

                break

            

            if (audioEncoderOutputBufferInfo.size != 0) encoderOutputBuffer?.let 

                muxer.writeSampleData(
                    outputAudioTrack, it, audioEncoderOutputBufferInfo
                )

            

            if (audioEncoderOutputBufferInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM
                != 0
            ) audioEncoderDone = true

            audioEncoder.releaseOutputBuffer(encoderOutputBufferIndex, false)

            break

        

        if (!muxing && encoderOutputAudioFormat != null
            && encoderOutputVideoFormat != null
        ) 

            outputVideoTrack = muxer.addTrack(encoderOutputVideoFormat)
            outputAudioTrack = muxer.addTrack(encoderOutputAudioFormat)
            muxer.start()
            muxing = true

        

    



private fun isVideoFormat(format: MediaFormat): Boolean {
    return getMimeTypeFor(format)!!.startsWith("video/")
}

private fun isAudioFormat(format: MediaFormat): Boolean {
    return getMimeTypeFor(format)!!.startsWith("audio/")
}

private fun getMimeTypeFor(format: MediaFormat): String? {
    return format.getString(MediaFormat.KEY_MIME)
}

private fun selectCodec(mimeType: String): MediaCodecInfo? {

    val numCodecs = MediaCodecList.getCodecCount()

    for (i in 0 until numCodecs) {

        val codecInfo = MediaCodecList.getCodecInfoAt(i)

        if (!codecInfo.isEncoder) {
            continue
        }

        val types = codecInfo.supportedTypes

        for (j in types.indices) {
            if (types[j].equals(mimeType, ignoreCase = true)) {
                return codecInfo
            }
        }

    }

    return null
}

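For completeness: `selectCodec` mirrors the pre-API-21 way of enumerating codecs. On API 21+ you could instead ask the platform for a suitable encoder by name. A small sketch, not part of the original answer; the MIME type and size passed in are placeholders:

// Hypothetical helper for API 21+; pass the resulting name to MediaCodec.createByCodecName().
fun selectEncoderName(mimeType: String, width: Int, height: Int): String? {
    val format = MediaFormat.createVideoFormat(mimeType, width, height)
    return MediaCodecList(MediaCodecList.REGULAR_CODECS).findEncoderForFormat(format)
}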


【讨论】:

You need to change the mOutputFile line to a file path under getExternalFilesDir(Environment.DIRECTORY_DOWNLOADS). I ran out of characters. Call it from a background thread, for example:

val progressThread = Thread(Runnable {
    kotlin.run {
        val outputFile = VideoResolutionChanger().changeResolution(File(videoUri!!.path!!))
        println("outputFile: $outputFile")
    }
})
progressThread.start()

【参考方案7】:

OutputSurface.kt

/*
 * Copyright (C) 2013 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/**
 * Holds state associated with a Surface used for MediaCodec decoder output.
 *
 *
 * The (width,height) constructor for this class will prepare GL, create a SurfaceTexture,
 * and then create a Surface for that SurfaceTexture.  The Surface can be passed to
 * MediaCodec.configure() to receive decoder output.  When a frame arrives, we latch the
 * texture with updateTexImage, then render the texture with GL to a pbuffer.
 *
 *
 * The no-arg constructor skips the GL preparation step and doesn't allocate a pbuffer.
 * Instead, it just creates the Surface and SurfaceTexture, and when a frame arrives
 * we just draw it on whatever surface is current.
 *
 *
 * By default, the Surface will be using a BufferQueue in asynchronous mode, so we
 * can potentially drop frames.
 */
internal class OutputSurface : OnFrameAvailableListener {
    private var mEGLDisplay = EGL14.EGL_NO_DISPLAY
    private var mEGLContext = EGL14.EGL_NO_CONTEXT
    private var mEGLSurface = EGL14.EGL_NO_SURFACE
    private var mSurfaceTexture: SurfaceTexture? = null

    /**
     * Returns the Surface that we draw onto.
     */
    var surface: Surface? = null
        private set
    private val mFrameSyncObject = Object() // guards mFrameAvailable
    private var mFrameAvailable = false
    private var mTextureRender: TextureRender? = null

    /**
     * Creates an OutputSurface backed by a pbuffer with the specified dimensions.  The new
     * EGL context and surface will be made current.  Creates a Surface that can be passed
     * to MediaCodec.configure().
     */
    constructor(width: Int, height: Int) {
        println("OutputSurface constructor width: $width height: $height")
        require(!(width <= 0 || height <= 0))
        eglSetup(width, height)
        makeCurrent()
        setup()
    }

    /**
     * Creates an OutputSurface using the current EGL context (rather than establishing a
     * new one).  Creates a Surface that can be passed to MediaCodec.configure().
     */
    constructor() {
        println("OutputSurface constructor")
        setup()
    }

    /**
     * Creates instances of TextureRender and SurfaceTexture, and a Surface associated
     * with the SurfaceTexture.
     */
    private fun setup() {
        println("OutputSurface setup")
        mTextureRender = TextureRender()
        mTextureRender!!.surfaceCreated()
        // Even if we don't access the SurfaceTexture after the constructor returns, we
        // still need to keep a reference to it.  The Surface doesn't retain a reference
        // at the Java level, so if we don't either then the object can get GCed, which
        // causes the native finalizer to run.
        if (VERBOSE) Log.d(TAG, "textureID=" + mTextureRender!!.textureId)
        mSurfaceTexture = SurfaceTexture(mTextureRender!!.textureId)
        // This doesn't work if OutputSurface is created on the thread that CTS started for
        // these test cases.
        //
        // The CTS-created thread has a Looper, and the SurfaceTexture constructor will
        // create a Handler that uses it.  The "frame available" message is delivered
        // there, but since we're not a Looper-based thread we'll never see it.  For
        // this to do anything useful, OutputSurface must be created on a thread without
        // a Looper, so that SurfaceTexture uses the main application Looper instead.
        //
        // Java language note: passing "this" out of a constructor is generally unwise,
        // but we should be able to get away with it here.
        mSurfaceTexture!!.setOnFrameAvailableListener(this)
        surface = Surface(mSurfaceTexture)
    }

    /**
     * Prepares EGL.  We want a GLES 2.0 context and a surface that supports pbuffer.
     */
    private fun eglSetup(width: Int, height: Int) {
        mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
        if (mEGLDisplay === EGL14.EGL_NO_DISPLAY) {
            throw RuntimeException("unable to get EGL14 display")
        }
        val version = IntArray(2)
        if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
            mEGLDisplay = null
            throw RuntimeException("unable to initialize EGL14")
        }
        // Configure EGL for pbuffer and OpenGL ES 2.0.  We want enough RGB bits
        // to be able to tell if the frame is reasonable.
        val attribList = intArrayOf(
            EGL14.EGL_RED_SIZE, 8,
            EGL14.EGL_GREEN_SIZE, 8,
            EGL14.EGL_BLUE_SIZE, 8,
            EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
            EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
            EGL14.EGL_NONE
        )
        val configs = arrayOfNulls<EGLConfig>(1)
        val numConfigs = IntArray(1)
        if (!EGL14.eglChooseConfig(
                mEGLDisplay, attribList, 0, configs, 0, configs.size,
                numConfigs, 0
            )
        ) {
            throw RuntimeException("unable to find RGB888+recordable ES2 EGL config")
        }
        // Configure context for OpenGL ES 2.0.
        val attrib_list = intArrayOf(
            EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
            EGL14.EGL_NONE
        )
        mEGLContext = EGL14.eglCreateContext(
            mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
            attrib_list, 0
        )
        checkEglError("eglCreateContext")
        if (mEGLContext == null) {
            throw RuntimeException("null context")
        }
        // Create a pbuffer surface.  By using this for output, we can use glReadPixels
        // to test values in the output.
        val surfaceAttribs = intArrayOf(
            EGL14.EGL_WIDTH, width,
            EGL14.EGL_HEIGHT, height,
            EGL14.EGL_NONE
        )
        mEGLSurface = EGL14.eglCreatePbufferSurface(mEGLDisplay, configs[0], surfaceAttribs, 0)
        checkEglError("eglCreatePbufferSurface")
        if (mEGLSurface == null) {
            throw RuntimeException("surface was null")
        }
    }

    /**
     * Discard all resources held by this class, notably the EGL context.
     */
    fun release() {
        if (mEGLDisplay !== EGL14.EGL_NO_DISPLAY) {
            EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface)
            EGL14.eglDestroyContext(mEGLDisplay, mEGLContext)
            EGL14.eglReleaseThread()
            EGL14.eglTerminate(mEGLDisplay)
        }
        surface!!.release()
        // this causes a bunch of warnings that appear harmless but might confuse someone:
        //  W BufferQueue: [unnamed-3997-2] cancelBuffer: BufferQueue has been abandoned!
        //mSurfaceTexture.release();
        mEGLDisplay = EGL14.EGL_NO_DISPLAY
        mEGLContext = EGL14.EGL_NO_CONTEXT
        mEGLSurface = EGL14.EGL_NO_SURFACE
        mTextureRender = null
        surface = null
        mSurfaceTexture = null
    }

    /**
     * Makes our EGL context and surface current.
     */
    private fun makeCurrent() {
        if (!EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext)) {
            throw RuntimeException("eglMakeCurrent failed")
        }
    }

    /**
     * Replaces the fragment shader.
     */
    fun changeFragmentShader(fragmentShader: String?) {
        if (fragmentShader != null) {
            mTextureRender?.changeFragmentShader(fragmentShader)
        }
    }

    /**
     * Latches the next buffer into the texture.  Must be called from the thread that created
     * the OutputSurface object, after the onFrameAvailable callback has signaled that new
     * data is available.
     */
    fun awaitNewImage() {

        //println("awaitNewImage")

        val TIMEOUT_MS = 500

        synchronized(mFrameSyncObject) {

            while (!mFrameAvailable) {

                try {

                    // Wait for onFrameAvailable() to signal us.  Use a timeout to avoid
                    // stalling the test if it doesn't arrive.

                    mFrameSyncObject.wait(TIMEOUT_MS.toLong())

                    if (!mFrameAvailable) {
                        // TODO: if "spurious wakeup", continue while loop
                        //throw RuntimeException("Surface frame wait timed out")
                    }

                } catch (ie: InterruptedException) {

                    // shouldn't happen
                    throw RuntimeException(ie)

                }

            }

            mFrameAvailable = false

        }

        // Latch the data.

        mTextureRender?.checkGlError("before updateTexImage")
        mSurfaceTexture!!.updateTexImage()

    }

    /**
     * Draws the data from SurfaceTexture onto the current EGL surface.
     */
    fun drawImage() {
        mSurfaceTexture?.let { mTextureRender?.drawFrame(it) }
    }

    override fun onFrameAvailable(st: SurfaceTexture) {
        //println("onFrameAvailable")
        if (VERBOSE) Log.d(TAG, "new frame available")
        synchronized(mFrameSyncObject) {
            if (mFrameAvailable) {
                throw RuntimeException("mFrameAvailable already set, frame could be dropped")
            }
            mFrameAvailable = true
            mFrameSyncObject.notifyAll()
        }
    }

    /**
     * Checks for EGL errors.
     */
    private fun checkEglError(msg: String) {
        var error: Int
        if (EGL14.eglGetError().also { error = it } != EGL14.EGL_SUCCESS) {
            throw RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error))
        }
    }

    companion object {
        private const val TAG = "OutputSurface"
        private const val VERBOSE = false
    }
}

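For orientation, a minimal sketch of how OutputSurface is typically attached to the decoder side of this pipeline. The names `inputFormat` and `decoder` and the 960x540 target are assumptions for illustration, not code from the answer:

// The decoder renders into the pbuffer-backed OutputSurface instead of the screen.
val outputSurface = OutputSurface(960, 540)
val decoder = MediaCodec.createDecoderByType(inputFormat.getString(MediaFormat.KEY_MIME)!!)
decoder.configure(inputFormat, outputSurface.surface, null, 0)
decoder.start()

// For every decoded frame released with render = true:
//   decoder.releaseOutputBuffer(index, true)
//   outputSurface.awaitNewImage()   // waits for onFrameAvailable, then updateTexImage()
//   outputSurface.drawImage()       // draws the latched texture with GL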
【讨论】:

【参考方案8】:

InputSurface.kt

/*
 * Copyright (C) 2013 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/**
 * Holds state associated with a Surface used for MediaCodec encoder input.
 *
 *
 * The constructor takes a Surface obtained from MediaCodec.createInputSurface(), and uses that
 * to create an EGL window surface.  Calls to eglSwapBuffers() cause a frame of data to be sent
 * to the video encoder.
 */
internal class InputSurface(surface: Surface?) {
    private var mEGLDisplay = EGL14.EGL_NO_DISPLAY
    private var mEGLContext = EGL14.EGL_NO_CONTEXT
    private var mEGLSurface = EGL14.EGL_NO_SURFACE

    /**
     * Returns the Surface that the MediaCodec receives buffers from.
     */
    var surface: Surface?
        private set

    /**
     * Prepares EGL.  We want a GLES 2.0 context and a surface that supports recording.
     */
    private fun eglSetup() {
        mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
        if (mEGLDisplay === EGL14.EGL_NO_DISPLAY) {
            throw RuntimeException("unable to get EGL14 display")
        }
        val version = IntArray(2)
        if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
            mEGLDisplay = null
            throw RuntimeException("unable to initialize EGL14")
        }
        // Configure EGL for recordable and OpenGL ES 2.0.  We want enough RGB bits
        // to minimize artifacts from possible YUV conversion.
        val attribList = intArrayOf(
            EGL14.EGL_RED_SIZE, 8,
            EGL14.EGL_GREEN_SIZE, 8,
            EGL14.EGL_BLUE_SIZE, 8,
            EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
            EGL_RECORDABLE_ANDROID, 1,
            EGL14.EGL_NONE
        )
        val configs = arrayOfNulls<EGLConfig>(1)
        val numConfigs = IntArray(1)
        if (!EGL14.eglChooseConfig(
                mEGLDisplay, attribList, 0, configs, 0, configs.size,
                numConfigs, 0
            )
        ) {
            throw RuntimeException("unable to find RGB888+recordable ES2 EGL config")
        }
        // Configure context for OpenGL ES 2.0.
        val attrib_list = intArrayOf(
            EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
            EGL14.EGL_NONE
        )
        mEGLContext = EGL14.eglCreateContext(
            mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
            attrib_list, 0
        )
        checkEglError("eglCreateContext")
        if (mEGLContext == null) {
            throw RuntimeException("null context")
        }
        // Create a window surface, and attach it to the Surface we received.
        val surfaceAttribs = intArrayOf(
            EGL14.EGL_NONE
        )
        mEGLSurface = EGL14.eglCreateWindowSurface(
            mEGLDisplay, configs[0], surface,
            surfaceAttribs, 0
        )
        checkEglError("eglCreateWindowSurface")
        if (mEGLSurface == null) {
            throw RuntimeException("surface was null")
        }
    }

    /**
     * Discard all resources held by this class, notably the EGL context.  Also releases the
     * Surface that was passed to our constructor.
     */
    fun release() {
        if (mEGLDisplay !== EGL14.EGL_NO_DISPLAY) {
            EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface)
            EGL14.eglDestroyContext(mEGLDisplay, mEGLContext)
            EGL14.eglReleaseThread()
            EGL14.eglTerminate(mEGLDisplay)
        }
        surface!!.release()
        mEGLDisplay = EGL14.EGL_NO_DISPLAY
        mEGLContext = EGL14.EGL_NO_CONTEXT
        mEGLSurface = EGL14.EGL_NO_SURFACE
        surface = null
    }

    /**
     * Makes our EGL context and surface current.
     */
    fun makeCurrent() {
        if (!EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext)) {
            throw RuntimeException("eglMakeCurrent failed")
        }
    }

    fun makeUnCurrent() {
        if (!EGL14.eglMakeCurrent(
                mEGLDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
                EGL14.EGL_NO_CONTEXT
            )
        ) {
            throw RuntimeException("eglMakeCurrent failed")
        }
    }

    /**
     * Calls eglSwapBuffers.  Use this to "publish" the current frame.
     */
    fun swapBuffers(): Boolean {
        //println("swapBuffers")
        return EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface)
    }

    /**
     * Queries the surface's width.
     */
    val width: Int
        get() {
            val value = IntArray(1)
            EGL14.eglQuerySurface(mEGLDisplay, mEGLSurface, EGL14.EGL_WIDTH, value, 0)
            return value[0]
        }

    /**
     * Queries the surface's height.
     */
    val height: Int
        get() {
            val value = IntArray(1)
            EGL14.eglQuerySurface(mEGLDisplay, mEGLSurface, EGL14.EGL_HEIGHT, value, 0)
            return value[0]
        }

    /**
     * Sends the presentation time stamp to EGL.  Time is expressed in nanoseconds.
     */
    fun setPresentationTime(nsecs: Long) {
        EGLExt.eglPresentationTimeANDROID(mEGLDisplay, mEGLSurface, nsecs)
    }

    /**
     * Checks for EGL errors.
     */
    private fun checkEglError(msg: String) {
        var error: Int
        if (EGL14.eglGetError().also { error = it } != EGL14.EGL_SUCCESS) {
            throw RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error))
        }
    }

    companion object {
        private const val TAG = "InputSurface"
        private const val VERBOSE = false
        private const val EGL_RECORDABLE_ANDROID = 0x3142
    }

    /**
     * Creates an InputSurface from a Surface.
     */
    init {
        if (surface == null) {
            throw NullPointerException()
        }
        this.surface = surface
        eglSetup()
    }
}
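And the matching encoder side: the encoder's input Surface is wrapped in an InputSurface, and each frame drawn by OutputSurface.drawImage() is timestamped and published with swapBuffers(). This is a sketch, not the answer's exact wiring; `outputVideoFormat` is assumed to be the MediaFormat built earlier in the answer:

// createInputSurface() must be called after configure() and before start().
val encoder = MediaCodec.createEncoderByType(outputVideoFormat.getString(MediaFormat.KEY_MIME)!!)
encoder.configure(outputVideoFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
val inputSurface = InputSurface(encoder.createInputSurface())
inputSurface.makeCurrent()
encoder.start()

// Per frame, after OutputSurface.drawImage():
//   inputSurface.setPresentationTime(presentationTimeUs * 1000)  // EGL expects nanoseconds
//   inputSurface.swapBuffers()                                   // sends the frame to the encoder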
【讨论】:
