iOS split stereo mp3 to mono aac

Posted: 2016-02-14 12:16:46

Question:

I'm converting mp3 to m4a on iOS using the code from this answer: iOS swift convert mp3 to aac

But I need to extract the left and right channels into separate m4a files.

The following code works; it splits my audio into separate NSData buffers:

let leftdata: NSMutableData! = NSMutableData()
let rightdata: NSMutableData! = NSMutableData()

let buff: CMBlockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer!)!

var lengthAtOffset: size_t = 0
var totalLength: Int = 0
var data: UnsafeMutablePointer<Int8> = nil

if CMBlockBufferGetDataPointer(buff, 0, &lengthAtOffset, &totalLength, &data) != noErr {
    print("some sort of error happened")
} else {
    // Interleaved 16-bit stereo: each 4-byte frame is [left sample][right sample]
    for i in 0.stride(to: totalLength, by: 2) {
        if i % 4 == 0 {
            leftdata.appendBytes(data + i, length: 2)
        } else {
            rightdata.appendBytes(data + i, length: 2)
        }
    }
}

data = nil
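The de-interleaving logic in the Swift loop above is platform-independent, so it can be checked in isolation. Here is a small Python sketch of the same byte-level split (the function name `split_stereo` and the test values are illustrative, not from the original post):

```python
import struct

def split_stereo(interleaved):
    """Split interleaved 16-bit stereo PCM bytes into left/right channel bytes.

    Mirrors the Swift loop: walk the buffer 2 bytes (one sample) at a time;
    offsets divisible by 4 start a left sample, the others a right sample.
    """
    left = bytearray()
    right = bytearray()
    for i in range(0, len(interleaved), 2):
        if i % 4 == 0:
            left += interleaved[i:i + 2]
        else:
            right += interleaved[i:i + 2]
    return bytes(left), bytes(right)

# Two stereo frames: L0=1, R0=2, L1=3, R1=4 (little-endian signed 16-bit)
frames = struct.pack('<4h', 1, 2, 3, 4)
left, right = split_stereo(frames)
print(struct.unpack('<2h', left))   # (1, 3)
print(struct.unpack('<2h', right))  # (2, 4)
```

Each output buffer ends up half the size of the input, which is what the mono `AudioStreamBasicDescription` later in this post expects.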

But now I need to convert this into a CMSampleBuffer so I can append it to an asset writer. How do I convert NSData to a sample buffer?

Update (Nov 24): I now have the following code, which tries to convert the NSData to a CMSampleBuffer. I can't tell where it's failing:

var dataPointer: UnsafeMutablePointer<Void> = UnsafeMutablePointer(leftdata.bytes)

var cmblockbufferref:CMBlockBufferRef?

var status = CMBlockBufferCreateWithMemoryBlock(nil, dataPointer, leftdata.length, kCFAllocatorNull, nil, 0, leftdata.length, 0, &cmblockbufferref)

var audioFormat:AudioStreamBasicDescription = AudioStreamBasicDescription()
audioFormat.mSampleRate = 44100
audioFormat.mFormatID = kAudioFormatLinearPCM
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian
audioFormat.mBytesPerPacket = 2
audioFormat.mFramesPerPacket = 1
audioFormat.mBytesPerFrame = 2
audioFormat.mChannelsPerFrame = 1
audioFormat.mBitsPerChannel = 16
audioFormat.mReserved = 0

var format:CMFormatDescriptionRef?

status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format);

var timing:CMSampleTimingInfo = CMSampleTimingInfo(duration: CMTimeMake(1, 44100), presentationTimeStamp: kCMTimeZero, decodeTimeStamp: kCMTimeInvalid)

var leftSampleBuffer:CMSampleBufferRef?

status = CMSampleBufferCreate(kCFAllocatorDefault, cmblockbufferref, true, nil, nil, format, leftdata.length, 1, &timing, 0, nil, &leftSampleBuffer)

self.assetWriterAudioInput.appendSampleBuffer(leftSampleBuffer!)

Comments:

I assume the CMBlockBuffer stores the two-channel (stereo) audio in a single interleaved array, laid out as [leftChannel_sample0, rightChannel_sample0, leftChannel_sample1, ...]?

Answer 1:

We finally got it working! Here is the final Swift code we used to convert NSData to a sample buffer:

func NSDataToSample(data: NSData) -> CMSampleBufferRef? {

    var cmBlockBufferRef: CMBlockBufferRef?

    // Create an empty block buffer, then copy the NSData bytes into it
    var status = CMBlockBufferCreateWithMemoryBlock(nil, nil, data.length, nil, nil, 0, data.length, 0, &cmBlockBufferRef)

    if status != 0 {
        return nil
    }

    status = CMBlockBufferReplaceDataBytes(data.bytes, cmBlockBufferRef!, 0, data.length)

    if status != 0 {
        return nil
    }

    var audioFormat: AudioStreamBasicDescription = AudioStreamBasicDescription()

    audioFormat.mSampleRate = 44100
    audioFormat.mFormatID = kAudioFormatLinearPCM
    audioFormat.mFormatFlags = 0xc // kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
    audioFormat.mBytesPerPacket = 2
    audioFormat.mFramesPerPacket = 1
    audioFormat.mBytesPerFrame = 2
    audioFormat.mChannelsPerFrame = 1
    audioFormat.mBitsPerChannel = 16
    audioFormat.mReserved = 0

    var format: CMFormatDescriptionRef?

    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format)

    if status != 0 {
        return nil
    }

    var sampleBuffer: CMSampleBufferRef?

    // numSamples = data.length / 2: each mono 16-bit sample occupies 2 bytes
    status = CMSampleBufferCreate(kCFAllocatorDefault, cmBlockBufferRef!, true, nil, nil, format, data.length / 2, 0, nil, 0, nil, &sampleBuffer)

    if status != 0 {
        return nil
    }

    return sampleBuffer
}

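The sample count passed to CMSampleBufferCreate (data.length / 2) follows from the PCM layout declared in the AudioStreamBasicDescription. As a quick sanity check, the arithmetic can be verified outside of CoreMedia; `pcm_layout` below is a hypothetical helper that just mirrors those field relationships:

```python
def pcm_layout(sample_rate, channels, bits_per_channel, byte_length):
    """Derive packed linear-PCM bookkeeping as used by the ASBD above.

    For packed PCM: bytesPerFrame = channels * bitsPerChannel / 8, and with
    mFramesPerPacket = 1, bytesPerPacket equals bytesPerFrame.
    """
    bytes_per_frame = channels * bits_per_channel // 8
    num_samples = byte_length // bytes_per_frame
    duration_sec = num_samples / sample_rate
    return bytes_per_frame, num_samples, duration_sec

# Mono 16-bit 44.1 kHz: one second of audio is 88200 bytes
bpf, n, dur = pcm_layout(44100, 1, 16, 88200)
print(bpf, n, dur)  # 2 44100 1.0
```

With channels = 1 and 16 bits per channel, bytes_per_frame is 2, which is why dividing the NSData length by 2 yields the number of samples for the mono buffer.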