Swift AVFoundation timing info for audio measurements

Posted 2019-08-11 10:51:06

Question:

I am creating an app that performs audio measurements by playing stimulus data while recording the microphone input, and then analysing the recorded data.

I cannot account for the time it takes to initialize and start the audio engine, because it is different on every run and also depends on the hardware in use, etc.

So, I have an audio engine with a tap installed on the hardware input: input 1 is the microphone recording and input 2 is a reference input (also from the hardware). The output is physically Y-split and fed back into input 2.

The app initializes the engine, plays the stimulus audio plus 1 second of silence (to allow propagation time so the microphone captures the whole signal), then stops and closes the engine.

I write the two input buffers out to a WAV file so I can import it into an existing DAW and inspect the signals visually. I can see that every time I take a measurement, the time offset between the two signals is different (even though the microphone has not moved and the hardware is unchanged). I assume this is related to the hardware latency, the time it takes to initialize the engine, and the way the device schedules tasks.

I tried capturing the absolute time with mach_absolute_time in the first buffer callback of each installTap and subtracting the two, and I can see that the difference varies a lot from call to call:

import AVFoundation

class newAVAudioEngine {

    var engine = AVAudioEngine()
    var audioBuffer = AVAudioPCMBuffer()
    var running = true
    var in1Buf: [Float] = Array(repeating: 0, count: totalRecordSize)
    var in2Buf: [Float] = Array(repeating: 0, count: totalRecordSize)
    var buf1current: Int = 0
    var buf2current: Int = 0
    var in1firstRun: Bool = false
    var in2firstRun: Bool = false
    var in1StartTime = 0
    var in2startTime = 0

    func measure(inputSweep: SweepFilter) -> measurement {
        initializeEngine(inputSweep: inputSweep)
        while running == true {
            // spin until one of the taps stops the engine and clears `running`
        }
        let measureResult = measurement.init(meas: meas, ref: ref)
        return measureResult
    }

    func initializeEngine(inputSweep: SweepFilter) {
        buf1current = 0
        buf2current = 0
        in1StartTime = 0
        in2startTime = 0
        in1firstRun = true
        in2firstRun = true
        in1Buf = Array(repeating: 0, count: totalRecordSize)
        in2Buf = Array(repeating: 0, count: totalRecordSize)
        engine.stop()
        engine.reset()
        engine = AVAudioEngine()

        // Source node: render the stimulus sweep, then silence once it has been consumed.
        let srcNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
            let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
            if (Int(frameCount) + time) <= inputSweep.stimulus.count {
                for frame in 0..<Int(frameCount) {
                    let value = inputSweep.stimulus[frame + time]
                    for buffer in ablPointer {
                        let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                        buf[frame] = value
                    }
                }
                time += Int(frameCount)
                return noErr
            } else {
                for frame in 0..<Int(frameCount) {
                    let value = 0
                    for buffer in ablPointer {
                        let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                        buf[frame] = Float(value)
                    }
                }
            }
            return noErr
        }

        let format = engine.outputNode.inputFormat(forBus: 0)
        let stimulusFormat = AVAudioFormat(commonFormat: format.commonFormat,
                                           sampleRate: Double(sampleRate),
                                           channels: 1,
                                           interleaved: format.isInterleaved)

        do {
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord)

            // 128 frames at 44.1 kHz, roughly a 2.9 ms preferred I/O buffer duration
            let ioBufferDuration = 128.0 / 44100.0

            try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(ioBufferDuration)
        } catch {
            assertionFailure("AVAudioSession setup failed")
        }

    let input = engine.inputNode
    let inputFormat = input.inputFormat(forBus: 0)

    print("InputNode Format is \(inputFormat)")
    engine.attach(srcNode)
    engine.connect(srcNode, to: engine.mainMixerNode, format: stimulusFormat)

    if internalRefLoop == true {
        srcNode.installTap(onBus: 0, bufferSize: 1024, format: stimulusFormat, block: { (buffer: AVAudioPCMBuffer, time: AVAudioTime) -> Void in
            // Capture the host time of the first callback on the reference (loopback) tap.
            if self.in2firstRun == true {
                var info = mach_timebase_info()
                mach_timebase_info(&info)
                let currentTime = mach_absolute_time()
                let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
                self.in2startTime = Int(nanos)
                self.in2firstRun = false
            }
            do {
                let floatData = buffer.floatChannelData?.pointee
                for frame in 0..<buffer.frameLength {
                    if (self.buf2current + Int(frame)) < totalRecordSize {
                        self.in2Buf[self.buf2current + Int(frame)] = floatData![Int(frame)]
                    }
                }

                self.buf2current += Int(buffer.frameLength)
                if (self.numberOfSamples + Int(buffer.frameLength)) <= totalRecordSize {
                    try self.stimulusFile.write(from: buffer)
                    self.numberOfSamples += Int(buffer.frameLength)
                } else {
                    self.engine.stop()
                    self.running = false
                }
            } catch {
                print(NSString(string: "write failed"))
            }
        })
    }


    let micAudioConverter = AVAudioConverter(from: inputFormat, to: stimulusFormat!)
    var micChannelMap:[NSNumber] = [0,-1]
    micAudioConverter?.channelMap = micChannelMap

    let refAudioConverter = AVAudioConverter(from: inputFormat, to: stimulusFormat!)
    var refChannelMap:[NSNumber] = [1,-1]
    refAudioConverter?.channelMap = refChannelMap



    //Measurement Tap
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat, block: { (buffer2: AVAudioPCMBuffer, time: AVAudioTime) -> Void in
        //print(NSString(string:"writing"))

        // Capture the host time of the first callback on the microphone tap.
        if self.in1firstRun == true {
            var info = mach_timebase_info()
            mach_timebase_info(&info)
            let currentTime = mach_absolute_time()
            let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
            self.in1StartTime = Int(nanos)
            self.in1firstRun = false
        }
        do {
            // Convert the hardware input to the stimulus format, keeping only channel 0 (microphone).
            let micConvertedBuffer = AVAudioPCMBuffer(pcmFormat: stimulusFormat!, frameCapacity: buffer2.frameCapacity)
            let micInputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
                outStatus.pointee = AVAudioConverterInputStatus.haveData
                return buffer2
            }
            var error: NSError? = nil
            //let status = audioConverter.convert(to: convertedBuffer!, error: &error, withInputFrom: inputBlock)

            let status = micAudioConverter?.convert(to: micConvertedBuffer!, error: &error, withInputFrom: micInputBlock)
            //print(status)
            let floatData = micConvertedBuffer?.floatChannelData?.pointee
            for frame in 0..<micConvertedBuffer!.frameLength {
                if (self.buf1current + Int(frame)) < totalRecordSize {
                    self.in1Buf[self.buf1current + Int(frame)] = floatData![Int(frame)]
                }
                if (self.buf1current + Int(frame)) >= totalRecordSize {
                    self.engine.stop()
                    self.running = false
                }
            }
            self.buf1current += Int(micConvertedBuffer!.frameLength)
            try self.measurementFile.write(from: micConvertedBuffer!)

        } catch {
            print(NSString(string: "write failed"))
        }

        if internalRefLoop == false {
            // No loopback through the source tap, so the reference comes from input channel 1.
            if self.in2firstRun == true {
                var info = mach_timebase_info()
                mach_timebase_info(&info)
                let currentTime = mach_absolute_time()
                let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
                self.in2startTime = Int(nanos)
                self.in2firstRun = false
            }
            do {
                let refConvertedBuffer = AVAudioPCMBuffer(pcmFormat: stimulusFormat!, frameCapacity: buffer2.frameCapacity)
                let refInputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
                    outStatus.pointee = AVAudioConverterInputStatus.haveData
                    return buffer2
                }

                var error: NSError? = nil

                let status = refAudioConverter?.convert(to: refConvertedBuffer!, error: &error, withInputFrom: refInputBlock)
                //print(status)
                let floatData = refConvertedBuffer?.floatChannelData?.pointee
                for frame in 0..<refConvertedBuffer!.frameLength {
                    if (self.buf2current + Int(frame)) < totalRecordSize {
                        self.in2Buf[self.buf2current + Int(frame)] = floatData![Int(frame)]
                    }
                }
                if (self.numberOfSamples + Int(buffer2.frameLength)) <= totalRecordSize {
                    self.buf2current += Int(refConvertedBuffer!.frameLength)
                    try self.stimulusFile.write(from: refConvertedBuffer!)
                } else {
                    self.engine.stop()
                    self.running = false
                }
            } catch {
                print(NSString(string: "write failed"))
            }
        }
    })


    assert(engine.inputNode != nil)
    running = true
    try! engine.start()
    }
}

So the above is my entire class. At the moment every buffer callback from installTap writes the input straight to a WAV file, and that is where I can see the two final results differ on every run. I tried adding the startTime variables and subtracting the two, but the result still varies.

Do I also need to take into account that my output has a latency that may vary from call to call? If so, how do I add that time into the equation? What I am looking for is relative timing for the two inputs and the output so that I can compare them. Different hardware latencies do not matter much as long as I can pin down the exact callback times.
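For illustration only, here is a minimal sketch (not part of the class above) of how a relative timeline could be built from the AVAudioTime value each tap callback already receives, together with the latencies AVAudioSession reports; the helper name tapTimestampInSeconds is made up for this example:

import AVFoundation

// Sketch: convert the AVAudioTime handed to a tap callback into seconds on the
// shared host clock, so first-callback times from different taps can be compared.
func tapTimestampInSeconds(_ when: AVAudioTime) -> Double? {
    guard when.isHostTimeValid else { return nil }
    return AVAudioTime.seconds(forHostTime: when.hostTime)
}

// The session also reports hardware latencies, which could be subtracted from the
// first-callback times before comparing the microphone and reference streams.
let session = AVAudioSession.sharedInstance()
let inputLatency = session.inputLatency    // seconds of input hardware latency
let outputLatency = session.outputLatency  // seconds of output hardware latency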

Comments:

Answer 1:

If you are doing real-time measurements, you may want to use an AVAudioSinkNode instead of a tap. The sink node is new and was introduced alongside the AVAudioSourceNode you are already using. With an installed tap you will not get precise timing.
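A minimal sketch of that idea (assuming a fresh AVAudioEngine rather than the exact engine from the question; AVAudioSinkNode requires iOS 13 / macOS 10.15 or later):

import AVFoundation

let engine = AVAudioEngine()

// The sink node's receiver block gets the raw AudioTimeStamp for every input buffer.
// mHostTime is on the same clock as mach_absolute_time(); mSampleTime counts frames.
let sinkNode = AVAudioSinkNode { (timestamp, frameCount, audioBufferList) -> OSStatus in
    let hostTime = timestamp.pointee.mHostTime
    let sampleTime = timestamp.pointee.mSampleTime
    // Don't print from a real render callback in production; store the values instead.
    print("got \(frameCount) frames at hostTime \(hostTime), sampleTime \(sampleTime)")
    return noErr
}

engine.attach(sinkNode)
engine.connect(engine.inputNode, to: sinkNode, format: nil)  // nil = use the input node's format
try! engine.start()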

Comments:
