iOS app crashes when headphones are plugged in or unplugged

Posted: 2013-05-01 02:29:50

【Question】:

I am running a SIP audio streaming app on iOS 6.1.3, on an iPad 2 and a new iPad.

I start my app on my iPad (nothing plugged in). Audio works. I plug in the headphones. The app crashes: malloc: error for object 0x....: pointer being freed was not allocated, or EXC_BAD_ACCESS.

Or the other way around:

I start my app on my iPad (headphones plugged in). Audio comes out of the headphones. I unplug the headphones. The app crashes: malloc: error for object 0x....: pointer being freed was not allocated, or EXC_BAD_ACCESS.

The app code uses the AudioUnit API, based on the sample code at http://code.google.com/p/ios-coreaudio-example/ (see below).

I use a kAudioSessionProperty_AudioRouteChange callback to be aware of route changes, so the OS sound manager drives three callbacks: 1) one that processes the recorded microphone samples, 2) one that supplies samples to the speaker, and 3) one that signals a change in the audio hardware.

After a lot of testing, my feeling is that the tricky code is the one performing the microphone capture. After a plug/unplug action, most of the time the recording callback is invoked a few more times before the RouteChange callback runs, causing a later "segmentation fault", and the RouteChange callback is then never called. More specifically, I think the AudioUnitRender function causes a bad memory access without raising any exception at all.

My feeling is that the non-atomic recording callback races against the OS updating the structures tied to the sound device. So the more concurrent the OS hardware update and the recording callback are, the more likely the non-atomicity of the recording callback is to bite.

I modified my code to make the recording callback as thin as possible, but my feeling is that the high processing load brought by the other threads of my app is provoking the concurrency race described before. So malloc/free errors then show up in other parts of the code because of the AudioUnitRender bad access.

I tried to reduce the recording callback latency with:

UInt32 numFrames = 256;
UInt32 dataSize = sizeof(numFrames);

AudioUnitSetProperty(audioUnit,
    kAudioUnitProperty_MaximumFramesPerSlice,
    kAudioUnitScope_Global,
    0,
    &numFrames,
    dataSize);

I also tried to hoist the incriminated code onto the main queue:

dispatch_async(dispatch_get_main_queue(), ^{
    // ... the incriminated code ...
});

Does anyone have a hint or a solution for this? To reproduce the error, here is my audio session code:

//
//  IosAudioController.m
//  Aruts
//
//  Created by Simon Epskamp on 10/11/10.
//  Copyright 2010 __MyCompanyName__. All rights reserved.
//

#import "IosAudioController.h"
#import <AudioToolbox/AudioToolbox.h>

#define kOutputBus 0
#define kInputBus 1

IosAudioController* iosAudio;

void checkStatus(int status) {
    if (status) {
        printf("Status not 0! %d\n", status);
        // exit(1);
    }
}


/**
 * This callback is called when new audio data from the microphone is available.
 */
static OSStatus recordingCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {

    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    NSLog(@"Recording Callback 1 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // Then:
    // Obtain recorded samples

    OSStatus status;
    status = AudioUnitRender([iosAudio audioUnit],
        ioActionFlags, 
        inTimeStamp,
        inBusNumber,
        inNumberFrames,
        &bufferList);
        checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    NSLog(@"Recording Callback 2 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}


/**
 * This callback is called when the audioUnit needs new data to play through the
 * speakers. If you don't have any, just don't write anything in the buffers
 */
static OSStatus playbackCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {
        // Notes: ioData contains buffers (may be more than one!)
        // Fill them up as much as you can.
        // Remember to set the size value in each 
        // buffer to match how much data is in the buffer.

    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        // in practice we will only ever have 1 buffer, since audio format is mono
        AudioBuffer buffer = ioData->mBuffers[i];

        // NSLog(@"  Buffer %d has %d channels and wants %d bytes of data.", i, 
            buffer.mNumberChannels, buffer.mDataByteSize);

        // copy temporary buffer data to output buffer
        UInt32 size = MIN(buffer.mDataByteSize,
            [iosAudio tempBuffer].mDataByteSize);

        // don't copy more data than we have, or than fits
        memcpy(buffer.mData, [iosAudio tempBuffer].mData, size);
        // indicate how much data we wrote in the buffer
        buffer.mDataByteSize = size;

        // uncomment to hear random noise
        /*
         * UInt16 *frameBuffer = buffer.mData;
         * for (int j = 0; j < inNumberFrames; j++) {
         *     frameBuffer[j] = rand();
         * }
         */
    }

    return noErr;
}


@implementation IosAudioController
@synthesize audioUnit, tempBuffer;

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID == kAudioSessionProperty_AudioRouteChange) {

        UInt32 isAudioInputAvailable;
        UInt32 size = sizeof(isAudioInputAvailable);
        CFStringRef newRoute;
        size = sizeof(CFStringRef);

        AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);

        if (newRoute) {
            CFIndex length = CFStringGetLength(newRoute);
            CFIndex maxSize = CFStringGetMaximumSizeForEncoding(length,
                kCFStringEncodingUTF8);

            char *buffer = (char *)malloc(maxSize);
            CFStringGetCString(newRoute, buffer, maxSize,
                kCFStringEncodingUTF8);

            //CFShow(newRoute);
            printf("New route is %s\n",buffer);

            if (CFStringCompare(newRoute, CFSTR("HeadsetInOut"), NULL) ==
                kCFCompareEqualTo) // headset plugged in
            {
                printf("Headset\n");
            } else {
                printf("Another device\n");

                UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
                AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                    sizeof(audioRouteOverride), &audioRouteOverride);
            }
            printf("New route is %s\n", buffer);
            free(buffer);
        }
        newRoute = nil;
    }
}


/**
 * Initialize the audioUnit and allocate our own temporary buffer.
 * The temporary buffer will hold the latest data coming in from the microphone,
 * and will be copied to the output when this is requested.
 */
- (id) init {
    self = [super init];
    OSStatus status;

    // Initialize and configure the audio session
    AudioSessionInitialize(NULL, NULL, NULL, self);

    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, 
        sizeof(audioCategory), &audioCategory);
    AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, 
        propListener, self);

    Float32 preferredBufferSize = .020;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
        sizeof(preferredBufferSize), &preferredBufferSize);

    AudioSessionSetActive(true);

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = 
        kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Input, 
        kInputBus,
        &flag, 
        sizeof(flag));
        checkStatus(status);

    // Enable IO for playback
    flag = 1;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Output, 
        kOutputBus,
        &flag, 
        sizeof(flag));

    checkStatus(status);

    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 8000.00;
    //audioFormat.mSampleRate = 44100.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = 
        kAudioFormatFlagsCanonical/* kAudioFormatFlagIsSignedInteger | 
        kAudioFormatFlagIsPacked*/;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    // Apply format
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Output, 
        kInputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Input, 
        kOutputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_SetInputCallback,
        kAudioUnitScope_Global, 
        kInputBus, 
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);
    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioUnitProperty_SetRenderCallback, 
        kAudioUnitScope_Global, 
        kOutputBus,
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);

    // Disable buffer allocation for the recorder (optional - do this if we want to 
    // pass in our own)

    flag = 0;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_ShouldAllocateBuffer,
        kAudioUnitScope_Output, 
        kInputBus,
        &flag, 
        sizeof(flag)); 


    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
    kAudioUnitProperty_ShouldAllocateBuffer, 
        kAudioUnitScope_Output,
        kOutputBus,
        &flag,
        sizeof(flag));

    // Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per 
    // frame, thus 2 bytes per frame).
    // Practice learns the buffers used contain 512 frames,
    // if this changes it will be fixed in processAudio.
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = 512 * 2;
    tempBuffer.mData = malloc( 512 * 2 );

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);

    return self;
}


/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}


/**
 * Stop the audioUnit
 */
- (void) stop {
    OSStatus status = AudioOutputUnitStop(audioUnit);
    checkStatus(status);
}


/**
 * Change this function to decide what is done with incoming
 * audio data from the microphone.
 * Right now we copy it to our own temporary buffer.
 */
- (void) processAudio: (AudioBufferList*) bufferList {
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    // fix tempBuffer size if it's the wrong size
    if (tempBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        free(tempBuffer.mData);
        tempBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        tempBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // copy incoming audio data to temporary buffer
    memcpy(tempBuffer.mData, bufferList->mBuffers[0].mData, 
        bufferList->mBuffers[0].mDataByteSize);
    usleep(1000000); // <- TO REPRODUCE THE ERROR, CONCURRENCY MORE LIKELY
}



/**
 * Clean up.
 */
- (void) dealloc {
    AudioUnitUninitialize(audioUnit);
    free(tempBuffer.mData);
    [super dealloc]; // call super last under manual reference counting
}


@end
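
For context, this is roughly how the controller is driven from the app side (a minimal sketch using the iosAudio global declared at the top of the listing; the call sites are illustrative):

iosAudio = [[IosAudioController alloc] init];
[iosAudio start];
// ... the SIP streaming session runs and the callbacks above fire ...
[iosAudio stop];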

【Comments】:

Have you tried adding a breakpoint at malloc_error_break? It should give you the pointer that is being freed twice.

Are you leaking char *buffer = (char *)malloc(maxSize);? By the way, you don't need any of that code: CFStringRef is toll-free bridged to NSString, so you can simply cast newRoute to NSString* and use NSString methods.

I forgot to mention that the test code doesn't include the fixes for the issues @Mar0ux highlighted; in my app they are already fixed (free(buffer); and newRoute = nil;). The error is not always malloc: error for object 0x....: pointer being freed was not allocated; other times it is an EXC_BAD_ACCESS inside memcpy.

I also tried removing the kAudioOutputUnitProperty_SetInputCallback callback, keeping only the kAudioUnitProperty_SetRenderCallback callback, and moving the recording callback code to the end of the playback callback code. Also, following the comment // Disable buffer allocation for the recorder (optional - do this if we want to pass in our own), I removed the kAudioUnitProperty_ShouldAllocateBuffer property on kInputBus, with the same result.

I made deep changes to my SIP app: I removed the ffmpeg MPEG4 video encoding for a considerable saving in processing load. However, the problem persists. Any hints/clues/solutions?
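
As an aside, the toll-free bridging suggested in the comments above would look something like this sketch (pre-ARC, matching the era of this code; routeName is an illustrative name):

CFStringRef newRoute = NULL;
UInt32 size = sizeof(newRoute);
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);

// CFStringRef is toll-free bridged to NSString, so a plain cast works under MRC
NSString *routeName = (NSString *)newRoute;
if ([routeName isEqualToString:@"HeadsetInOut"]) {
    printf("Headset\n");
}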

【Answer 1】:

In my tests, the line that ultimately triggers the SEGV error is

AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
    sizeof(audioRouteOverride), &audioRouteOverride);

Changing the properties of an AudioUnit chain mid-flight is always tricky, but if you stop the AudioUnit before rerouting and restart it afterwards, it uses up all the buffers it has stored and then carries on with the new parameters.

Is that acceptable, or do you need a smaller gap between changing the route and restarting the recording?

What I did was:

void propListener(void *inClientData,
              AudioSessionPropertyID inID,
              UInt32 inDataSize,
              const void *inData) {

    [iosAudio stop];
    // ...
    [iosAudio start];
}

My iPhone 5 no longer crashes (your mileage may vary on different hardware).

The most logical explanation I have for this behavior, supported to some extent by these tests, is that the render pipeline is asynchronous. If you take a long time to manipulate the buffers, they simply keep getting queued. But if you change the settings of the AudioUnit, you trigger a massive reset in the render queue with unknown side effects. The trouble is that these changes are synchronous, and they retroactively affect all the asynchronous calls patiently waiting for their turn.

If you don't care about the missed samples, you can do something like this:

static BOOL isStopped = NO;

static OSStatus recordingCallback(void *inRefCon, /* ... */) {

    if (isStopped) {
        NSLog(@"Stopped, ignoring");
        return noErr;
    }
    // ...
}


static OSStatus playbackCallback(void *inRefCon, /* ... */) {

    if (isStopped) {
        NSLog(@"Stopped, ignoring");
        return noErr;
    }
    // ...
}


// ...

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(_audioUnit);
    checkStatus(status);

    isStopped = NO;
}


/**
 * Stop the audioUnit
 */
- (void) stop {

    isStopped = YES;

    OSStatus status = AudioOutputUnitStop(_audioUnit);
    checkStatus(status);
}


// ...
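
Putting the two halves together, the question's route-change handler would then look roughly like this (a sketch only; it applies the speaker override unconditionally, so the route comparison from the original propListener would still be needed in practice):

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID != kAudioSessionProperty_AudioRouteChange) return;

    // Stop the unit (which also raises isStopped) before touching the route...
    [iosAudio stop];

    // ...apply the override that used to crash mid-flight...
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
        sizeof(audioRouteOverride), &audioRouteOverride);

    // ...and restart only once the new route is in place.
    [iosAudio start];
}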

【Discussion】:

Thanks @krug, your suggestion fixes the problem for this code. However, in my app the recording and playback callbacks are invoked 3-4 times after the plug/unplug event and before the propListener callback is called. During those recording and playback callbacks, some external structure pointers keep being modified to wrong values until the propListener callback runs, at which point the corruption stops, but the damage is already done. So at the end of the communication, when the code tears down the external structures, the pointer being freed was not allocated error shows up later.

Could a solution be to lock the process right after the stop, and unlock it once all the samples have been processed?

Hi @krug, I already tried protecting the recording and playback callbacks with @synchronized(iosAudio) { ... } (as a mutex lock); as soon as your first answer changed, I wrapped the contents of the recording and playback callbacks in synchronized blocks, with the same results.

Yes, but your AudioUnit doesn't change, so @synchronized doesn't lock anything.

In case it helps, I added a way to ignore the samples that arrive during the switch.
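
As a sketch of the locking idea floated in this discussion (illustrative only: one shared pthread_mutex instead of @synchronized on the audio unit; handleRouteChange is a hypothetical helper, and blocking inside a real-time audio callback is generally discouraged, so this trades glitches for memory safety):

#include <pthread.h>

static pthread_mutex_t audioLock = PTHREAD_MUTEX_INITIALIZER;

static OSStatus recordingCallback(void *inRefCon, /* ... */) {
    // Hold the lock for the whole render/process step so a concurrent
    // route change cannot pull the underlying structures out from under us.
    pthread_mutex_lock(&audioLock);
    // ... AudioUnitRender + processAudio as in the question ...
    pthread_mutex_unlock(&audioLock);
    return noErr;
}

void handleRouteChange(void) {
    pthread_mutex_lock(&audioLock);   // waits for any in-flight callback
    [iosAudio stop];
    // ... change the route here ...
    [iosAudio start];
    pthread_mutex_unlock(&audioLock);
}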
