Details on using the AVAudioEngine
Posted: 2015-10-15 18:30:28

Background: I found an Apple WWDC session called "AVAudioEngine in Practice" and am trying to make something along the lines of the last demo shown at 43:35 (https://youtu.be/FlMaxen2eyw?t=2614). I'm using SpriteKit instead of SceneKit, but the principle is the same: I want to generate spheres, throw them around, and when they collide the engine plays a sound unique to each sphere.
Questions:
I want a unique AudioPlayerNode attached to each SpriteKitNode, so that I can play a different sound for each sphere. Right now, if I create two spheres and set a different pitch for each of their AudioPlayerNodes, only the most recently created AudioPlayerNode seems to play, even when the original sphere collides. During the demo, he mentions "I'm tying a player, a dedicated player, to each ball". How would I go about doing this?
There are audio clicks/artifacts every time a new collision happens. I'm assuming this has to do with the AVAudioPlayerNodeBufferOptions and/or the fact that I'm trying to create, schedule and consume buffers very quickly each time contact occurs, which is not the most efficient approach. What would be a good workaround for this?
Code: As mentioned in the video, "...and for every ball that is born into this world, a new player node is also created". I have a separate class for the spheres, with a method that returns a SpriteKitNode and also creates an AudioPlayerNode every time it is called:
import UIKit
import SpriteKit
import AVFoundation

class Sphere {
    var sphere: SKSpriteNode = SKSpriteNode(color: UIColor(), size: CGSize())
    var sphereScale: CGFloat = CGFloat(0.01)
    var spherePlayer = AVAudioPlayerNode()
    let audio = Audio()
    let sphereCollision: UInt32 = 0x1 << 0

    func createSphere(position: CGPoint, pitch: Float) -> SKSpriteNode {
        let texture = SKTexture(imageNamed: "Slice")
        let collisionTexture = SKTexture(imageNamed: "Collision")

        // Define the node
        sphere = SKSpriteNode(texture: texture, size: texture.size())
        sphere.position = position
        sphere.name = "sphere"
        sphere.physicsBody = SKPhysicsBody(texture: collisionTexture, size: sphere.size)
        sphere.physicsBody?.dynamic = true
        sphere.physicsBody?.mass = 0
        sphere.physicsBody?.restitution = 0.5
        sphere.physicsBody?.usesPreciseCollisionDetection = true
        sphere.physicsBody?.categoryBitMask = sphereCollision
        sphere.physicsBody?.contactTestBitMask = sphereCollision
        sphere.zPosition = 1

        // Create AudioPlayerNode
        spherePlayer = audio.createPlayer(pitch)

        return sphere
    }
}
Here is my Audio class, in which I create the AudioPCMBuffers and AudioPlayerNodes:
import AVFoundation

class Audio {
    let engine: AVAudioEngine = AVAudioEngine()

    func createBuffer(name: String, type: String) -> AVAudioPCMBuffer {
        let audioFilePath = NSBundle.mainBundle().URLForResource(name as String, withExtension: type as String)!
        let audioFile = try! AVAudioFile(forReading: audioFilePath)
        let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: UInt32(audioFile.length))
        try! audioFile.readIntoBuffer(buffer)
        return buffer
    }

    func createPlayer(pitch: Float) -> AVAudioPlayerNode {
        let player = AVAudioPlayerNode()
        let buffer = self.createBuffer("PianoC1", type: "wav")
        let pitcher = AVAudioUnitTimePitch()
        let delay = AVAudioUnitDelay()
        pitcher.pitch = pitch
        delay.delayTime = 0.2
        delay.feedback = 90
        delay.wetDryMix = 0

        engine.attachNode(pitcher)
        engine.attachNode(player)
        engine.attachNode(delay)
        engine.connect(player, to: pitcher, format: buffer.format)
        engine.connect(pitcher, to: delay, format: buffer.format)
        engine.connect(delay, to: engine.mainMixerNode, format: buffer.format)

        engine.prepare()
        try! engine.start()

        return player
    }
}
Then in my GameScene class I test for collisions, schedule a buffer and play the AudioPlayerNode when contact has occurred:
func didBeginContact(contact: SKPhysicsContact) {
    let firstBody: SKPhysicsBody = contact.bodyA

    if (firstBody.categoryBitMask & sphere.sphereCollision != 0) {
        let buffer1 = audio.createBuffer("PianoC1", type: "wav")
        sphere.spherePlayer.scheduleBuffer(buffer1, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil)
        sphere.spherePlayer.play()
    }
}
I'm new to Swift and only have basic programming knowledge, so any suggestions/criticism are welcome.
Answer 1: I've been working with AVAudioEngine in SceneKit, trying to do something else, but this will be what you are looking for:
https://developer.apple.com/library/mac/samplecode/AVAEGamingExample/Listings/AVAEGamingExample_AudioEngine_m.html
It explains the process of:
1 - Instantiating your own AVAudioEngine subclass
2 - Methods to load PCMBuffers for each AVAudioPlayer
3 - Changing your Environment node's parameters to accommodate the reverb for the large number of pinball objects
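As a rough illustration of the first two points (one dedicated player per object, with its buffer loaded once up front instead of on every contact), here is a minimal sketch in the same Swift 2 style as the question's code. The SphereAudio class, its dictionary bookkeeping and all names in it are placeholders of mine, not code from the linked sample:

import SpriteKit
import AVFoundation

// Minimal sketch: one dedicated AVAudioPlayerNode per sphere node, all sharing
// a buffer that is loaded once instead of being recreated on every collision.
class SphereAudio {
    let engine = AVAudioEngine()
    let sharedBuffer: AVAudioPCMBuffer           // loaded once, reused by every player
    var players = [SKNode: AVAudioPlayerNode]()  // one player per sphere node

    init(buffer: AVAudioPCMBuffer) {
        sharedBuffer = buffer
    }

    func addPlayer(node: SKNode, pitch: Float) {
        let player = AVAudioPlayerNode()
        let pitcher = AVAudioUnitTimePitch()
        pitcher.pitch = pitch
        engine.attachNode(player)
        engine.attachNode(pitcher)
        engine.connect(player, to: pitcher, format: sharedBuffer.format)
        engine.connect(pitcher, to: engine.mainMixerNode, format: sharedBuffer.format)
        players[node] = player                   // remember which player belongs to which sphere
        if !engine.running {
            try! engine.start()                  // error handling omitted, as in the question's code
        }
    }

    func playCollision(node: SKNode) {
        guard let player = players[node] else { return }
        player.scheduleBuffer(sharedBuffer, atTime: nil, options: .Interrupts, completionHandler: nil)
        player.play()
    }
}

In didBeginContact you would then look up the player for contact.bodyA.node (or bodyB.node) rather than always triggering the most recently created one, and remove the dictionary entry (and detach the player) when a sphere leaves the scene.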
EDIT: Converted, tested and added a few features:
1 - You create a subclass of AVAudioEngine, name it AudioLayerEngine for example. This gives you access to the AVAudioUnit effects such as distortion, delay, pitch and many of the other effects available as AudioUnits.
2 - Initialise it by setting up some configuration for the audio engine, such as the rendering algorithm, and expose the AVAudioEnvironmentNode so you can play with the 3D position of your SCNNode or SKNode objects (if you are in 2D but want 3D effects).
3 - Create some helper methods to load presets for each AudioUnit effect you want.
4 - Create a helper method to create an audio player and then add it to whatever node you want, as many times as you want, since an SCNNode accepts an .audioPlayers method which returns [AVAudioPlayer] or [SCNAudioPlayer].
5 - Start playing.
I've pasted the entire class for reference so that you can then build on it as you wish, but keep in mind that if you couple this with SceneKit or SpriteKit, you use this audio engine to manage all your sounds instead of SceneKit's internal AVAudioEngine. This means that you instantiate it in your gameView during the awakeFromNib method (there is a short usage sketch after the class listing below).
import Foundation
import SceneKit
import AVFoundation
class AudioLayerEngine: AVAudioEngine {
    var engine: AVAudioEngine!
    var environment: AVAudioEnvironmentNode!
    var outputBuffer: AVAudioPCMBuffer!
    var voicePlayer: AVAudioPlayerNode!
    var multiChannelEnabled: Bool!
    // audio effects
    let delay = AVAudioUnitDelay()
    let distortion = AVAudioUnitDistortion()
    let reverb = AVAudioUnitReverb()

    override init() {
        super.init()
        engine = AVAudioEngine()
        environment = AVAudioEnvironmentNode()
        engine.attachNode(self.environment)
        voicePlayer = AVAudioPlayerNode()
        engine.attachNode(voicePlayer)
        voicePlayer.volume = 1.0
        outputBuffer = loadVoice()
        wireEngine()
        startEngine()
        voicePlayer.scheduleBuffer(self.outputBuffer, completionHandler: nil)
        voicePlayer.play()
    }

    func startEngine() {
        do {
            try engine.start()
        } catch {
            print("error loading engine")
        }
    }

    func loadVoice() -> AVAudioPCMBuffer {
        let URL = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("art.scnassets/sounds/interface/test", ofType: "aiff")!)
        do {
            let soundFile = try AVAudioFile(forReading: URL, commonFormat: AVAudioCommonFormat.PCMFormatFloat32, interleaved: false)
            outputBuffer = AVAudioPCMBuffer(PCMFormat: soundFile.processingFormat, frameCapacity: AVAudioFrameCount(soundFile.length))
            do {
                try soundFile.readIntoBuffer(outputBuffer)
            } catch {
                print("something went wrong with loading the buffer into the sound file")
            }
            print("returning buffer")
            return outputBuffer
        } catch {
            return outputBuffer
        }
    }

    func wireEngine() {
        loadDistortionPreset(AVAudioUnitDistortionPreset.MultiCellphoneConcert)
        engine.attachNode(distortion)
        engine.attachNode(delay)
        engine.connect(voicePlayer, to: distortion, format: self.outputBuffer.format)
        engine.connect(distortion, to: delay, format: self.outputBuffer.format)
        engine.connect(delay, to: environment, format: self.outputBuffer.format)
        engine.connect(environment, to: engine.outputNode, format: constructOutputFormatForEnvironment())
    }

    func constructOutputFormatForEnvironment() -> AVAudioFormat {
        let outputChannelCount = self.engine.outputNode.outputFormatForBus(1).channelCount
        let hardwareSampleRate = self.engine.outputNode.outputFormatForBus(1).sampleRate
        let environmentOutputConnectionFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: outputChannelCount)
        multiChannelEnabled = false
        return environmentOutputConnectionFormat
    }

    func loadDistortionPreset(preset: AVAudioUnitDistortionPreset) {
        distortion.loadFactoryPreset(preset)
    }

    func createPlayer(node: SCNNode) {
        let player = AVAudioPlayerNode()
        distortion.loadFactoryPreset(AVAudioUnitDistortionPreset.SpeechCosmicInterference)
        engine.attachNode(player)
        engine.attachNode(distortion)
        engine.connect(player, to: distortion, format: outputBuffer.format)
        engine.connect(distortion, to: environment, format: constructOutputFormatForEnvironment())
        let algo = AVAudio3DMixingRenderingAlgorithm.HRTF
        player.renderingAlgorithm = algo
        player.reverbBlend = 0.3
        player.renderingAlgorithm = AVAudio3DMixingRenderingAlgorithm.HRTF
    }
}
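And as a rough usage sketch of the "instantiate it in your gameView during awakeFromNib" point: the GameViewController below, its spawnBall method and the sphere geometry are placeholders of mine, not part of the class above. Note that createPlayer(node:) as written does not return the player it creates, so in practice you would probably want it to return the player (or store it per node) so you can schedule buffers on it later.

import UIKit
import SceneKit

// Hypothetical usage sketch (Swift 2 syntax, placeholder names).
class GameViewController: UIViewController {

    // One engine for all game audio, instead of SceneKit's internal AVAudioEngine.
    var audioEngine: AudioLayerEngine!

    override func awakeFromNib() {
        super.awakeFromNib()
        audioEngine = AudioLayerEngine()
    }

    func spawnBall(position: SCNVector3, scene: SCNScene) {
        let ball = SCNNode(geometry: SCNSphere(radius: 0.5))
        ball.position = position
        scene.rootNode.addChildNode(ball)
        // Wire up a dedicated, positional player chain for this ball.
        audioEngine.createPlayer(ball)
    }
}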
Comments:
While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - From Review
@BeauNouvelle I've edited the answer with the full test code and an extra feature.