iOS voice recorder visualization in Swift
【Title】iOS voice recorder visualization in Swift 【Posted】2015-03-31 13:27:26 【Question】I want to show a visualization while recording, like the stock Voice Memos app does:
I know I can get the levels with updateMeters, peakPowerForChannel: and averagePowerForChannel:. But how do I draw the graph? Do I have to draw it myself, and are there any free or paid sources I could use?
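For the metering part the question mentions, a minimal sketch might look like the following. It assumes an already configured AVAudioRecorder; the `LevelMeter` wrapper class and the 0.05 s polling interval are my own illustrative choices, not from the question:

```swift
import AVFoundation

// Enable metering on the recorder, then poll it from a timer and feed the
// readings into whatever view renders the bars.
final class LevelMeter {
    let recorder: AVAudioRecorder
    var levelTimer: Timer?

    init(recorder: AVAudioRecorder) {
        self.recorder = recorder
        recorder.isMeteringEnabled = true   // must be enabled before reading power
    }

    func start() {
        levelTimer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.recorder.updateMeters()
            // Both values are in dBFS: roughly -160 (silence) up to 0 (full scale)
            let average = self.recorder.averagePower(forChannel: 0)
            let peak = self.recorder.peakPower(forChannel: 0)
            // Hand `average`/`peak` to your drawing code here
            _ = (average, peak)
        }
    }

    func stop() {
        levelTimer?.invalidate()
        levelTimer = nil
    }
}
```

There is no built-in waveform view; the drawing itself is custom, as the answer below shows.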
【Question comments】:
Have you tried saving the levels and just drawing the lines in drawRect?
No. Is that the best approach? :)
I don't know. But part of programming is trying things out and seeing what works and what doesn't :).
I was hoping someone had already tried it so I wouldn't have to reinvent the wheel :) This seems like a common UI for voice apps.
Did you ever find a solution?
【Answer 1】:
I had the same problem. I wanted to create a Voice Memos clone. I recently found a solution and wrote an article about it on Medium.
I created a subclass of UIView and drew the bars in draw(_:) with Core Graphics.
import UIKit

// Helper used below; defined in the article's full source (assumed):
extension Int {
    var degreesToRadians: CGFloat { return CGFloat(self) * .pi / 180 }
}

class AudioVisualizerView: UIView {
    // Bar width
    var barWidth: CGFloat = 4.0
    // Indicate that waveform should draw active/inactive state
    var active = false {
        didSet {
            if self.active {
                self.color = UIColor.red.cgColor
            } else {
                self.color = UIColor.gray.cgColor
            }
        }
    }
    // Color for bars
    var color = UIColor.gray.cgColor
    // Given waveforms
    var waveforms: [Int] = Array(repeating: 0, count: 100)

    // MARK: - Init
    override init(frame: CGRect) {
        super.init(frame: frame)
        self.backgroundColor = UIColor.clear
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        self.backgroundColor = UIColor.clear
    }

    // MARK: - Draw bars
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else {
            return
        }
        context.clear(rect)
        context.setFillColor(red: 0, green: 0, blue: 0, alpha: 0)
        context.fill(rect)
        context.setLineWidth(1)
        context.setStrokeColor(self.color)
        let w = rect.size.width
        let h = rect.size.height
        let t = Int(w / self.barWidth)          // number of bars that fit
        let s = max(0, self.waveforms.count - t) // first sample to draw
        let m = h / 2                            // vertical middle
        let r = self.barWidth / 2
        let x = m - r                            // max bar height
        var bar: CGFloat = 0
        for i in s ..< self.waveforms.count {
            var v = h * CGFloat(self.waveforms[i]) / 50.0
            if v > x {
                v = x
            } else if v < 3 {
                v = 3
            }
            let oneX = bar * self.barWidth
            var oneY: CGFloat = 0
            let twoX = oneX + r
            var twoY: CGFloat = 0
            var twoS: CGFloat = 0
            var twoE: CGFloat = 0
            var twoC: Bool = false
            let threeX = twoX + r
            let threeY = m
            if i % 2 == 1 {
                // Odd samples draw upward, with a rounded cap on top
                oneY = m - v
                twoY = m - v
                twoS = -180.degreesToRadians
                twoE = 0.degreesToRadians
                twoC = false
            } else {
                // Even samples draw downward
                oneY = m + v
                twoY = m + v
                twoS = 180.degreesToRadians
                twoE = 0.degreesToRadians
                twoC = true
            }
            context.move(to: CGPoint(x: oneX, y: m))
            context.addLine(to: CGPoint(x: oneX, y: oneY))
            context.addArc(center: CGPoint(x: twoX, y: twoY), radius: r, startAngle: twoS, endAngle: twoE, clockwise: twoC)
            context.addLine(to: CGPoint(x: threeX, y: threeY))
            context.strokePath()
            bar += 1
        }
    }
}
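To wire the view up, a hypothetical usage sketch inside a view controller could look like this. The shift-and-append update policy shown here is my assumption for a minimal example; the article instead rewrites the whole `waveforms` array on each tick:

```swift
import UIKit

final class RecorderViewController: UIViewController {
    let audioView = AudioVisualizerView(frame: CGRect(x: 0, y: 0, width: 320, height: 100))

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(audioView)
    }

    // Called whenever a new level sample arrives (expected range 0...49)
    func push(level: Int) {
        audioView.waveforms.removeFirst()
        audioView.waveforms.append(level)
        audioView.active = level > 5   // treat very quiet frames as inactive
        audioView.setNeedsDisplay()    // triggers draw(_:)
    }
}
```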
For the recording side, I used the installTap(onBus:bufferSize:format:block:) instance method to tap the input node and observe its output.
// Inside the recording view controller (requires import AVFoundation and
// import Accelerate; renderTs, recordingTs, timeLabel, audioView, format()
// and toTimeString are defined elsewhere in the project):
let inputNode = self.audioEngine.inputNode
guard let format = self.format() else {
    return
}
inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { (buffer, time) in
    let level: Float = -50       // silence threshold in dB
    let length: UInt32 = 1024
    buffer.frameLength = length
    let channels = UnsafeBufferPointer(start: buffer.floatChannelData, count: Int(buffer.format.channelCount))
    var value: Float = 0
    // Mean of magnitudes of the first channel
    vDSP_meamgv(channels[0], 1, &value, vDSP_Length(length))
    var average: Float = ((value == 0) ? -100 : 20.0 * log10f(value))
    if average > 0 {
        average = 0
    } else if average < -100 {
        average = -100
    }
    let silent = average < level
    let ts = NSDate().timeIntervalSince1970
    if ts - self.renderTs > 0.1 {   // redraw at most every 0.1 s
        let floats = UnsafeBufferPointer(start: channels[0], count: Int(buffer.frameLength))
        let frame = floats.map { (f) -> Int in
            return Int(f * Float(Int16.max))
        }
        DispatchQueue.main.async {
            let seconds = (ts - self.recordingTs)
            self.timeLabel.text = seconds.toTimeString
            self.renderTs = ts
            let len = self.audioView.waveforms.count
            for i in 0 ..< len {
                let idx = ((frame.count - 1) * i) / len
                let f: Float = sqrt(1.5 * abs(Float(frame[idx])) / Float(Int16.max))
                self.audioView.waveforms[i] = min(49, Int(f * 50))
            }
            self.audioView.active = !silent
            self.audioView.setNeedsDisplay()
        }
    }
}
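The two bits of math inside that callback can be isolated into small pure functions. This is a sketch with names of my own choosing (`decibels(fromRMS:)` and `downsampleIndex`), not part of the original answer:

```swift
import Foundation

// Convert a mean amplitude (0...1) to decibels, clamped to [-100, 0],
// mirroring `20.0 * log10f(value)` plus the clamping in the tap callback.
func decibels(fromRMS value: Float) -> Float {
    guard value > 0 else { return -100 }
    let db = 20.0 * log10f(value)
    return min(0, max(-100, db))
}

// Map bar index i (0..<barCount) onto a sample index in a frame of
// `frameCount` samples, as in `((frame.count - 1) * i) / len`.
func downsampleIndex(frameCount: Int, barIndex: Int, barCount: Int) -> Int {
    return ((frameCount - 1) * barIndex) / barCount
}
```

For example, an amplitude of 0.1 maps to -20 dB, and with 100 bars over a 1024-sample frame, bar 99 reads sample 1012.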
Here is the article I wrote; I hope you'll find what you're looking for: https://medium.com/flawless-app-stories/how-i-created-apples-voice-memos-clone-b6cd6d65f580
The project is also available on GitHub: https://github.com/HassanElDesouky/VoiceMemosClone
Please note that I'm still a beginner, so I'm sorry if my code doesn't look very clean!
【Comments】: