I'm trying to stream audio from the microphone to another iPhone via Apple's Multipeer Connectivity framework. For audio capture and playback I'm using AVAudioEngine.

I receive data from the microphone by installing a tap on the input node, which gives me an AVAudioPCMBuffer that I then convert to a [UInt8] array and stream to the other phone.

But when I convert the array back to an AVAudioPCMBuffer, I get an EXC_BAD_ACCESS exception, with the crash pointing to the method that converts the byte array back into an AVAudioPCMBuffer.

Here is the code for taking, converting and streaming the input:
input.installTap(onBus: 0, bufferSize: 2048, format: input.inputFormat(forBus: 0), block: { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
    let audioBuffer = self.typetobinary(buffer)
    stream.write(audioBuffer, maxLength: audioBuffer.count)
})
My two functions for converting the data:
func binarytotype<T>(_ value: [UInt8], _: T.Type) -> T {
    return value.withUnsafeBufferPointer {
        UnsafeRawPointer($0.baseAddress!).load(as: T.self)
    }
}

func typetobinary<T>(_ value: T) -> [UInt8] {
    var data = [UInt8](repeating: 0, count: MemoryLayout<T>.size)
    data.withUnsafeMutableBufferPointer {
        UnsafeMutableRawPointer($0.baseAddress!).storeBytes(of: value, as: T.self)
    }
    return data
}
And on the receiving end:
func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {
    if streamName == "voice" {
        stream.schedule(in: RunLoop.current, forMode: .defaultRunLoopMode)
        stream.open()

        var bytes = [UInt8](repeating: 0, count: 8)
        stream.read(&bytes, maxLength: bytes.count)

        let audioBuffer = self.binarytotype(bytes, AVAudioPCMBuffer.self) // Here is where the app crashes

        do {
            try engine.start()
            audioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)
            audioPlayer.play()
        } catch let error {
            print(error.localizedDescription)
        }
    }
}
The thing is that I can convert the byte array back and forth and play sound from it on the same phone before streaming it, but I can't create the AVAudioPCMBuffer on the receiving end. Does anyone know why the conversion doesn't work there? Is this even the right approach?

Any help, thoughts or input on this would be much appreciated.
Your AVAudioPCMBuffer serialization/deserialization is wrong.
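The root cause: AVAudioPCMBuffer is a class, so MemoryLayout<AVAudioPCMBuffer>.size is only the size of the object reference, not of the audio data (which is also why your receiving side ends up reading exactly 8 bytes). A minimal sketch of what typetobinary actually captures:

import AVFoundation

// typetobinary copies MemoryLayout<T>.size bytes of the value itself.
// For a class like AVAudioPCMBuffer, that value is just the reference:
print(MemoryLayout<AVAudioPCMBuffer>.size) // 8 on 64-bit devices

// The 8 bytes that arrive on the other phone are therefore a pointer into
// the sender's address space. Loading them there with
// binarytotype(bytes, AVAudioPCMBuffer.self) dereferences garbage, hence
// the EXC_BAD_ACCESS. On a single phone the pointer is still valid, which
// is why the local round trip appears to work.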
Casting has changed a lot in Swift 3, and it seems to require more copying than Swift 2 did.

You can convert between [UInt8] and AVAudioPCMBuffer like this:
Note: this code assumes mono float data at 44.1 kHz. You may want to change that.
func copyAudioBufferBytes(_ audioBuffer: AVAudioPCMBuffer) -> [UInt8] {
    let srcLeft = audioBuffer.floatChannelData![0]
    let bytesPerFrame = audioBuffer.format.streamDescription.pointee.mBytesPerFrame
    let numBytes = Int(bytesPerFrame * audioBuffer.frameLength)

    // initialize bytes to 0 (how to avoid?)
    var audioByteArray = [UInt8](repeating: 0, count: numBytes)

    // copy data from buffer
    srcLeft.withMemoryRebound(to: UInt8.self, capacity: numBytes) { srcByteData in
        audioByteArray.withUnsafeMutableBufferPointer {
            $0.baseAddress!.initialize(from: srcByteData, count: numBytes)
        }
    }

    return audioByteArray
}

func bytesToAudioBuffer(_ buf: [UInt8]) -> AVAudioPCMBuffer {
    // format assumption! make this part of your protocol?
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
    let frameLength = UInt32(buf.count) / fmt.streamDescription.pointee.mBytesPerFrame

    let audioBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: frameLength)
    audioBuffer.frameLength = frameLength

    let dstLeft = audioBuffer.floatChannelData![0]
    // for stereo
    // let dstRight = audioBuffer.floatChannelData![1]

    buf.withUnsafeBufferPointer {
        let src = UnsafeRawPointer($0.baseAddress!).bindMemory(to: Float.self, capacity: Int(frameLength))
        dstLeft.initialize(from: src, count: Int(frameLength))
    }

    return audioBuffer
}
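For illustration, here is a sketch of how these helpers might replace typetobinary/binarytotype in your code. It reuses your input, stream, engine and audioPlayer names, assumes the tap delivers the same 44.1 kHz mono float format that bytesToAudioBuffer expects, and glosses over framing (real code should accumulate bytes until a whole number of frames is available):

// Sending side: serialize the actual sample data instead of the object reference.
input.installTap(onBus: 0, bufferSize: 2048, format: input.inputFormat(forBus: 0)) { buffer, time in
    let audioBytes = self.copyAudioBufferBytes(buffer)
    stream.write(audioBytes, maxLength: audioBytes.count)
}

// Receiving side: read a chunk and rebuild a PCM buffer from it.
var bytes = [UInt8](repeating: 0, count: 8192)
let bytesRead = stream.read(&bytes, maxLength: bytes.count)
if bytesRead > 0 {
    // bytesRead should be a multiple of mBytesPerFrame; any trailing
    // partial frame is dropped by the integer division in bytesToAudioBuffer.
    let audioBuffer = self.bytesToAudioBuffer(Array(bytes[0..<bytesRead]))
    do {
        try engine.start()
        audioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)
        audioPlayer.play()
    } catch {
        print(error.localizedDescription)
    }
}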