PCM -> AAC (Encoder) -> PCM (Decoder) in real-time with correct optimization
Posted: 2014-03-15 06:59:57
Question: I am trying to implement
AudioRecord (MIC) ->
PCM -> AAC (encoder) ->
AAC -> PCM (decoder)
-> AudioTrack?? (SPEAKER)
using MediaCodec on Android 4.1+ (API 16).
First, I successfully (though I am not sure it is properly optimized) implemented PCM -> AAC encoding via MediaCodec, as follows:
private boolean setEncoder(int rate)
{
    encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
    MediaFormat format = new MediaFormat();
    format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
    format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
    format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024); //AAC-HE 64kbps
    format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    return true;
}
Input: PCM bitrate = 44100 (Hz) x 16 (bit) x 1 (mono) = 705600 bit/s
Output: AAC-HE bitrate = 64 x 1024 (bit) = 65536 bit/s
So the data size is compressed roughly x11, and I confirmed this is working by watching the logs ("data size approximately compressed x11"). So far, so good.
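The x11 figure can be double-checked with simple arithmetic (a quick sketch, not part of the original post):

```java
public class RatioCheck {
    public static void main(String[] args) {
        int pcmBitrate = 44100 * 16 * 1; // Hz x bits x channels = 705600 bit/s
        int aacBitrate = 64 * 1024;      // 65536 bit/s, as configured above
        double ratio = (double) pcmBitrate / aacBitrate;
        System.out.println(ratio);       // ~10.77, i.e. roughly x11
    }
}
```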
Now I have a UDP server that receives the encoded data and then decodes it.
The decoder is configured as follows:
private boolean setDecoder(int rate)
{
    decoder = MediaCodec.createDecoderByType("audio/mp4a-latm");
    MediaFormat format = new MediaFormat();
    format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
    format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
    format.setInteger(MediaFormat.KEY_SAMPLE_RATE, 44100);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024); //AAC-HE 64kbps
    format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
    decoder.configure(format, null, null, 0);
    return true;
}
Since the UDP server's packet buffer size is 1024, and this is compressed AAC data, I expected the decoded size to be roughly 1024 x 11; but the actual result is roughly x8, and something feels wrong.
The decoder code is as follows:
IOudpPlayer = new Thread(new Runnable()
{
    public void run()
    {
        SocketAddress sockAddress;
        String address;
        int len = 1024;
        byte[] buffer2 = new byte[len];
        DatagramPacket packet;
        byte[] data;
        ByteBuffer[] inputBuffers;
        ByteBuffer[] outputBuffers;
        ByteBuffer inputBuffer;
        ByteBuffer outputBuffer;
        MediaCodec.BufferInfo bufferInfo;
        int inputBufferIndex;
        int outputBufferIndex;
        byte[] outData;
        try
        {
            decoder.start();
            isPlaying = true;
            while (isPlaying)
            {
                try
                {
                    packet = new DatagramPacket(buffer2, len);
                    ds.receive(packet);
                    sockAddress = packet.getSocketAddress();
                    address = sockAddress.toString();
                    Log.d("UDP Receiver", " received !!! from " + address);
                    data = new byte[packet.getLength()];
                    System.arraycopy(packet.getData(), packet.getOffset(), data, 0, packet.getLength());
                    Log.d("UDP Receiver", data.length + " bytes received");
                    //===========
                    inputBuffers = decoder.getInputBuffers();
                    outputBuffers = decoder.getOutputBuffers();
                    inputBufferIndex = decoder.dequeueInputBuffer(-1);
                    if (inputBufferIndex >= 0)
                    {
                        inputBuffer = inputBuffers[inputBufferIndex];
                        inputBuffer.clear();
                        inputBuffer.put(data);
                        decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
                    }
                    bufferInfo = new MediaCodec.BufferInfo();
                    outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
                    while (outputBufferIndex >= 0)
                    {
                        outputBuffer = outputBuffers[outputBufferIndex];
                        outputBuffer.position(bufferInfo.offset);
                        outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
                        outData = new byte[bufferInfo.size];
                        outputBuffer.get(outData);
                        Log.d("AudioDecoder", outData.length + " bytes decoded");
                        decoder.releaseOutputBuffer(outputBufferIndex, false);
                        outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
                    }
                    //===========
                }
                catch (IOException e)
                {
                }
            }
            decoder.stop();
        }
        catch (Exception e)
        {
        }
    }
});
Full code:
https://gist.github.com/kenokabe/9029256
It also requires these permissions:
<uses-permission android:name="android.permission.INTERNET"></uses-permission>
<uses-permission android:name="android.permission.RECORD_AUDIO"></uses-permission>
fadden, a member working for Google, told me:
"Looks like you're not setting position and limit on the output buffer."
I have read VP8 Encoding Nexus 5 returns empty/0-Frames, but I am not sure how to implement this correctly.
UPDATE: I roughly see where to modify, regarding "Looks like you're not setting position and limit on the output buffer.", so I added the following 2 lines inside the while loops of both the encoder and the decoder:
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
https://gist.github.com/kenokabe/9029256/revisions
But the result is the same.
Now I think the error:
W/SoftAAC2﹕ AAC decoder returned error 16388, substituting silence.
means this decoder fails completely from the start. It is the "the data is not seekable" issue again (see Seeking in AAC streams on Android). It would be very disappointing if the AAC decoder cannot handle stream data this way and can only work with some header added.
UPDATE 2: The UDP receiver had a bug, so I fixed it:
https://gist.github.com/kenokabe/9029256
Now the error
W/SoftAAC2﹕ AAC decoder returned error 16388, substituting silence.
has disappeared!!
So it shows that the decoder at least works.
However, here is the log of one cycle:
D/AudioRecoder﹕ 4096 bytes read
D/AudioEncoder﹕ 360 bytes encoded
D/UDP Receiver﹕ received !!! from /127.0.0.1:39000
D/UDP Receiver﹕ 360 bytes received
D/AudioDecoder﹕ 8192 bytes decoded
PCM(4096) -> AAC encoded(360) -> UDP-AAC(360) -> (should be) PCM(8192)
The final result is about twice the original PCM size; something is still wrong.
So my questions are:
1. Could you correct and optimize my sample code so that it works properly?
2. Is it right to play the decoded raw PCM data on the fly with the AudioTrack API? If so, could you tell me the proper way? Sample code is appreciated.
Thank you.
PS. My project targets Android 4.1+ (API 16). I have read that things are easier on API 18 (Android 4.3+), but for obvious compatibility reasons I unfortunately have to skip MediaMuxer etc. here...
Comments:
I'm not an expert at this, but I think each packet carries extra header information controlling the data, which makes the raw payload smaller than 1024; 8192 may therefore actually be reasonable. More interesting is what the audio sounds like. Can it be played after compression? Can it be played after decompression? Trying to play it is a good way to find out whether the compress/decompress actually works.
Possibly MediaPlayer needs a header for the audio data container, but I just decode to PCM on the fly and try to play it. Thanks Cliff.
It is basically a voice-chat app. As I mentioned here: MIC -> AudioRecord -> encode from raw PCM to AAC -> UDP -> decode to raw PCM -> speaker.
github.com/Audioboo/audioboo-android/blob/master/src/fm/… is a nice sample app for doing complex things with Android's media API stack. Look at "audiotrack" in the linked class.
Answer 1:
Your network code is combining the data. You got 369 bytes of compressed data, but on the receiving end you end up with 1024 bytes. Those 1024 bytes consist of two complete frames and one partial frame. The two complete frames each decode back to 4096 bytes, for the 8192 bytes total that you are seeing. The remaining partial frame will probably get decoded once you send enough additional data to the decoder, but in general you should only send whole frames to the decoder.
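One common way to keep frame boundaries explicit over UDP is to prepend a 7-byte ADTS header to each encoded AAC frame before sending, so the receiver can split concatenated data back into whole frames. A sketch under assumptions: the class and method names are mine, and the profile/frequency/channel values must match your encoder configuration (here AAC LC, 44100 Hz, mono are used purely as examples):

```java
// Builds a 7-byte ADTS header (MPEG-4, no CRC) for one raw AAC frame.
// profile: AAC object type (2 = AAC LC); freqIdx: sampling-frequency index
// (4 = 44100 Hz); chanCfg: channel configuration (1 = mono).
public class Adts {
    public static byte[] header(int frameBytes, int profile, int freqIdx, int chanCfg) {
        int len = frameBytes + 7; // ADTS frame length includes the header itself
        byte[] h = new byte[7];
        h[0] = (byte) 0xFF;                                            // syncword 0xFFF
        h[1] = (byte) 0xF1;                                            // MPEG-4, layer 0, no CRC
        h[2] = (byte) (((profile - 1) << 6) | (freqIdx << 2) | (chanCfg >> 2));
        h[3] = (byte) (((chanCfg & 3) << 6) | (len >> 11));
        h[4] = (byte) ((len & 0x7FF) >> 3);
        h[5] = (byte) (((len & 7) << 5) | 0x1F);                       // buffer fullness (all ones)
        h[6] = (byte) 0xFC;
        return h;
    }
}
```

The sender would then put header + frame into one datagram; the receiver can scan for the 0xFFF syncword and read the frame-length field to recover whole-frame boundaries.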
Additionally, MediaCodec.dequeueOutputBuffer() does not only return (positive) buffer indices; it can also return (negative) status codes. One of the possible codes is MediaCodec.INFO_OUTPUT_FORMAT_CHANGED, which indicates that you need to call MediaCodec.getOutputFormat() to get the format of the audio data. The codec may output stereo even if the input is mono. The code you posted simply drops out of the loop when it receives one of these status codes.
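The dispatch this implies can be sketched without an Android device by mirroring MediaCodec's documented INFO_* constants (the values below match the Android API; the handle() helper and its return strings are illustrative, not the poster's code):

```java
// Mirrors android.media.MediaCodec's documented INFO_* return codes so the
// dequeue-loop dispatch logic can be shown (and tested) off-device.
public class DequeueStatus {
    static final int INFO_TRY_AGAIN_LATER       = -1; // no output yet; retry later
    static final int INFO_OUTPUT_FORMAT_CHANGED  = -2; // call getOutputFormat()
    static final int INFO_OUTPUT_BUFFERS_CHANGED = -3; // re-fetch getOutputBuffers()

    public static String handle(int index) {
        if (index >= 0) return "drain-buffer"; // normal case: a real output buffer
        switch (index) {
            case INFO_OUTPUT_FORMAT_CHANGED:
                return "read-new-format";      // e.g. channel count may now be stereo
            case INFO_OUTPUT_BUFFERS_CHANGED:
                return "refresh-buffers";
            case INFO_TRY_AGAIN_LATER:
            default:
                return "try-again";
        }
    }
}
```

In the real loop, "read-new-format" means calling decoder.getOutputFormat() and reconfiguring the AudioTrack if the channel count or sample rate changed, instead of breaking out of the while loop.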
Comments:
Answer 2: Self-answer; this is my best effort so far:
package com.example.app;
import android.app.Activity;
import android.media.AudioManager;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.os.Bundle;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaCodec;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketAddress;
import java.net.SocketException;
import java.nio.ByteBuffer;
public class MainActivity extends Activity
private AudioRecord recorder;
private AudioTrack player;
private MediaCodec encoder;
private MediaCodec decoder;
private short audioFormat = AudioFormat.ENCODING_PCM_16BIT;
private short channelConfig = AudioFormat.CHANNEL_IN_MONO;
private int bufferSize;
private boolean isRecording;
private boolean isPlaying;
private Thread IOrecorder;
private Thread IOudpPlayer;
private DatagramSocket ds;
private final int localPort = 39000;
@Override
protected void onCreate(Bundle savedInstanceState)
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
IOrecorder = new Thread(new Runnable()
public void run()
int read;
byte[] buffer1 = new byte[bufferSize];
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
DatagramPacket packet;
try
encoder.start();
recorder.startRecording();
isRecording = true;
while (isRecording)
read = recorder.read(buffer1, 0, bufferSize);
// Log.d("AudioRecoder", read + " bytes read");
//------------------------
inputBuffers = encoder.getInputBuffers();
outputBuffers = encoder.getOutputBuffers();
inputBufferIndex = encoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(buffer1);
encoder.queueInputBuffer(inputBufferIndex, 0, buffer1.length, 0, 0);
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = encoder.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0)
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
// Log.d("AudioEncoder", outData.length + " bytes encoded");
//-------------
packet = new DatagramPacket(outData, outData.length,
InetAddress.getByName("127.0.0.1"), localPort);
ds.send(packet);
//------------
encoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = encoder.dequeueOutputBuffer(bufferInfo, 0);
// ----------------------;
encoder.stop();
recorder.stop();
catch (Exception e)
e.printStackTrace();
);
IOudpPlayer = new Thread(new Runnable()
public void run()
SocketAddress sockAddress;
String address;
int len = 1024;
byte[] buffer2 = new byte[len];
DatagramPacket packet;
byte[] data;
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
try
player.play();
decoder.start();
isPlaying = true;
while (isPlaying)
try
packet = new DatagramPacket(buffer2, len);
ds.receive(packet);
sockAddress = packet.getSocketAddress();
address = sockAddress.toString();
// Log.d("UDP Receiver"," received !!! from " + address);
data = new byte[packet.getLength()];
System.arraycopy(packet.getData(), packet.getOffset(), data, 0, packet.getLength());
// Log.d("UDP Receiver", data.length + " bytes received");
//===========
inputBuffers = decoder.getInputBuffers();
outputBuffers = decoder.getOutputBuffers();
inputBufferIndex = decoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(data);
decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0)
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
// Log.d("AudioDecoder", outData.length + " bytes decoded");
player.write(outData, 0, outData.length);
decoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
//===========
catch (IOException e)
decoder.stop();
player.stop();
catch (Exception e)
);
//===========================================================
int rate = findAudioRecord();
if (rate != -1)
Log.v("=========media ", "ready: " + rate);
Log.v("=========media channel ", "ready: " + channelConfig);
boolean encoderReady = setEncoder(rate);
Log.v("=========encoder ", "ready: " + encoderReady);
if (encoderReady)
boolean decoderReady = setDecoder(rate);
Log.v("=========decoder ", "ready: " + decoderReady);
if (decoderReady)
Log.d("=======bufferSize========", "" + bufferSize);
try
setPlayer(rate);
ds = new DatagramSocket(localPort);
IOudpPlayer.start();
IOrecorder.start();
catch (SocketException e)
e.printStackTrace();
protected void onDestroy()
recorder.release();
player.release();
encoder.release();
decoder.release();
/*
protected void onResume()
// isRecording = true;
protected void onPause()
isRecording = false;
*/
private int findAudioRecord()
for (int rate : new int[]{44100})
try
Log.v("===========Attempting rate ", rate + "Hz, bits: " + audioFormat + ", channel: " + channelConfig);
bufferSize = AudioRecord.getMinBufferSize(rate, channelConfig, audioFormat);
if (bufferSize != AudioRecord.ERROR_BAD_VALUE)
// check if we can instantiate and have a success
recorder = new AudioRecord(AudioSource.MIC, rate, channelConfig, audioFormat, bufferSize);
if (recorder.getState() == AudioRecord.STATE_INITIALIZED)
Log.v("===========final rate ", rate + "Hz, bits: " + audioFormat + ", channel: " + channelConfig);
return rate;
catch (Exception e)
Log.v("error", "" + rate);
return -1;
private boolean setEncoder(int rate)
encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, rate);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024);//AAC-HE 64kbps
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
return true;
private boolean setDecoder(int rate)
decoder = MediaCodec.createDecoderByType("audio/mp4a-latm");
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, "audio/mp4a-latm");
format.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
format.setInteger(MediaFormat.KEY_SAMPLE_RATE, rate);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64 * 1024);//AAC-HE 64kbps
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectHE);
decoder.configure(format, null, null, 0);
return true;
private boolean setPlayer(int rate)
int bufferSizePlayer = AudioTrack.getMinBufferSize(rate, AudioFormat.CHANNEL_OUT_MONO, audioFormat);
Log.d("====buffer Size player ", String.valueOf(bufferSizePlayer));
player= new AudioTrack(AudioManager.STREAM_MUSIC, rate, AudioFormat.CHANNEL_OUT_MONO, audioFormat, bufferSizePlayer, AudioTrack.MODE_STREAM);
if (player.getState() == AudioTrack.STATE_INITIALIZED)
return true;
else
return false;
Comments:
Answer 3: After testing, this is what I came up with by modifying your code:
package com.example.app;
import android.app.Activity;
import android.media.AudioManager;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.os.Bundle;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaCodec;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketAddress;
import java.net.SocketException;
import java.nio.ByteBuffer;
public class MainActivity extends Activity
private AudioRecord recorder;
private AudioTrack player;
private MediaCodec encoder;
private MediaCodec decoder;
private short audioFormat = AudioFormat.ENCODING_PCM_16BIT;
private short channelConfig = AudioFormat.CHANNEL_IN_MONO;
private int bufferSize;
private boolean isRecording;
private boolean isPlaying;
private Thread IOrecorder;
private Thread IOudpPlayer;
private DatagramSocket ds;
private final int localPort = 39000;
@Override
protected void onCreate(Bundle savedInstanceState)
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
IOrecorder = new Thread(new Runnable()
public void run()
int read;
byte[] buffer1 = new byte[bufferSize];
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
DatagramPacket packet;
try
encoder.start();
recorder.startRecording();
isRecording = true;
while (isRecording)
read = recorder.read(buffer1, 0, bufferSize);
// Log.d("AudioRecoder", read + " bytes read");
//------------------------
inputBuffers = encoder.getInputBuffers();
outputBuffers = encoder.getOutputBuffers();
inputBufferIndex = encoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(buffer1);
encoder.queueInputBuffer(inputBufferIndex, 0, buffer1.length, 0, 0);
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = encoder.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0)
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
// Log.d("AudioEncoder ", outData.length + " bytes encoded");
//-------------
packet = new DatagramPacket(outData, outData.length,
InetAddress.getByName("127.0.0.1"), localPort);
ds.send(packet);
//------------
encoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = encoder.dequeueOutputBuffer(bufferInfo, 0);
// ----------------------;
encoder.stop();
recorder.stop();
catch (Exception e)
e.printStackTrace();
);
IOudpPlayer = new Thread(new Runnable()
public void run()
SocketAddress sockAddress;
String address;
int len = 2048;
byte[] buffer2 = new byte[len];
DatagramPacket packet;
byte[] data;
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
try
player.play();
decoder.start();
isPlaying = true;
while (isPlaying)
try
packet = new DatagramPacket(buffer2, len);
ds.receive(packet);
sockAddress = packet.getSocketAddress();
address = sockAddress.toString();
// Log.d("UDP Receiver"," received !!! from " + address);
data = new byte[packet.getLength()];
System.arraycopy(packet.getData(), packet.getOffset(), data, 0, packet.getLength());
// Log.d("UDP Receiver", data.length + " bytes received");
//===========
inputBuffers = decoder.getInputBuffers();
outputBuffers = decoder.getOutputBuffers();
inputBufferIndex = decoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0)
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(data);
decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
while (outputBufferIndex >= 0)
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
// Log.d("AudioDecoder", outData.length + " bytes decoded");
player.write(outData, 0, outData.length);
decoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0..
Comments:
Thanks. Could you add notes explaining where and how you modified it?
The source in the answer appears to be truncated. Could you elaborate on what needs to be changed, and where?
The code above still does not respect the return value of MediaCodec.dequeueOutputBuffer. It should check whether it returned any of the special status codes and react accordingly (e.g. fetch new buffers, or change the format). Without this, the code is bound to fail on some devices.
It throws java.lang.IllegalArgumentException at encoder.queueInputBuffer(inputBufferIndex, 0, buffer1.length, 0, 0);. What is the solution?
Answer 4:
I have tested with your source. A few points:
1. The bitrate is a natural-number k, not a computer K: 64k = 64000, not 64 * 1024.
2. Long code that shares variables between threads is not recommended. A. Split the encoder thread and the decoder thread into 2 independent classes. B. The DatagramSocket is shared by the sender and the receiver; that is not good.
3. Enumerating audio formats needs more values, i.e. the sample rate should be chosen from: 8000, 11025, 22050, 44100.
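Point 3 can be made concrete with a small helper that snaps a requested rate to one of the rates listed above (the helper name and logic are mine; the rate list is the one suggested in this answer):

```java
public class SampleRates {
    // Rates suggested in the answer above; many devices support more.
    static final int[] SUPPORTED = {8000, 11025, 22050, 44100};

    // Returns the supported rate closest to the requested one.
    public static int nearest(int requested) {
        int best = SUPPORTED[0];
        for (int r : SUPPORTED) {
            if (Math.abs(r - requested) < Math.abs(best - requested)) best = r;
        }
        return best;
    }
}
```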
Comments:
Answer 5: I tried the code above, but it does not work properly. I got a lot of silence injected into the decoded output. The problem was that the correct "csd" values were not set for the decoder.
So if you see "silence" in the logs, or the decoder throws an error, make sure you add the following to your media decoder format:
int profile = 2; //AAC LC
int freqIdx = 11; //8KHz
int chanCfg = 1; //Mono
ByteBuffer csd = ByteBuffer.allocate(2);
csd.put(0, (byte) (profile << 3 | freqIdx >> 1));
csd.put(1, (byte)((freqIdx & 0x01) << 7 | chanCfg << 3));
mediaFormat.setByteBuffer("csd-0", csd);
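The freqIdx used above comes from the standard MPEG-4 AAC sampling-frequency-index table, which can be expressed as a lookup (the table values are standard; the helper class is illustrative):

```java
public class AacFreqIdx {
    // MPEG-4 AAC sampling frequency index table (indices 0..11).
    static final int[] RATES = {
        96000, 88200, 64000, 48000, 44100, 32000,
        24000, 22050, 16000, 12000, 11025, 8000
    };

    // Returns the frequency index for a sample rate, or -1 if non-standard.
    public static int forRate(int sampleRate) {
        for (int i = 0; i < RATES.length; i++) {
            if (RATES[i] == sampleRate) return i;
        }
        return -1;
    }
}
```

This matches the answer's values: freqIdx 11 is 8000 Hz; a 44100 Hz stream would use freqIdx 4 instead.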
Comments:
I tried yours. The decoded AAC data works with AudioTrack.write(byte[] audioData, int offsetInBytes, int sizeInBytes). But the audio skips its last second, and when AudioTrack.write() is called again, the last second of audio from the previous run is played, which is strange.
Answer 6:
D/AudioRecoder﹕ 4096 bytes read
D/AudioEncoder﹕ 360 bytes encoded
D/UDP Receiver﹕ received !!! from /127.0.0.1:39000
D/UDP Receiver﹕ 360 bytes received
D/AudioDecoder﹕ 8192 bytes decoded
This is because the AAC decoder always decodes to stereo channels, even if the encoded data is mono. So if your encoding side had been set to stereo channels, it would look like:
D/AudioRecoder﹕ 8192 bytes read
D/AudioEncoder﹕ 360 bytes encoded
D/UDP Receiver﹕ received !!! from /127.0.0.1:39000
D/UDP Receiver﹕ 360 bytes received
D/AudioDecoder﹕ 8192 bytes decoded
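The 8192-byte output figure is consistent with this explanation under two assumptions: the decoder emits stereo regardless of the mono input (as this answer says), and an HE-AAC (SBR) frame decodes to 2048 samples rather than the 1024 of plain AAC:

```java
public class DecodedFrameSize {
    public static void main(String[] args) {
        int samplesPerFrame = 2048; // HE-AAC (SBR) doubles the 1024-sample AAC frame
        int bytesPerSample = 2;     // 16-bit PCM
        int channels = 2;           // decoder emits stereo even for mono input
        System.out.println(samplesPerFrame * bytesPerSample * channels); // 8192
    }
}
```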
Comments:
AFAIK, a mono wav has different metadata than a stereo wav.