Playing multiple byte arrays simultaneously in Java

Posted: 2014-10-08 20:12:05

Question:

How can I play multiple (audio) byte arrays simultaneously? Each byte array is recorded with a TargetDataLine and transmitted via a server.

What I have tried so far

Using a SourceDataLine:

There is no way to play multiple streams with a SourceDataLine, because the write method blocks until the buffer has been written. This cannot be worked around with threads, because only one SourceDataLine can write concurrently.

Using the AudioPlayer class:

ByteInputStream stream2 = new ByteInputStream(data, 0, data.length);
AudioInputStream stream = new AudioInputStream(stream2, VoiceChat.format, data.length);
AudioPlayer.player.start(stream);

This just produces noise on the client.

Edit: I don't receive the voice packets at exactly the same time; they are not simultaneous, but rather "overlapping".

Comments:

I guess you need to use a Mixer to mix the two (or more) lines together.
Yes, that would be perfect, but how? I haven't found a tutorial for it...

Answer 1:

Apparently Java's Mixer interface was not designed for this.

http://docs.oracle.com/javase/7/docs/api/javax/sound/sampled/Mixer.html:

A mixer is an audio device with one or more lines. It need not be designed for mixing audio signals.

Indeed, when I try to open multiple lines on the same mixer, this fails with a LineUnavailableException. However, if all your recordings have the same audio format, it's quite easy to mix them together manually. For example, if you have 2 inputs:

1. Convert both to the appropriate data type (e.g. byte[] for 8-bit audio, short[] for 16-bit, float[] for 32-bit floating point, etc.)
2. Sum them into another array. Make sure the summed values never exceed the range of the data type.
3. Convert the output back to bytes and write it to the SourceDataLine.

See also How is audio represented with numbers?
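The three steps above can be sketched as follows for 16-bit little-endian PCM. This is a minimal illustration, not code from the answer; the helper class name is my own, and I clamp the sum to the short range (rather than scaling) so the output can never exceed the data type:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MixSketch {

    // Mix two equally sized buffers of 16-bit little-endian PCM into one,
    // clamping the sum so it never exceeds the range of a short.
    public static byte[] mix(byte[] a, byte[] b) {
        short[] sa = toShorts(a);
        short[] sb = toShorts(b);
        short[] sum = new short[sa.length];
        for (int i = 0; i < sum.length; i++) {
            int s = sa[i] + sb[i];
            // clamp instead of letting the addition wrap around
            if (s > Short.MAX_VALUE) s = Short.MAX_VALUE;
            if (s < Short.MIN_VALUE) s = Short.MIN_VALUE;
            sum[i] = (short) s;
        }
        byte[] out = new byte[a.length];
        ByteBuffer.wrap(out).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(sum);
        return out;
    }

    // bulk get() into a pre-allocated short[] converts the raw bytes to samples
    private static short[] toShorts(byte[] bytes) {
        short[] samples = new short[bytes.length / 2];
        ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);
        return samples;
    }
}
```

The byte[] returned by mix can be written straight to a SourceDataLine opened with the same format.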

Here's a sample that mixes 2 recordings and outputs them as 1 signal, all in 16-bit 48kHz stereo.

    // print all devices (both input and output)
    int i = 0;
    Mixer.Info[] infos = AudioSystem.getMixerInfo();
    for (Mixer.Info info : infos)
        System.out.println(i++ + ": " + info.getName());

    // select 2 inputs and 1 output
    System.out.println("Select input 1: ");
    int in1Index = Integer.parseInt(System.console().readLine());
    System.out.println("Select input 2: ");
    int in2Index = Integer.parseInt(System.console().readLine());
    System.out.println("Select output: ");
    int outIndex = Integer.parseInt(System.console().readLine());

    // ugly java sound api stuff
    try (Mixer in1Mixer = AudioSystem.getMixer(infos[in1Index]);
            Mixer in2Mixer = AudioSystem.getMixer(infos[in2Index]);
            Mixer outMixer = AudioSystem.getMixer(infos[outIndex])) {
        in1Mixer.open();
        in2Mixer.open();
        outMixer.open();
        try (TargetDataLine in1Line = (TargetDataLine) in1Mixer.getLine(in1Mixer.getTargetLineInfo()[0]);
                TargetDataLine in2Line = (TargetDataLine) in2Mixer.getLine(in2Mixer.getTargetLineInfo()[0]);
                SourceDataLine outLine = (SourceDataLine) outMixer.getLine(outMixer.getSourceLineInfo()[0])) {

            // audio format 48kHz 16 bit stereo (signed little endian)
            AudioFormat format = new AudioFormat(48000.0f, 16, 2, true, false);

            // 4 bytes per frame (16 bit samples stereo)
            int frameSize = 4;
            int bufferSize = 4800;
            int bufferBytes = frameSize * bufferSize;

            // buffers for java audio
            byte[] in1Bytes = new byte[bufferBytes];
            byte[] in2Bytes = new byte[bufferBytes];
            byte[] outBytes = new byte[bufferBytes];

            // buffers for mixing
            short[] in1Samples = new short[bufferBytes / 2];
            short[] in2Samples = new short[bufferBytes / 2];
            short[] outSamples = new short[bufferBytes / 2];

            // how long to record & play
            int framesProcessed = 0;
            int durationSeconds = 10;
            int durationFrames = (int) (durationSeconds * format.getSampleRate());

            // open devices
            in1Line.open(format, bufferBytes);
            in2Line.open(format, bufferBytes);
            outLine.open(format, bufferBytes);
            in1Line.start();
            in2Line.start();
            outLine.start();

            // start audio loop
            while (framesProcessed < durationFrames) {

                // record audio
                in1Line.read(in1Bytes, 0, bufferBytes);
                in2Line.read(in2Bytes, 0, bufferBytes);

                // convert input bytes to samples
                ByteBuffer.wrap(in1Bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(in1Samples);
                ByteBuffer.wrap(in2Bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(in2Samples);

                // mix samples - lower volume by 50% since we're mixing 2 streams
                for (int s = 0; s < bufferBytes / 2; s++)
                    outSamples[s] = (short) ((in1Samples[s] + in2Samples[s]) * 0.5);

                // convert output samples to bytes
                ByteBuffer.wrap(outBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(outSamples);

                // play audio
                outLine.write(outBytes, 0, bufferBytes);

                framesProcessed += bufferBytes / frameSize;
            }

            in1Line.stop();
            in2Line.stop();
            outLine.stop();
        }
    }

Comments:

"Make sure to divide the sample values by the number of sources" - usually you'd just use a constant (e.g. 1/3). That's because audio waves tend to cancel each other out, and dividing the signal makes it too quiet.
Good catch. Replaced with "the summed values must not exceed the range of the data type". See dsp.stackexchange.com/questions/3581/… for more discussion.
Thanks, but I don't know if I can use something like that, since the client receives the "voice" packets of every player at the same time :(. I think that makes the problem more complicated, doesn't it?
More complicated yes, but you can still use the same principle. Think of it as one everlasting (infinite) audio stream that mostly contains silence, but into which you occasionally mix additional sounds as they come in. I'll try to write a sample for you later.
Thank you so much :D, you're awesome :D

Answer 2:

OK, I put something together that should get you started. I'll post the full code below, but first I'll try to explain the steps involved.

The interesting part here is creating your own audio "mixer" class, which allows consumers of that class to schedule audio blocks at specific points in the (near) future. The specific-point-in-time part is important here: I'm assuming you receive network voices in packets, where each packet needs to start exactly at the end of the previous one in order to play back a continuous sound for a single voice. Also, since you say voices can overlap, I'm assuming (yes, lots of assumptions) a new one can come in over the network while one or more old ones are still playing. So it seems reasonable to allow audio blocks to be scheduled from any thread. Note that only one thread actually writes to the data line; it's just that any thread can submit audio packets to the mixer.

So for the submit-audio-packet part, we now have this:

private final ConcurrentLinkedQueue<QueuedBlock> scheduledBlocks;

public void mix(long when, short[] block) {
    scheduledBlocks.add(new QueuedBlock(when, Arrays.copyOf(block, block.length)));
}

The QueuedBlock class is just used to tag an audio buffer with "when": the point in time at which the block should be played.

Points in time are expressed relative to the current position of the audio stream. The position is set to zero when the stream is created and incremented by the buffer size each time an audio buffer is written to the data line:

private final AtomicLong position = new AtomicLong();

public long position() {
    return position.get();
}

Apart from all the hassle of setting up the data line, the interesting part of the mixer class is obviously where the mixdown happens. For each scheduled audio block, there are 3 cases:

1. The block has already been played in its entirety: remove it from the scheduledBlocks list.
2. The block is scheduled to start at some point after the current buffer: do nothing.
3. (Part of) the block should be mixed into the current buffer. Note that the beginning of the block may (or may not) have already been played in a previous buffer. Similarly, the end of the scheduled block may extend past the end of the current buffer, in which case we mix in its first part and leave the rest for the next round, until all of it has been played and the whole block is removed.

Also note that there's no reliable way to start playing audio data immediately; when you submit packets to the mixer, make sure to always have them start at least the duration of 1 audio buffer from now, otherwise you risk losing the beginning of your sound. Here's the mixdown code:

    private static final double MIXDOWN_VOLUME = 1.0 / NUM_PRODUCERS;

    private final List<QueuedBlock> finished = new ArrayList<>();
    private final short[] mixBuffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];
    private final byte[] audioBuffer = new byte[BUFFER_SIZE_FRAMES * CHANNELS * 2];
    private final AtomicLong position = new AtomicLong();

    Arrays.fill(mixBuffer, (short) 0);
    long bufferStartAt = position.get();
    for (QueuedBlock block : scheduledBlocks) {
        int blockFrames = block.data.length / CHANNELS;

        // block fully played - mark for deletion
        if (block.when + blockFrames <= bufferStartAt) {
            finished.add(block);
            continue;
        }

        // block starts after end of current buffer
        if (bufferStartAt + BUFFER_SIZE_FRAMES <= block.when)
            continue;

        // mix in part of the block which overlaps current buffer
        int blockOffset = Math.max(0, (int) (bufferStartAt - block.when));
        int blockMaxFrames = blockFrames - blockOffset;
        int bufferOffset = Math.max(0, (int) (block.when - bufferStartAt));
        int bufferMaxFrames = BUFFER_SIZE_FRAMES - bufferOffset;
        for (int f = 0; f < blockMaxFrames && f < bufferMaxFrames; f++)
            for (int c = 0; c < CHANNELS; c++) {
                int bufferIndex = (bufferOffset + f) * CHANNELS + c;
                int blockIndex = (blockOffset + f) * CHANNELS + c;
                mixBuffer[bufferIndex] += (short)
                    (block.data[blockIndex] * MIXDOWN_VOLUME);
            }
    }

    scheduledBlocks.removeAll(finished);
    finished.clear();
    ByteBuffer
        .wrap(audioBuffer)
        .order(ByteOrder.LITTLE_ENDIAN)
        .asShortBuffer()
        .put(mixBuffer);
    line.write(audioBuffer, 0, audioBuffer.length);
    position.addAndGet(BUFFER_SIZE_FRAMES);

And finally, a complete self-contained sample that spawns a number of threads, each submitting audio blocks representing sine waves of random duration and frequency to the mixer (called AudioConsumer in this sample). Replace the sine waves with incoming network packets and you should be well on your way.

package test;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Line;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.SourceDataLine;

public class Test {

public static final int CHANNELS = 2;
public static final int SAMPLE_RATE = 48000;
public static final int NUM_PRODUCERS = 10;
public static final int BUFFER_SIZE_FRAMES = 4800;

// generates some random sine wave
public static class ToneGenerator {

    private static final double[] NOTES = {261.63, 311.13, 392.00};
    private static final double[] OCTAVES = {1.0, 2.0, 4.0, 8.0};
    private static final double[] LENGTHS = {0.05, 0.25, 1.0, 2.5, 5.0};

    private double phase;
    private int framesProcessed;
    private final double length;
    private final double frequency;

    public ToneGenerator() {
        ThreadLocalRandom rand = ThreadLocalRandom.current();
        length = LENGTHS[rand.nextInt(LENGTHS.length)];
        frequency = NOTES[rand.nextInt(NOTES.length)] * OCTAVES[rand.nextInt(OCTAVES.length)];
    }

    // make sound
    public void fill(short[] block) {
        for (int f = 0; f < block.length / CHANNELS; f++) {
            double sample = Math.sin(phase * 2.0 * Math.PI);
            for (int c = 0; c < CHANNELS; c++)
                block[f * CHANNELS + c] = (short) (sample * Short.MAX_VALUE);
            phase += frequency / SAMPLE_RATE;
        }
        framesProcessed += block.length / CHANNELS;
    }

    // true if length of tone has been generated
    public boolean done() {
        return framesProcessed >= length * SAMPLE_RATE;
    }
}

// dummy audio producer, based on sinewave generator
// above but could also be incoming network packets
public static class AudioProducer {

    final Thread thread;
    final AudioConsumer consumer;
    final short[] buffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];

    public AudioProducer(AudioConsumer consumer) {
        this.consumer = consumer;
        thread = new Thread(() -> run());
        thread.setDaemon(true);
    }

    public void start() {
        thread.start();
    }

    // repeatedly play random sine and sleep for some time
    void run() {
        try {
            ThreadLocalRandom rand = ThreadLocalRandom.current();
            while (true) {
                long pos = consumer.position();
                ToneGenerator g = new ToneGenerator();

                // if we schedule at current buffer position, first part of the tone will be
                // missed so have tone start somewhere in the middle of the next buffer
                pos += BUFFER_SIZE_FRAMES + rand.nextInt(BUFFER_SIZE_FRAMES);
                while (!g.done()) {
                    g.fill(buffer);
                    consumer.mix(pos, buffer);
                    pos += BUFFER_SIZE_FRAMES;

                    // we can generate audio faster than it's played
                    // sleep a while to compensate - this more closely
                    // corresponds to playing audio coming in over the network
                    double bufferLengthMillis = BUFFER_SIZE_FRAMES * 1000.0 / SAMPLE_RATE;
                    Thread.sleep((int) (bufferLengthMillis * 0.9));
                }

                // sleep a while in between tones
                Thread.sleep(1000 + rand.nextInt(2000));
            }
        } catch (Throwable t) {
            System.out.println(t.getMessage());
            t.printStackTrace();
        }
    }
}

// audio consumer - plays continuously on a background
// thread, allows audio to be mixed in from arbitrary threads
public static class AudioConsumer {

    // audio block with "when to play" tag
    private static class QueuedBlock {

        final long when;
        final short[] data;

        public QueuedBlock(long when, short[] data) {
            this.when = when;
            this.data = data;
        }
    }

    // need not normally be so low but in this example
    // we're mixing down a bunch of full scale sinewaves
    private static final double MIXDOWN_VOLUME = 1.0 / NUM_PRODUCERS;

    private final List<QueuedBlock> finished = new ArrayList<>();
    private final short[] mixBuffer = new short[BUFFER_SIZE_FRAMES * CHANNELS];
    private final byte[] audioBuffer = new byte[BUFFER_SIZE_FRAMES * CHANNELS * 2];

    private final Thread thread;
    private final AtomicLong position = new AtomicLong();
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final ConcurrentLinkedQueue<QueuedBlock> scheduledBlocks = new ConcurrentLinkedQueue<>();


    public AudioConsumer() {
        thread = new Thread(() -> run());
    }

    public void start() {
        thread.start();
    }

    public void stop() {
        running.set(false);
    }

    // gets the play cursor. note - this is not accurate and
    // must only be used to schedule blocks relative to other blocks
    // (e.g., for splitting up continuous sounds into multiple blocks)
    public long position() {
        return position.get();
    }

    // put copy of audio block into queue so we don't
    // have to worry about caller messing with it afterwards
    public void mix(long when, short[] block) {
        scheduledBlocks.add(new QueuedBlock(when, Arrays.copyOf(block, block.length)));
    }

    // better hope mixer 0, line 0 is output
    private void run() {
        Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo();
        try (Mixer mixer = AudioSystem.getMixer(mixerInfo[0])) {
            Line.Info[] lineInfo = mixer.getSourceLineInfo();
            try (SourceDataLine line = (SourceDataLine) mixer.getLine(lineInfo[0])) {
                line.open(new AudioFormat(SAMPLE_RATE, 16, CHANNELS, true, false), BUFFER_SIZE_FRAMES);
                line.start();
                while (running.get())
                    processSingleBuffer(line);
                line.stop();
            }
        } catch (Throwable t) {
            System.out.println(t.getMessage());
            t.printStackTrace();
        }
    }

    // mix down single buffer and offer to the audio device
    private void processSingleBuffer(SourceDataLine line) {

        Arrays.fill(mixBuffer, (short) 0);
        long bufferStartAt = position.get();

        // mixdown audio blocks
        for (QueuedBlock block : scheduledBlocks) {

            int blockFrames = block.data.length / CHANNELS;

            // block fully played - mark for deletion
            if (block.when + blockFrames <= bufferStartAt) {
                finished.add(block);
                continue;
            }

            // block starts after end of current buffer
            if (bufferStartAt + BUFFER_SIZE_FRAMES <= block.when)
                continue;

            // mix in part of the block which overlaps current buffer
            // note that block may have already started in the past
            // but extends into the current buffer, or that it starts
            // in the future but before the end of the current buffer
            int blockOffset = Math.max(0, (int) (bufferStartAt - block.when));
            int blockMaxFrames = blockFrames - blockOffset;
            int bufferOffset = Math.max(0, (int) (block.when - bufferStartAt));
            int bufferMaxFrames = BUFFER_SIZE_FRAMES - bufferOffset;
            for (int f = 0; f < blockMaxFrames && f < bufferMaxFrames; f++)
                for (int c = 0; c < CHANNELS; c++) {
                    int bufferIndex = (bufferOffset + f) * CHANNELS + c;
                    int blockIndex = (blockOffset + f) * CHANNELS + c;
                    mixBuffer[bufferIndex] += (short) (block.data[blockIndex] * MIXDOWN_VOLUME);
                }
        }

        scheduledBlocks.removeAll(finished);
        finished.clear();
        ByteBuffer.wrap(audioBuffer).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(mixBuffer);
        line.write(audioBuffer, 0, audioBuffer.length);
        position.addAndGet(BUFFER_SIZE_FRAMES);
    }
}

public static void main(String[] args) {

    System.out.print("Press return to exit...");
    AudioConsumer consumer = new AudioConsumer();
    consumer.start();
    for (int i = 0; i < NUM_PRODUCERS; i++)
        new AudioProducer(consumer).start();
    System.console().readLine();
    consumer.stop();
}
}

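A note on converting between byte[] and short[], which comes up in the comments below: calling .array() on the ShortBuffer view of a wrapped byte[] throws UnsupportedOperationException, because that view is not backed by a short[] array. A bulk get() into a pre-allocated array works instead. A minimal sketch (the helper class and method names are my own):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmConvert {

    // byte[] of 16-bit little-endian PCM -> short[] samples.
    // Note: .array() on the ShortBuffer view would throw
    // UnsupportedOperationException, so we bulk-copy with get() instead.
    public static short[] bytesToShorts(byte[] data) {
        short[] samples = new short[data.length / 2];
        ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);
        return samples;
    }

    // short[] samples -> byte[] suitable for SourceDataLine.write
    public static byte[] shortsToBytes(short[] samples) {
        byte[] data = new byte[samples.length * 2];
        ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(samples);
        return data;
    }
}
```

The byte order must match the AudioFormat used to open the line (false in the AudioFormat constructor above means little-endian).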

Comments:

Awesome, thank you very much :D. I'll test it and let you know whether it works. Great comments and explanations, thanks again :D.
So I finally got time to work this into my project (sorry it took so long, I've been busy). I'm having some trouble converting my byte[] to short[], could you help me?
Sure, what exactly isn't working? With a ByteBuffer you should be able to convert between byte[] and any primitive array.
Oh :D, I thought I'd have to add some complicated calculation to convert it. I'll test this and hope it works, thank you very much :D.
With "ByteBuffer.wrap(data).asShortBuffer().array()" I get this error: java.lang.UnsupportedOperationException at java.nio.ShortBuffer.array(Unknown Source). Can you help me?

Answer 3:

You can use the Tritonus library for software mixing (it's old, but still works well).

Add the dependency to your project:

<dependency>
    <groupId>com.googlecode.soundlibs</groupId>
    <artifactId>tritonus-all</artifactId>
    <version>0.3.7.2</version>
</dependency>

Use org.tritonus.share.sampled.FloatSampleBuffer. Both buffers must have the same AudioFormat before calling #mix.

// TODO instantiate these variables with real data
byte[] audio1, audio2;
AudioFormat af1, af2;
SourceDataLine sdl = AudioSystem.getSourceDataLine(af1);

FloatSampleBuffer fsb1 = new FloatSampleBuffer(audio1, 0, audio1.length, af1);
FloatSampleBuffer fsb2 = new FloatSampleBuffer(audio2, 0, audio2.length, af2);

fsb1.mix(fsb2);
byte[] result = fsb1.convertToByteArray(af1);

sdl.write(result, 0, result.length); // play it

Comments:
