Recording WAV to IBM Watson Speech-To-Text

Posted: 2018-01-01 11:26:15

Question:

I am trying to record audio and send it straight to IBM Watson Speech-To-Text for transcription. I have already tested Watson with a WAV file loaded from disk, and that worked. Separately, I have also tested recording from the microphone and saving it to disk, which worked as well.

But when I try to record the audio with NAudio WaveIn, the results from Watson are empty, as if there were no audio at all.

Can anyone shed some light on this, or does anyone have any ideas?
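
(For context, a disk-based test against the same WebSocket endpoint could look roughly like the sketch below; SendWavFileAsync is a hypothetical helper and is not code from the original post. It needs System.IO, System.Net.WebSockets, System.Threading and System.Threading.Tasks.)

private async Task SendWavFileAsync(ClientWebSocket ws, string path)
{
    // Read the complete WAV file (RIFF header included) and stream it in binary frames
    byte[] wav = File.ReadAllBytes(path);
    const int chunkSize = 4096;

    for (int offset = 0; offset < wav.Length; offset += chunkSize)
    {
        int count = Math.Min(chunkSize, wav.Length - offset);
        await ws.SendAsync(new ArraySegment<byte>(wav, offset, count),
                           WebSocketMessageType.Binary, true, CancellationToken.None);
    }
}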

private async void StartHere()
{
    // ws, format, openingMessage and closingMessage are class-level fields (not shown here)
    ws = new ClientWebSocket();
    ws.Options.Credentials = new NetworkCredential("*****", "*****");

    await ws.ConnectAsync(new Uri("wss://stream.watsonplatform.net/speech-to-text/api/v1/recognize?model=en-US_NarrowbandModel"), CancellationToken.None);

    // Send the opening message and start the result listener concurrently
    Task.WaitAll(ws.SendAsync(openingMessage, WebSocketMessageType.Text, true, CancellationToken.None), HandleResults(ws));

    Record();
}

public void Record()
{
    var waveIn = new WaveInEvent
    {
        BufferMilliseconds = 50,
        DeviceNumber       = 0,
        WaveFormat         = format
    };

    waveIn.DataAvailable    += WaveIn_DataAvailable;
    waveIn.RecordingStopped += WaveIn_RecordingStopped; // handler not shown here
    waveIn.StartRecording();
}

public async Task Stop()
{
    await ws.SendAsync(closingMessage, WebSocketMessageType.Text, true, CancellationToken.None);
}

public void Close()
{
    ws.CloseAsync(WebSocketCloseStatus.NormalClosure, "Close", CancellationToken.None).Wait();
}

private async void WaveIn_DataAvailable(object sender, WaveInEventArgs e)
{
    // Forward each captured buffer to Watson as a binary frame; only e.BytesRecorded bytes are valid audio
    await ws.SendAsync(new ArraySegment<byte>(e.Buffer, 0, e.BytesRecorded), WebSocketMessageType.Binary, true, CancellationToken.None);
}

private async Task HandleResults(ClientWebSocket ws)
{
    var buffer = new byte[1024];

    while (true)
    {
        var segment = new ArraySegment<byte>(buffer);
        var result = await ws.ReceiveAsync(segment, CancellationToken.None);

        if (result.MessageType == WebSocketMessageType.Close)
        {
            return;
        }

        int count = result.Count;
        while (!result.EndOfMessage)
        {
            if (count >= buffer.Length)
            {
                await ws.CloseAsync(WebSocketCloseStatus.InvalidPayloadData, "That's too long", CancellationToken.None);
                return;
            }

            segment = new ArraySegment<byte>(buffer, count, buffer.Length - count);
            result = await ws.ReceiveAsync(segment, CancellationToken.None);
            count += result.Count;
        }

        var message = Encoding.UTF8.GetString(buffer, 0, count);

        // you'll probably want to parse the JSON into a useful object here,
        // see ServiceState and IsDelimeter for a light-weight example of that.
        Console.WriteLine(message);

        if (IsDelimeter(message))
        {
            return;
        }
    }
}

private bool IsDelimeter(String json)
{
    MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(json));
    DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(ServiceState));
    ServiceState obj = (ServiceState) ser.ReadObject(stream);

    // the service sends {"state": "listening"} between results
    return obj.state == "listening";
}

[DataContract]
internal class ServiceState
{
    [DataMember]
    public string state = "";
}

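The ws, format, openingMessage and closingMessage fields referenced above are not included in the post. With the Watson WebSocket recognize interface, the opening and closing messages are JSON control frames, so the fields could be initialised roughly as in the following sketch (the content type and sample rate are assumptions):

// Assumed sketch of the class-level fields used above (not shown in the original post)
private ClientWebSocket ws;

// Assumption: 8 kHz, 16-bit, mono PCM to match the en-US_NarrowbandModel
private readonly WaveFormat format = new WaveFormat(8000, 16, 1);

// The WebSocket session is driven by JSON control messages:
// a "start" message describing the audio, and a "stop" message ending the request
private readonly ArraySegment<byte> openingMessage = new ArraySegment<byte>(
    Encoding.UTF8.GetBytes("{\"action\": \"start\", \"content-type\": \"audio/l16;rate=8000\"}"));

private readonly ArraySegment<byte> closingMessage = new ArraySegment<byte>(
    Encoding.UTF8.GetBytes("{\"action\": \"stop\"}"));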

Edit: I have also tried sending a WAV "header" first, before StartRecording, like this:

    waveIn.DataAvailable    += WaveIn_DataAvailable;
    waveIn.RecordingStopped += WaveIn_RecordingStopped;

    /* Send WAV "header" first */
    using (var stream = new MemoryStream())
    {
        using (var writer = new BinaryWriter(stream, Encoding.UTF8))
        {
            writer.Write(Encoding.UTF8.GetBytes("RIFF"));
            writer.Write(0); // placeholder for the RIFF chunk size
            writer.Write(Encoding.UTF8.GetBytes("WAVE"));
            writer.Write(Encoding.UTF8.GetBytes("fmt "));

            format.Serialize(writer);

            if (format.Encoding != WaveFormatEncoding.Pcm && format.BitsPerSample != 0)
            {
                writer.Write(Encoding.UTF8.GetBytes("fact"));
                writer.Write(4);
                writer.Write(0);
            }

            writer.Write(Encoding.UTF8.GetBytes("data"));
            writer.Write(0); // placeholder for the data chunk size
            writer.Flush();
        }

        byte[] header = stream.ToArray();

        await ws.SendAsync(new ArraySegment<byte>(header), WebSocketMessageType.Binary, true, CancellationToken.None);
    }
    /* End WAV header */

    waveIn.StartRecording();


Answer 1:

After about 20 hours of trial and error I found the solution. I have put it in a GitHub Gist, since it may be handy for others: see https://gist.github.com/kboek/20476c2a03b5e9188edebaace74f9a07

Comments:

Thanks for this solution. Does it help with recording audio from the microphone and sending it to IBM Watson Speech-To-Text immediately, without saving it locally first?

That was 3 years ago; unfortunately I don't remember the details of this project. But you should be able to use WaveInEvent to capture audio from your microphone. I'm sure there are examples out there that explain how to record from a microphone with NAudio.

If possible, could you help out here? ***.com/questions/63654946/…
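
A minimal WaveInEvent capture loop, as mentioned in the comment above, could look roughly like the sketch below (not taken from the thread or from the Gist); the buffers arriving in DataAvailable can be forwarded over a WebSocket instead of being written to a file:

using System;
using NAudio.Wave;

class MicCaptureSketch
{
    static void Main()
    {
        // Capture 16 kHz, 16-bit, mono PCM from the default input device
        var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(16000, 16, 1) };

        waveIn.DataAvailable += (sender, e) =>
        {
            // e.Buffer holds e.BytesRecorded bytes of raw PCM; forward them
            // to a WebSocket (or anywhere else) instead of writing a file
            Console.WriteLine($"Captured {e.BytesRecorded} bytes");
        };

        waveIn.StartRecording();
        Console.WriteLine("Recording... press Enter to stop.");
        Console.ReadLine();
        waveIn.StopRecording();
    }
}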
