Google Speech Recognition doesn't work because of colliding threads (Qt C++)

Posted: 2018-07-02 15:19:21

Question:

I'm using Google's Speech-To-Text API in my Qt C++ application.

Google's C++ documentation is helpful, but only up to a point.

In my code below, if I uncomment

std::this_thread::sleep_for(std::chrono::seconds(1));

then speech recognition works, but not correctly: it skips some words. Without that line, however, it doesn't work at all. I think this is because the while loop in MicrophoneThreadMain() collides with the while loop in start_speech_to_text(), but I'm not sure.

I want these two functions to run concurrently, without interruptions and without delays. I tried using QThread with signals and slots, but couldn't get it to work.
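What I'm ultimately after is a producer/consumer handoff between the capture loop and the sending loop. For illustration, here is a minimal sketch of that pattern; the class name, member names, and chunk size are illustrative only, not code from my project:

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <vector>

// Shared queue: the capture thread pushes raw audio, the sender thread
// pops fixed-size chunks. Sketch only.
class AudioQueue
{
public:
    // Called from the capture thread (e.g. a readyRead handler).
    void push(const char *data, std::size_t n)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_buffer.insert(m_buffer.end(), data, data + n);
        m_dataReady.notify_one();
    }

    // Called from the sender thread; blocks until a full chunk is available.
    std::vector<char> popChunk(std::size_t chunkSize)
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_dataReady.wait(lock, [&] { return m_buffer.size() >= chunkSize; });
        std::vector<char> chunk(m_buffer.begin(), m_buffer.begin() + chunkSize);
        m_buffer.erase(m_buffer.begin(), m_buffer.begin() + chunkSize);
        return chunk;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_dataReady;
    std::vector<char> m_buffer;
};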

speech_to_text.cpp

#include "speechtotext.h"

using google::cloud::speech::v1::StreamingRecognitionConfig;
using google::cloud::speech::v1::RecognitionConfig;
using google::cloud::speech::v1::Speech;
using google::cloud::speech::v1::StreamingRecognizeRequest;
using google::cloud::speech::v1::StreamingRecognizeResponse;

SpeechToText::SpeechToText(QObject *parent) : QObject(parent)
{
}

void SpeechToText::initialize()
{
    QAudioFormat qtFormat;

    // Get default audio input device
    QAudioDeviceInfo qtInfo = QAudioDeviceInfo::defaultInputDevice();

    // Set the audio format settings
    qtFormat.setCodec("audio/pcm");
    qtFormat.setByteOrder(QAudioFormat::Endian::LittleEndian);
    qtFormat.setChannelCount(1);
    qtFormat.setSampleRate(16000);
    qtFormat.setSampleSize(16);
    qtFormat.setSampleType(QAudioFormat::SignedInt);

    // Check whether the format is supported
    if (!qtInfo.isFormatSupported(qtFormat))
    {
        qWarning() << "Default format is not supported";
        exit(3);
    }

    // Instantiate QAudioInput with the settings
    audioInput = new QAudioInput(qtFormat);

    // Start receiving data from audio input
    ioDevice = audioInput->start();

    emit finished_initializing();
}


void SpeechToText::MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                                        StreamingRecognizeResponse> *streamer)
{
    StreamingRecognizeRequest request;
    std::size_t size_read;

    while (true)
    {
        audioDataBuffer.append(ioDevice->readAll());
        size_read = audioDataBuffer.size();
        // And write the chunk to the stream.
        request.set_audio_content(&audioDataBuffer.data()[0], size_read);
        std::cout << "Sending " << size_read / 1024 << "k bytes." << std::endl;
        streamer->Write(request);
        //std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

void SpeechToText::start_speech_to_text()
{
    StreamingRecognizeRequest request;

    auto *streaming_config = request.mutable_streaming_config();
    RecognitionConfig *recognition_config = new RecognitionConfig();

    recognition_config->set_language_code("en-US");
    recognition_config->set_sample_rate_hertz(16000);
    recognition_config->set_encoding(RecognitionConfig::LINEAR16);
    streaming_config->set_allocated_config(recognition_config);

    // Create a Speech Stub connected to the speech service.
    auto creds = grpc::GoogleDefaultCredentials();
    auto channel = grpc::CreateChannel("speech.googleapis.com", creds);
    std::unique_ptr<Speech::Stub> speech(Speech::NewStub(channel));

    // Begin a stream.
    grpc::ClientContext context;
    auto streamer = speech->StreamingRecognize(&context);

    // Write the first request, containing the config only.
    streaming_config->set_interim_results(true);
    streamer->Write(request);

    // The microphone thread writes the audio content.
    std::thread microphone_thread(&SpeechToText::MicrophoneThreadMain, this, streamer.get());

    // Read responses.
    StreamingRecognizeResponse response;
    while (streamer->Read(&response)) // Returns false when no more to read.
    {
        // Dump the transcript of all the results.
        for (int r = 0; r < response.results_size(); ++r)
        {
            auto result = response.results(r);
            std::cout << "Result stability: " << result.stability() << std::endl;
            for (int a = 0; a < result.alternatives_size(); ++a)
            {
                auto alternative = result.alternatives(a);
                std::cout << alternative.confidence() << "\t"
                          << alternative.transcript() << std::endl;
            }
        }
    }

    grpc::Status status = streamer->Finish();
    microphone_thread.join();
    if (!status.ok())
    {
        // Report the RPC failure.
        qDebug() << "error RPC";
        std::cerr << status.error_message() << std::endl;
    }
}

speech_to_text.h

#ifndef SPEECHTOTEXT_H
#define SPEECHTOTEXT_H

#include <QObject>
#include <QDebug>
#include <QThread>

#include <thread>
#include <chrono>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <functional>

#include <QtMultimedia>
#include <QtMultimedia/QAudioInput>
#include <QAudioDeviceInfo>
#include <QAudioFormat>
#include <QIODevice>
#include <QtConcurrent>
#include <QMutex>

#include <grpc++/grpc++.h>
#include "google/cloud/speech/v1/cloud_speech.grpc.pb.h"

using google::cloud::speech::v1::StreamingRecognitionConfig;
using google::cloud::speech::v1::RecognitionConfig;
using google::cloud::speech::v1::Speech;
using google::cloud::speech::v1::StreamingRecognizeRequest;
using google::cloud::speech::v1::StreamingRecognizeResponse;

class SpeechToText : public QObject
{
    Q_OBJECT
public:
    explicit SpeechToText(QObject *parent = nullptr);

signals:
    void finished_initializing();
    void finished_speech_to_text(QString);

public slots:
    void initialize();
    void start_speech_to_text();

private:
    void MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                                          StreamingRecognizeResponse> *);

    QAudioInput *audioInput;
    QIODevice *ioDevice;
    QByteArray audioDataBuffer;
};

#endif // SPEECHTOTEXT_H

Do you know how to solve this?

Comments:

Also, the Google code in the link you provided calls WritesDone after the writes, when there is nothing left to write. Are you sure you are always writing something, i.e. that size_read is never 0?

If you are losing some words, it may be because the audio buffer is too small: maybe using setBuffer with a "large enough buffer" would help, e.g.: forum.qt.io/topic/71129/voip-qtcpsoket-audio-streaming/5

@iMajuscule Thanks for the info. GDPR's answer seems less hacky, so I'll look into that one first.
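For reference, the setBuffer call mentioned above appears to refer to Qt 5's QAudioInput::setBufferSize. A minimal sketch of requesting a larger capture buffer, assuming the qtFormat from the question; the 1 MiB value is an arbitrary example, not from the thread:

QAudioInput *audioInput = new QAudioInput(qtFormat);
audioInput->setBufferSize(1024 * 1024);   // request ~1 MiB; must be called before start()
QIODevice *ioDevice = audioInput->start();
qDebug() << "Granted buffer size:" << audioInput->bufferSize();   // the device may grant a different size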

Answer 1:

I'm posting the solution to my problem here. Thanks to @allquixotic for all the useful information.

In mainwindow.cpp:

void MainWindow::setUpMicrophoneRecorder()
{
    microphone_thread = new QThread(this);
    microphone_recorder_engine.moveToThread(microphone_thread);

    connect(microphone_thread, SIGNAL(started()), &microphone_recorder_engine, SLOT(start_listen()));
    connect(&microphone_recorder_engine, &MicrophoneRecorder::microphone_data_raw,
            this, [this] (const QByteArray &data) {
        this->speech_to_text_engine.listen(data);
    });

    microphone_thread->start();
}

void MainWindow::setUpSpeechToTextEngine()
{
    speech_to_text_thread = new QThread(this);
    speech_to_text_engine.moveToThread(speech_to_text_thread);

    connect(speech_to_text_thread, SIGNAL(started()), &speech_to_text_engine, SLOT(initialize()));
    connect(&speech_to_text_engine, SIGNAL(finished_speech_to_text(QString)), this, SLOT(process_user_input(QString)));

    speech_to_text_thread->start();
}
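The corresponding member declarations are not shown in the post; they would presumably look something like this in mainwindow.h (a sketch with names inferred from the calls above, not original code):

class MainWindow : public QMainWindow
{
    Q_OBJECT
    // ... constructor, UI members, etc.

private slots:
    void process_user_input(QString input);   // receives final transcripts

private:
    QThread *microphone_thread;
    QThread *speech_to_text_thread;
    MicrophoneRecorder microphone_recorder_engine;
    SpeechToText speech_to_text_engine;
};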

microphonerecorder.h

#ifndef MICROPHONERECORDER_H
#define MICROPHONERECORDER_H

#include <QObject>
#include <QByteArray>
#include <QDebug>
#include <QtMultimedia>
#include <QtMultimedia/QAudioInput>
#include <QAudioDeviceInfo>
#include <QAudioFormat>
#include <QIODevice>

class MicrophoneRecorder : public QObject
{
    Q_OBJECT
public:
    explicit MicrophoneRecorder(QObject *parent = nullptr);

signals:
    void microphone_data_raw(const QByteArray &);

public slots:
    void start_listen();

private slots:
    void listen(const QByteArray &);

private:
    QAudioInput *audioInput;
    QIODevice *ioDevice;
    QByteArray audioDataBuffer;
};

#endif // MICROPHONERECORDER_H

microphonerecorder.cpp

#include "microphonerecorder.h"

MicrophoneRecorder::MicrophoneRecorder(QObject *parent) : QObject(parent)
{
}

void MicrophoneRecorder::listen(const QByteArray &audioData)
{
    emit microphone_data_raw(audioData);
}

void MicrophoneRecorder::start_listen()
{
    QAudioFormat qtFormat;

    // Get default audio input device
    QAudioDeviceInfo qtInfo = QAudioDeviceInfo::defaultInputDevice();

    // Set the audio format settings
    qtFormat.setCodec("audio/pcm");
    qtFormat.setByteOrder(QAudioFormat::Endian::LittleEndian);
    qtFormat.setChannelCount(1);
    qtFormat.setSampleRate(16000);
    qtFormat.setSampleSize(16);
    qtFormat.setSampleType(QAudioFormat::SignedInt);

    // Check whether the format is supported
    if (!qtInfo.isFormatSupported(qtFormat))
    {
        qWarning() << "Default format is not supported";
        exit(3);
    }

    // Instantiate QAudioInput with the settings
    audioInput = new QAudioInput(qtFormat);

    // Start receiving data from audio input
    ioDevice = audioInput->start();

    // Listen to the received data for wake words
    QObject::connect(ioDevice, &QIODevice::readyRead, [=] {
        listen(ioDevice->readAll());
    });
}

speechtotext.h

#ifndef SPEECHTOTEXT_H
#define SPEECHTOTEXT_H

#include <QObject>
#include <QDebug>
#include <QThread>
#include <QDateTime>

#include <thread>
#include <chrono>
#include <ctime>    // time(), used for the recognition window
#include <string>

#include <QtMultimedia>
#include <QtMultimedia/QAudioInput>
#include <QAudioDeviceInfo>
#include <QAudioFormat>
#include <QIODevice>
#include <QtConcurrent>
#include <QMutex>

#include <grpc++/grpc++.h>
#include "google/cloud/speech/v1/cloud_speech.grpc.pb.h"

using google::cloud::speech::v1::StreamingRecognitionConfig;
using google::cloud::speech::v1::RecognitionConfig;
using google::cloud::speech::v1::Speech;
using google::cloud::speech::v1::StreamingRecognizeRequest;
using google::cloud::speech::v1::StreamingRecognizeResponse;

// Recognition window in seconds. The original post uses this constant but
// never shows its definition; 20 is an assumed placeholder value.
#define TIME_RECOGNITION 20

class SpeechToText : public QObject
{
    Q_OBJECT
public:
    explicit SpeechToText(QObject *parent = nullptr);

signals:
    void finished_initializing();
    void in_speech_to_text();
    void out_of_speech_to_text();
    void finished_speech_to_text(QString);

public slots:
    void initialize();
    void listen(const QByteArray &);
    void start_speech_to_text();

private:
    void MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                                          StreamingRecognizeResponse> *);
    void StreamerThread(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                                          StreamingRecognizeResponse> *);

    QByteArray audioDataBuffer;
    int m_start_time;
};

#endif // SPEECHTOTEXT_H

speechtotext.cpp

#include "speechtotext.h"

using google::cloud::speech::v1::StreamingRecognitionConfig;
using google::cloud::speech::v1::RecognitionConfig;
using google::cloud::speech::v1::Speech;
using google::cloud::speech::v1::StreamingRecognizeRequest;
using google::cloud::speech::v1::StreamingRecognizeResponse;

SpeechToText::SpeechToText(QObject *parent) : QObject(parent)
{
}

void SpeechToText::initialize()
{
    emit finished_initializing();
}

void SpeechToText::MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                                        StreamingRecognizeResponse> *streamer)
{
    StreamingRecognizeRequest request;
    std::size_t size_read;
    while (time(0) - m_start_time <= TIME_RECOGNITION)
    {
        int chunk_size = 64 * 1024;
        if (audioDataBuffer.size() >= chunk_size)
        {
            QByteArray bytes_read = QByteArray(audioDataBuffer);
            size_read = std::size_t(bytes_read.size());

            // And write the chunk to the stream.
            request.set_audio_content(&bytes_read.data()[0], size_read);

            bool ok = streamer->Write(request);
            /*if (ok)
            {
                std::cout << "Sending " << size_read / 1024 << "k bytes." << std::endl;
            }*/

            audioDataBuffer.clear();
            audioDataBuffer.resize(0);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }

    // The original post logged an undefined end_date here; the current time is used instead.
    qDebug() << "Out of speech recognition at" << QDateTime::currentDateTime().toString();

    emit out_of_speech_to_text();

    streamer->WritesDone();
}

void SpeechToText::StreamerThread(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                                      StreamingRecognizeResponse> *streamer)
{
    // Read responses.
    StreamingRecognizeResponse response;

    while (time(0) - m_start_time <= TIME_RECOGNITION)
    {
        if (streamer->Read(&response)) // Returns false when no more to read.
        {
            // Dump the transcript of all the results.
            if (response.results_size() > 0)
            {
                auto result = response.results(0);
                if (result.alternatives_size() > 0)
                {
                    auto alternative = result.alternatives(0);
                    auto transcript = QString::fromStdString(alternative.transcript());
                    if (result.is_final())
                    {
                        qDebug() << "Speech recognition: " << transcript;

                        emit finished_speech_to_text(transcript);
                    }
                }
            }
        }
    }
}


void SpeechToText::listen(const QByteArray &audioData)
{
    audioDataBuffer.append(audioData);
}

void SpeechToText::start_speech_to_text()
{
    // The original post logged an undefined start_date here; the current time is used instead.
    qDebug() << "in start_speech_to_text at" << QDateTime::currentDateTime().toString();

    emit in_speech_to_text();

    m_start_time = time(0);
    audioDataBuffer.clear();
    audioDataBuffer.resize(0);

    StreamingRecognizeRequest request;

    auto *streaming_config = request.mutable_streaming_config();
    RecognitionConfig *recognition_config = new RecognitionConfig();

    recognition_config->set_language_code("en-US");
    recognition_config->set_sample_rate_hertz(16000);
    recognition_config->set_encoding(RecognitionConfig::LINEAR16);
    streaming_config->set_allocated_config(recognition_config);

    // Create a Speech Stub connected to the speech service.
    auto creds = grpc::GoogleDefaultCredentials();
    auto channel = grpc::CreateChannel("speech.googleapis.com", creds);
    std::unique_ptr<Speech::Stub> speech(Speech::NewStub(channel));

    // Begin a stream.
    grpc::ClientContext context;
    auto streamer = speech->StreamingRecognize(&context);

    // Write the first request, containing the config only.
    streaming_config->set_interim_results(true);
    streamer->Write(request);

    // The microphone thread writes the audio content.
    std::thread microphone_thread(&SpeechToText::MicrophoneThreadMain, this, streamer.get());
    std::thread streamer_thread(&SpeechToText::StreamerThread, this, streamer.get());

    microphone_thread.join();
    streamer_thread.join();
}

Comments:

Hello, I'm using your solution, but could you help me with how to build the Google API?

Answer 2:

- You should really follow Google's example and only send 64k at a time.
- WritesDone() should be called on the streamer when you intend to be done sending requests to Google's servers.
- You never seem to clear the data out of your QByteArray, so every successive append piles up over time. Because you pass a pointer to the first element of the underlying array, each run of the loop sends the entire audio captured up to that point to the streamer. I suggest a nested loop that repeatedly calls QIODevice::read(char *data, qint64 maxSize) until your buffer holds exactly 64 KB. You need to handle the return value of -1, which indicates end of stream, and adjust maxSize down depending on how much data is still needed to fill the array to 64k. Sending too little data per request to the Google API (your current loop seems to start out with only a few bytes, for example) may get you rate-limited, or create upstream congestion on the Internet connection due to the high ratio of protocol overhead to data. Also, a plain fixed-size (64k) C-style array is probably easier to deal with here than a QByteArray, since you never need to resize it, and AFAIK QByteArray::clear() may cause a memory allocation (bad for performance). To avoid re-sending old data after a short write (e.g. when the microphone stream closes before the 64k buffer fills), you should also call memset(array, 0, sizeof array); after each ClientReaderWriterInterface::WritesDone() call. A sketch of this read loop follows the list.
- If the network can't keep up with the incoming microphone data, you can end up with an overflow condition on the QAudioInput, which runs out of local buffer to store the audio. More buffering makes this less likely, but it also hurts responsiveness. You may just want to buffer all the data coming from the QAudioInput into an unbounded QByteArray and read 64k at a time out of that (you can wrap it in a QBuffer so that all of your QIODevice-handling code in MicrophoneThreadMain() stays compatible). I think that, in general, for a project like yours, users would rather put up with slightly worse responsiveness than have to repeat themselves whenever a network-related overflow occurs. But there is probably a threshold, maybe around 5 seconds, past which buffered data goes "stale", since the user will likely try speaking into the microphone again, producing the odd effect of several STT events firing in rapid succession once the upstream bottleneck clears.
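A minimal sketch of the nested read loop described above, assuming the ioDevice member, request, and streamer types from the question; illustrative only, not tested drop-in code:

#include <cstring>   // std::memset

void SpeechToText::MicrophoneThreadMain(grpc::ClientReaderWriterInterface<StreamingRecognizeRequest,
                                        StreamingRecognizeResponse> *streamer)
{
    constexpr qint64 kChunkSize = 64 * 1024;
    char chunk[64 * 1024];                    // plain fixed-size array, as suggested
    StreamingRecognizeRequest request;

    while (true)
    {
        qint64 filled = 0;
        while (filled < kChunkSize)
        {
            // Ask only for the bytes still missing from the 64k chunk.
            qint64 n = ioDevice->read(chunk + filled, kChunkSize - filled);
            if (n < 0)                        // -1 signals end of stream
            {
                streamer->WritesDone();       // nothing more to send
                return;
            }
            if (n == 0)                       // no data yet; avoid busy-spinning
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            filled += n;
        }
        request.set_audio_content(chunk, std::size_t(kChunkSize));
        streamer->Write(request);
        std::memset(chunk, 0, sizeof chunk);  // don't re-send stale bytes later
    }
}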

Comments:

Thank you so much for the detailed reply; I'll try to implement your suggestions. Just sending 64k at a time already makes the speech recognition work well. I only need to figure out all the other important things you mentioned. The problem is that I'm not good with C-style variables, but I'll do my best. I'll post my solution and code here so others can benefit.
