Azure Speech-To-Text multiple voice recognition

Posted: 2019-06-06 15:29:45

[Question]:

I am trying to use Azure Speech-to-Text to transcribe a conversation audio file into text. I made one attempt with the SDK and another with the API (following this sample: https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/python/python-client/main.py), but I also want to split the resulting text by the different voices. Is that possible?

I know this is available in the Conversation service, which is in beta, but since my audio is in Spanish I cannot use it. Is there any configuration that splits the results by speaker?

This is the call using the SDK:

import time

import azure.cognitiveservices.speech as speechsdk

all_results = []

def speech_recognize_continuous_from_file(file_to_transcribe):
    """performs continuous speech recognition with input from an audio file"""
    # <SpeechContinuousRecognitionWithFile>
    speech_config = speechsdk.SpeechConfig(subscription=speech_key,
                                           region=service_region,
                                           speech_recognition_language='es-ES')
    audio_config = speechsdk.audio.AudioConfig(filename=file_to_transcribe)

    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

    done = False

    def stop_cb(evt):
        """callback that stops continuous recognition upon receiving an event `evt`"""
        print('CLOSING on {}'.format(evt))
        speech_recognizer.stop_continuous_recognition()
        nonlocal done
        done = True

    # Connect callbacks to the events fired by the speech recognizer
    speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
    speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
    speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
    speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
    # stop continuous recognition on either session stopped or canceled events
    speech_recognizer.session_stopped.connect(stop_cb)
    speech_recognizer.canceled.connect(stop_cb)

    def handle_final_result(evt):
        """collect every final recognition result into the module-level list"""
        all_results.append(evt.result.text)

    speech_recognizer.recognized.connect(handle_final_result)
    # Start continuous speech recognition
    speech_recognizer.start_continuous_recognition()

    # Wait until the stop callback fires (session stopped or canceled)
    while not done:
        time.sleep(.5)
    # </SpeechContinuousRecognitionWithFile>
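
A minimal way to drive the function above, for reference: speech_key, service_region and the file name below are placeholders to replace with your own values.

# Placeholder values -- substitute your own subscription key, region and audio file.
speech_key = "<your-speech-subscription-key>"
service_region = "westeurope"

speech_recognize_continuous_from_file("conversation.wav")
print('\n'.join(all_results))  # the full transcript, one recognized segment per line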

And this is the call using the batch transcription API:

from __future__ import print_function
from typing import List

import logging
import sys
import requests
import time
import swagger_client as cris_client


logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format="%(message)s")

SUBSCRIPTION_KEY = subscription_key

HOST_NAME = "westeurope.cris.ai"
PORT = 443

NAME = "Simple transcription"
DESCRIPTION = "Simple transcription description"

LOCALE = "es-ES"
RECORDINGS_BLOB_URI = blob_url  # SAS URL of the recording in blob storage
# ADAPTED_ACOUSTIC_ID = None  # guid of a custom acoustic model
# ADAPTED_LANGUAGE_ID = None  # guid of a custom language model


def transcribe():
    logging.info("Starting transcription client...")

    # configure API key authorization: subscription_key
    configuration = cris_client.Configuration()
    configuration.api_key['Ocp-Apim-Subscription-Key'] = SUBSCRIPTION_KEY

    # create the client object and authenticate
    client = cris_client.ApiClient(configuration)

    # create an instance of the transcription api class
    transcription_api = cris_client.CustomSpeechTranscriptionsApi(api_client=client)

    # get all transcriptions for the subscription
    transcriptions: List[cris_client.Transcription] = transcription_api.get_transcriptions()

    logging.info("Deleting all existing completed transcriptions.")

    # delete all pre-existing completed transcriptions
    # if transcriptions are still running or not started, they will not be deleted
    for transcription in transcriptions:
        transcription_api.delete_transcription(transcription.id)

    logging.info("Creating transcriptions.")

    # transcription definition using custom models
#     transcription_definition = cris_client.TranscriptionDefinition(
#         name=NAME, description=DESCRIPTION, locale=LOCALE, recordings_url=RECORDINGS_BLOB_URI,
#         models=[cris_client.ModelIdentity(ADAPTED_ACOUSTIC_ID), cris_client.ModelIdentity(ADAPTED_LANGUAGE_ID)]
#     )

    # comment out the previous statement and uncomment the following to use base models for transcription
    transcription_definition = cris_client.TranscriptionDefinition(
        name=NAME, description=DESCRIPTION, locale=LOCALE, recordings_url=RECORDINGS_BLOB_URI
    )

    data, status, headers = transcription_api.create_transcription_with_http_info(transcription_definition)

    # extract transcription location from the headers
    transcription_location: str = headers["location"]

    # get the transcription Id from the location URI
    created_transcriptions = list()
    created_transcriptions.append(transcription_location.split('/')[-1])

    logging.info("Checking status.")

    completed, running, not_started = 0, 0, 0

    while completed < 1:
        # get all transcriptions for the user
        transcriptions: List[cris_client.Transcription] = transcription_api.get_transcriptions()

        # for each transcription in the list we check the status
        for transcription in transcriptions:
            if transcription.status == "Failed" or transcription.status == "Succeeded":
                # we check to see if it was one of the transcriptions we created from this client
                if transcription.id not in created_transcriptions:
                    continue

                completed += 1

                if transcription.status == "Succeeded":
                    results_uri = transcription.results_urls["channel_0"]
                    results = requests.get(results_uri)
                    logging.info("Transcription succeeded. Results: ")
                    logging.info(results.content.decode("utf-8"))
            elif transcription.status == "Running":
                running += 1
            elif transcription.status == "NotStarted":
                not_started += 1

        logging.info(f"Transcriptions status: completed completed, running running, not_started not started yet")
        # wait for 5 seconds
        time.sleep(5)

    input("Press any key...")


def main():
    transcribe()


if __name__ == "__main__":
    main()



[Answer 1]:

"I also want to split the resulting text by the different voices."

The transcript you receive has no notion of speaker: you are just calling an endpoint that performs transcription, and there is no speaker-recognition feature built into it.

Two things:

- If your audio has a separate channel for each speaker, you will get one result per channel (see the channel entries in the transcript's results_urls); a sketch of reading them follows this list.
- If not, you could use the Speaker Recognition API (documentation here) for that identification, but:
  - it requires some training/enrollment first, and
  - its response carries no time offsets, so mapping it back to your transcript results would be complicated.
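
For the first option, here is a minimal sketch of fetching the per-channel transcripts once a batch transcription has succeeded. It assumes a recording with one speaker per channel, and that results_urls behaves like a dict of channel name to URL, as the channel_0 lookup in the question's script suggests.

# Sketch only: one transcript per audio channel, i.e. one per speaker when the
# recording keeps each speaker on a separate channel.
# `transcription` is a succeeded cris_client.Transcription, as in the question.
import requests

for channel_name, results_uri in transcription.results_urls.items():
    results = requests.get(results_uri)
    print("--- {} ---".format(channel_name))
    print(results.content.decode("utf-8"))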

As you mentioned, the Speech SDK's ConversationTranscriber API (documentation here) is currently limited to the en-US and zh-CN languages.

[Comments]:

Thanks Nicolas. Is the Speaker Recognition API available for es-ES? In any case this takes extra effort; it is a pity that Azure does not integrate multi-speaker support by default the way AWS or Watson do.

[Answer 2]:

Contrary to the previous answer, I did get a result that identifies the speakers, without any further training or other difficulties. I followed this GitHub issue:

https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/286

which led me to make the following change:

transcription_definition = cris_client.TranscriptionDefinition(
    name=NAME, description=DESCRIPTION, locale=LOCALE, recordings_url=RECORDINGS_BLOB_URI,
    properties={"AddDiarization": "True"}
)

This gave the expected result.
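
For reference, a rough sketch of how this change fits into the batch-transcription script from the question, including reading speaker labels back out of the result JSON. The result field names used below (AudioFileResults, SegmentResults, SpeakerId, NBest, Display) are assumptions about the shape of the returned JSON, not verified against the API; inspect the JSON you actually receive and adjust accordingly.

import json
import requests

# Enable diarization when creating the transcription (as in the answer above).
transcription_definition = cris_client.TranscriptionDefinition(
    name=NAME, description=DESCRIPTION, locale=LOCALE, recordings_url=RECORDINGS_BLOB_URI,
    properties={"AddDiarization": "True"}
)

# ... later, once transcription.status == "Succeeded", as in the polling loop
# of the question's script:
results = requests.get(transcription.results_urls["channel_0"])
payload = json.loads(results.content.decode("utf-8"))

# Field names below are assumed, not guaranteed -- check your actual JSON.
for audio_file in payload.get("AudioFileResults", []):
    for segment in audio_file.get("SegmentResults", []):
        speaker = segment.get("SpeakerId", "unknown")
        best = segment.get("NBest") or [{}]
        print("Speaker {}: {}".format(speaker, best[0].get("Display", "")))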

[Comments]:

Is it possible to add the speaker-separation code to continuous recognition of speech from a file? If so, how do I add cris_client to the code, and how do I define the parameters name, description, locale and recordings_url?

I am facing the same problem, but I have not been able to apply your solution. Any chance you could share an audio file that works for you, so I can test with it?
