Getting "need path to queries.json" error when trying to run Google Speech Recognition java example
Posted: 2019-04-26 18:32:01
【Question】: I'm trying to run the Java streaming speech recognition example found here: https://cloud.google.com/speech-to-text/docs/streaming-recognize#speech-streaming-mic-recognize-java
I created a new Gradle project in Eclipse and added compile 'com.google.cloud:google-cloud-speech:1.1.0'
and compile 'com.google.cloud:google-cloud-bigquery:1.70.0'
to the dependencies, then copied the sample code from the link into the main class. As far as I can tell, nothing from the second dependency is used in the sample, but I need it anyway; otherwise I get an error like: Error: Could not find or load main class com.google.cloud.bigquery.benchmark.Benchmark
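For reference, the dependency block in build.gradle would look something like the sketch below. This assumes the legacy Groovy `compile` configuration quoted in the question (newer Gradle versions use `implementation`); the coordinates themselves are taken from the question.

```groovy
dependencies {
    // Coordinates as given in the question; only the speech artifact
    // is actually needed by the sample (see the accepted answer).
    compile 'com.google.cloud:google-cloud-speech:1.1.0'
    compile 'com.google.cloud:google-cloud-bigquery:1.70.0'
}
```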
When running with both dependencies added, I immediately get the error from the title (need path to queries.json
) and the application exits. What is the queries.json file, and how do I give the application a path to it so the sample project will run? The Google APIs are set up with the appropriate environment variables on my system, and the API calls are configured to allow requests from the IP of the machine I'm using.
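For context on "appropriate environment variables": the Google Cloud client libraries typically pick up credentials from the GOOGLE_APPLICATION_CREDENTIALS environment variable pointing at a service-account key file. A sketch (the path below is a placeholder, not a real file from the question):

```shell
# Hypothetical path; point this at your own downloaded service-account key file.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```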
Here is the entire class (the only source file in the project):
import com.google.api.gax.rpc.ClientStream;
import com.google.api.gax.rpc.ResponseObserver;
import com.google.api.gax.rpc.StreamController;
import com.google.cloud.speech.v1.RecognitionAudio;
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding;
import com.google.cloud.speech.v1.RecognizeResponse;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechRecognitionAlternative;
import com.google.cloud.speech.v1.SpeechRecognitionResult;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionResult;
import com.google.cloud.speech.v1.StreamingRecognizeRequest;
import com.google.cloud.speech.v1.StreamingRecognizeResponse;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.DataLine.Info;
import javax.sound.sampled.TargetDataLine;

public class GoogleSpeechRecognition {

  /** Performs microphone streaming speech recognition with a duration of 1 minute. */
  public static void streamingMicRecognize() throws Exception {

    ResponseObserver<StreamingRecognizeResponse> responseObserver = null;
    try (SpeechClient client = SpeechClient.create()) {

      responseObserver =
          new ResponseObserver<StreamingRecognizeResponse>() {
            ArrayList<StreamingRecognizeResponse> responses = new ArrayList<>();

            public void onStart(StreamController controller) {}

            public void onResponse(StreamingRecognizeResponse response) {
              responses.add(response);
            }

            public void onComplete() {
              for (StreamingRecognizeResponse response : responses) {
                StreamingRecognitionResult result = response.getResultsList().get(0);
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Transcript : %s\n", alternative.getTranscript());
              }
            }

            public void onError(Throwable t) {
              System.out.println(t);
            }
          };

      ClientStream<StreamingRecognizeRequest> clientStream =
          client.streamingRecognizeCallable().splitCall(responseObserver);

      RecognitionConfig recognitionConfig =
          RecognitionConfig.newBuilder()
              .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
              .setLanguageCode("en-US")
              .setSampleRateHertz(16000)
              .build();
      StreamingRecognitionConfig streamingRecognitionConfig =
          StreamingRecognitionConfig.newBuilder().setConfig(recognitionConfig).build();

      StreamingRecognizeRequest request =
          StreamingRecognizeRequest.newBuilder()
              .setStreamingConfig(streamingRecognitionConfig)
              .build(); // The first request in a streaming call has to be a config

      clientStream.send(request);
      // SampleRate:16000Hz, SampleSizeInBits: 16, Number of channels: 1, Signed: true,
      // bigEndian: false
      AudioFormat audioFormat = new AudioFormat(16000, 16, 1, true, false);
      DataLine.Info targetInfo =
          new Info(
              TargetDataLine.class,
              audioFormat); // Set the system information to read from the microphone audio stream

      if (!AudioSystem.isLineSupported(targetInfo)) {
        System.out.println("Microphone not supported");
        System.exit(0);
      }
      // Target data line captures the audio stream the microphone produces.
      TargetDataLine targetDataLine = (TargetDataLine) AudioSystem.getLine(targetInfo);
      targetDataLine.open(audioFormat);
      targetDataLine.start();
      System.out.println("Start speaking");
      long startTime = System.currentTimeMillis();
      // Audio Input Stream
      AudioInputStream audio = new AudioInputStream(targetDataLine);
      while (true) {
        long estimatedTime = System.currentTimeMillis() - startTime;
        byte[] data = new byte[6400];
        audio.read(data);
        if (estimatedTime > 60000) { // 60 seconds
          System.out.println("Stop speaking.");
          targetDataLine.stop();
          targetDataLine.close();
          break;
        }
        request =
            StreamingRecognizeRequest.newBuilder()
                .setAudioContent(ByteString.copyFrom(data))
                .build();
        clientStream.send(request);
      }
    } catch (Exception e) {
      System.out.println(e);
    }
    responseObserver.onComplete();
  }
}
【Comments】:
Looks like a duplicate of ***.com/q/53026314/1031958, which was asked 6 months ago and which Google engineer @Graham Polley never responded to. I'd suggest trying to reach them through one of their other support channels.
【Answer 1】: OK, so for me this was simple, and it was my own fault. The default run configuration was incorrectly set to one of the Google classes instead of the test class containing the sample code that you call. That is what caused both the bigquery error and the queries.json error. Just correct the main class in the run configuration to your test class with the sample code, and it works. Also, you don't need compile 'com.google.cloud:google-cloud-bigquery:1.70.0'
in your Gradle dependencies at all; the error complaining that it was needed was caused by the incorrectly set main class in the run configuration.
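One way to make this mistake harder to repeat (a sketch, not from the original answer): run the project through Gradle's application plugin rather than an IDE run configuration, so the main class is pinned in build.gradle and cannot silently point at a library class. The class name below is a placeholder for your own class containing the sample code.

```groovy
plugins {
    id 'java'
    id 'application'
}

application {
    // Hypothetical: replace with the fully qualified name of YOUR class
    // containing the sample code, not a class from the Google libraries.
    mainClass = 'com.example.GoogleSpeechRecognition'
}
```

With this in place, `gradle run` always launches the configured class, regardless of what the IDE's default run configuration points at.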
Example output
【Comments】: