Region Error when trying to access Google Cloud Bigtable with Spark from a Jupyter Notebook
Posted: 2017-09-28 13:30:46

Question:

I am trying to run parallel access to Google Cloud Bigtable from a Jupyter Notebook running a PySpark kernel. I am following the example at http://ec2-54-66-129-240.ap-southeast-2.compute.amazonaws.com/httrack/docs/cloud.google.com/dataproc/examples/cloud-bigtable-example.html, using my own project/zone/cluster/table names. Authentication is through service-account credentials broadcast in the Spark context.
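(For reference, a common way to supply a service-account key to Google client libraries is Application Default Credentials; a minimal sketch follows, with a placeholder path. My actual setup broadcasts the credentials, so treat this as illustrative only.)

```python
import os

# Placeholder path: point this at the service-account JSON key before the
# SparkContext (and its JVM) starts, so the driver can pick it up via
# Application Default Credentials.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"
```

The configuration and job setup: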
jconf = "hbase.client.connection.impl": "com.google.cloud.bigtable.hbase1_1.BigtableConnection",
"google.bigtable.project.id": myProject,
"google.bigtable.zone.name": myZone,
"google.bigtable.cluster.name": myCluster,
"hbase.mapreduce.inputtable": myTable
keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"
hbase_rdd = sc.newAPIHadoopRDD(
"org.apache.hadoop.hbase.mapreduce.TableInputFormat",
"org.apache.hadoop.hbase.io.ImmutableBytesWritable",
"org.apache.hadoop.hbase.client.Result",
conf=jconf)
hbase_rdd = hbase_rdd.flatMapValues(lambda v: v.split("\n")).mapValues(json.loads)
print("Row count: %s" % hbase_rdd.count())
I get the following error:
```
Py4JJavaErrorTraceback (most recent call last)
<ipython-input-30-55b05ded0d2b> in <module>()
21 #keyConverter=keyConv,
22 #valueConverter=valueConv,
---> 23 conf=jconf)
24
25 hbase_rdd = hbase_rdd.flatMapValues(lambda v: v.split("\n")).mapValues(json.loads)
/usr/lib/spark/python/pyspark/context.pyc in newAPIHadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter, valueConverter, conf, batchSize)
644 jrdd = self._jvm.PythonRDD.newAPIHadoopRDD(self._jsc, inputFormatClass, keyClass,
645 valueClass, keyConverter, valueConverter,
--> 646 jconf, batchSize)
647 return RDD(jrdd, self)
648
/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py in __call__(self, *args)
1131 answer = self.gateway_client.send_command(command)
1132 return_value = get_return_value(
-> 1133 answer, self.gateway_client, self.target_id, self.name)
1134
1135 for temp_arg in temp_args:
/usr/lib/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
317 raise Py4JJavaError(
318 "An error occurred while calling 012.\n".
--> 319 format(target_id, ".", name), value)
320 else:
321 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: java.io.IOException: Error sampling rowkeys.
at com.google.cloud.bigtable.hbase.BigtableRegionLocator.getRegions(BigtableRegionLocator.java:79)
at com.google.cloud.bigtable.hbase.BigtableRegionLocator.getAllRegionLocations(BigtableRegionLocator.java:100)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:121)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1303)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.take(RDD.scala:1298)
at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:203)
at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:582)
at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: UNKNOWN
at io.grpc.Status.asRuntimeException(Status.java:430)
at io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:369)
at com.google.bigtable.repackaged.com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:268)
at com.google.cloud.bigtable.grpc.BigtableDataGrpcClient.sampleRowKeys(BigtableDataGrpcClient.java:203)
at com.google.cloud.bigtable.hbase.BigtableRegionLocator.getRegions(BigtableRegionLocator.java:73)
... 33 more
Caused by: java.lang.IllegalStateException: Channel is closed
at com.google.cloud.bigtable.grpc.io.ReconnectingChannel$DelayingCall.start(ReconnectingChannel.java:88)
at com.google.cloud.bigtable.grpc.io.ChannelPool$1.checkedStart(ChannelPool.java:97)
at io.grpc.ClientInterceptors$CheckedForwardingClientCall.start(ClientInterceptors.java:164)
at io.grpc.stub.ClientCalls.startCall(ClientCalls.java:193)
at io.grpc.stub.ClientCalls.asyncUnaryRequestCall(ClientCalls.java:173)
at io.grpc.stub.ClientCalls.blockingServerStreamingCall(ClientCalls.java:122)
at com.google.cloud.bigtable.grpc.io.ClientCallService$1.blockingServerStreamingCall(ClientCallService.java:79)
... 35 more
```
From the terminal where the Jupyter notebook runs, I can access the Bigtable instance on GCloud without any problem. Also, the google.cloud.bigtable and google.cloud.happybase connectors work fine in the same Jupyter notebook (but they do not handle a priori parallelization of the calls to Bigtable).
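For reference, the direct (non-Spark) access that works looks roughly like this; a minimal sketch with placeholder identifiers:

```python
from google.cloud import bigtable

# Placeholder identifiers; substitute the real project, instance, and table.
client = bigtable.Client(project="my-project")
instance = client.instance("my-instance")
table = instance.table("my-table")

# A single-row read: this direct path works fine, but nothing here is
# parallelized by Spark.
row = table.read_row(b"some-row-key")
print(row)
```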
Any clue what I might be doing wrong here?
FYI, I am using Spark 2.0.2, Hadoop 2.7.3, Python 2.7.12, google-cloud-bigtable 0.26.0, and com.google.cloud.bigtable:bigtable-hbase-1.1:0.2.2 on a Google Dataproc cluster.
Thanks a lot,
George
Edit: After making the changes Igor Bernstein suggested, I get a new error:
```
Py4JJavaErrorTraceback (most recent call last)
<ipython-input-5-4f0d8b1fb126> in <module>()
23 #keyConverter=keyConv,
24 #valueConverter=valueConv,
---> 25 conf=jconf)
26
27 hbase_rdd = hbase_rdd.flatMapValues(lambda v: v.split("\n")).mapValues(json.loads)
/usr/lib/spark/python/pyspark/context.py in newAPIHadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter, valueConverter, conf, batchSize)
644 jrdd = self._jvm.PythonRDD.newAPIHadoopRDD(self._jsc, inputFormatClass, keyClass,
645 valueClass, keyConverter, valueConverter,
--> 646 jconf, batchSize)
647 return RDD(jrdd, self)
648
/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py in __call__(self, *args)
1131 answer = self.gateway_client.send_command(command)
1132 return_value = get_return_value(
-> 1133 answer, self.gateway_client, self.target_id, self.name)
1134
1135 for temp_arg in temp_args:
/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
317 raise Py4JJavaError(
318 "An error occurred while calling 012.\n".
--> 319 format(target_id, ".", name), value)
320 else:
321 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: java.io.IOException: Cannot create a record reader because of a previous error. Please look at the previous logs lines from the task's full log for more details.
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:252)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:121)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1303)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.take(RDD.scala:1298)
at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:203)
at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:582)
at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: The input format instance has not been properly initialized. Ensure you call initializeTable either in your constructor or initialize method
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getTable(TableInputFormatBase.java:585)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:247)
... 30 more
```
Answer 1:

What version of bigtable-hbase are you using? Can you try the latest version, bigtable-hbase-1.x-hadoop:1.0.0-pre3? Also, please update your configuration as follows (a combined sketch follows the list):

- Change "hbase.client.connection.impl" to "com.google.cloud.bigtable.hbase1_x.BigtableConnection"
- Remove "google.bigtable.zone.name" and "google.bigtable.cluster.name"
- Add "google.bigtable.instance.id"
Also, I am having a hard time finding the original source of http://ec2-54-66-129-240.ap-southeast-2.compute.amazonaws.com/httrack/docs/cloud.google.com/dataproc/examples/cloud-bigtable-example.html. Where did it come from?
Discussion:
Hi @igor-bernstein, thanks for your answer! I ran into a new error and have included the details in the question itself. As for the example code, I don't know its original source (did you scroll to the bottom of the web page?).