How to standardize an RDD in PySpark?
I created training and testing data as follows:
data = sc.textFile(fileName)
training, testing = data.randomSplit([0.6, 0.4], seed=11)
Now I want to standardize each feature. I found StandardScaler and tried to use it with the following code:
from pyspark.ml.feature import StandardScaler
scaler = StandardScaler(inputCol="features", outputCol="scaledFeatures", withStd=True, withMean=True)
# Compute summary statistics by fitting the StandardScaler
scalerModel = scaler.fit(training)
# Normalize each Train feature to have unit standard deviation.
scaledTrainData = scalerModel.transform(training)
# Normalize each Test feature to have unit standard deviation.
scaledTestData = scalerModel.transform(testing)
But I get the following error:
AttributeError: 'PipelinedRDD' object has no attribute '_jdf'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-15-32380b939084> in <module>()
6
7 # Compute summary statistics by fitting the StandardScaler
----> 8 scalerModel = scaler.fit(training)
9
10 # Normalize each Train feature to have unit standard deviation.
/databricks/spark/python/pyspark/ml/pipeline.py in fit(self, dataset, params)
67 return self.copy(params)._fit(dataset)
68 else:
---> 69 return self._fit(dataset)
70 else:
71 raise ValueError("Params must be either a param map or a list/tuple of param maps, "
/databricks/spark/python/pyspark/ml/wrapper.py in _fit(self, dataset)
131
132 def _fit(self, dataset):
--> 133 java_model = self._fit_java(dataset)
134 return self._create_model(java_model)
135
/databricks/spark/python/pyspark/ml/wrapper.py in _fit_java(self, dataset)
128 """
129 self._transfer_params_to_java()
--> 130 return self._java_obj.fit(dataset._jdf)
131
132 def _fit(self, dataset):
AttributeError: 'PipelinedRDD' object has no attribute '_jdf'
Is there another way to do this?
Answer
That's because you imported StandardScaler from pyspark.ml.feature, which expects a DataFrame, not an RDD. Use the RDD-based version instead: run "from pyspark.mllib.feature import StandardScaler, StandardScalerModel" before your code. A sketch of that workflow follows below.
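For completeness, here is a minimal sketch of the RDD-based workflow. The parsing step is an assumption (the question never shows the file format), so adapt it to your data; the fit/transform calls are the standard pyspark.mllib.feature.StandardScaler API:

from pyspark.mllib.feature import StandardScaler
from pyspark.mllib.linalg import Vectors

# Illustrative parser: assumes each line is comma-separated numeric values.
# withMean=True requires dense vectors, so Vectors.dense is used here.
def parse(line):
    return Vectors.dense([float(x) for x in line.split(",")])

trainFeatures = training.map(parse)
testFeatures = testing.map(parse)

# Fit on the training split only, then reuse the same model for both splits,
# so the test data is scaled with the training statistics.
scaler = StandardScaler(withMean=True, withStd=True).fit(trainFeatures)
scaledTrainData = scaler.transform(trainFeatures)
scaledTestData = scaler.transform(testFeatures)

Alternatively, if you want to keep the pyspark.ml version from your original code, convert the parsed RDD into a DataFrame with a "features" column first (e.g. via toDF()) and fit the DataFrame-based scaler on that.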