I want to transform a Spark DataFrame using the following code:
from pyspark.mllib.clustering import KMeans
spark_df = sqlContext.createDataFrame(pandas_df)
rdd = spark_df.map(lambda data: Vectors.dense([float(c) for c in data]))
model = KMeans.train(rdd, 2, maxIterations=10, runs=30, initializationMode="random")
The detailed error message is:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-11-a19a1763d3ac> in <module>()
      1 from pyspark.mllib.clustering import KMeans
      2 spark_df = sqlContext.createDataFrame(pandas_df)
----> 3 rdd = spark_df.map(lambda data: Vectors.dense([float(c) for c in data]))
      4 model = KMeans.train(rdd, 2, maxIterations=10, runs=30, initializationMode="random")

/home/edamame/spark/spark-2.0.0-bin-hadoop2.6/python/pyspark/sql/dataframe.pyc in __getattr__(self, name)
    842         if name not in self.columns:
    843             raise AttributeError(
--> 844                 "'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
    845         jc = self._jdf.apply(name)
    846         return Column(jc)

AttributeError: 'DataFrame' object has no attribute 'map'
Does anyone know what I'm doing wrong here? Thanks!
You can't map a DataFrame directly, but you can convert the DataFrame to an RDD and map that, via spark_df.rdd.map(). Prior to Spark 2.0, spark_df.map was an alias for spark_df.rdd.map(). As of Spark 2.0, you must explicitly call .rdd first.
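A minimal sketch of the corrected snippet, reusing the names from the question (sqlContext and pandas_df are assumed to already exist, as in the original code); note the Vectors import, which was also missing from the question's snippet, and that the runs argument is deprecated as of Spark 2.0, so it is dropped here:

```python
from pyspark.mllib.clustering import KMeans
from pyspark.mllib.linalg import Vectors  # missing import in the original snippet

# sqlContext and pandas_df come from the question's setup
spark_df = sqlContext.createDataFrame(pandas_df)

# DataFrame has no .map in Spark 2.0+, so go through the underlying RDD of Rows
rdd = spark_df.rdd.map(lambda row: Vectors.dense([float(c) for c in row]))

# Train k-means with 2 clusters; 'runs' is deprecated in Spark 2.0 and omitted
model = KMeans.train(rdd, 2, maxIterations=10, initializationMode="random")
```

Each element of spark_df.rdd is a Row, which is iterable, so the lambda converts every column value to a float before packing it into a dense vector, exactly as the original map intended.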