I have built Spark-csv and am able to use it from the pyspark shell with the following command:
bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3
I am getting the following error:
>>> df_cat.save("k.csv","com.databricks.spark.csv")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/pyspark/sql/dataframe.py", line 209, in save
    self._jdf.save(source, jmode, joptions)
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError
Where should I place the jar file in my Spark prebuilt setup so that spark-csv can also be accessed directly from the Python editor?
When using spark-csv, I also had to download the commons-csv jar (not sure whether it is still required). Both jars are in the Spark distribution folder.
I downloaded the jars as below:
wget http://search.maven.org/remotecontent?filepath=org/apache/commons/commons-csv/1.1/commons-csv-1.1.jar -O commons-csv-1.1.jar
wget http://search.maven.org/remotecontent?filepath=com/databricks/spark-csv_2.10/1.0.0/spark-csv_2.10-1.0.0.jar -O spark-csv_2.10-1.0.0.jar
Then started the Python Spark shell with the arguments:
./bin/pyspark --jars "spark-csv_2.10-1.0.0.jar,commons-csv-1.1.jar"
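As an alternative, if your Spark version's pyspark launcher honors the --packages flag (it works for spark-shell, as shown in the question), Spark can resolve spark-csv and its commons-csv dependency from Maven Central, with no manual downloads. A sketch under that assumption:

./bin/pyspark --packages com.databricks:spark-csv_2.10:1.0.3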
Then read a Spark DataFrame from the csv file:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="com.databricks.spark.csv", path="/path/to/your/file.csv")
df.show()
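With the jars on the classpath, the save call from the question should also work. A minimal sketch, assuming the Spark 1.3 DataFrame.save(path, source, mode, **options) signature; the output path is a placeholder:

# Write the DataFrame back out through the spark-csv data source;
# mode="overwrite" replaces any existing output directory.
df.save("/path/to/output.csv", "com.databricks.spark.csv", mode="overwrite")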