I'm trying to build a web API for my Apache Spark jobs using the sparkjava.com framework. My code is:
```java
@Override
public void init() {
    get("/hello", (req, res) -> {
        String sourcePath = "hdfs://spark:54310/input/*";

        SparkConf conf = new SparkConf().setAppName("LineCount");
        conf.setJars(new String[] { "/home/sam/resin-4.0.42/webapps/test.war" });
        File configFile = new File("config.properties");

        String sparkURI = "spark://hamrah:7077";
        conf.setMaster(sparkURI);
        conf.set("spark.driver.allowMultipleContexts", "true");
        JavaSparkContext sc = new JavaSparkContext(conf);

        @SuppressWarnings("resource")
        JavaRDD<String> log = sc.textFile(sourcePath);

        JavaRDD<String> lines = log.filter(x -> {
            return true;
        });

        return lines.count();
    });
}
```
If I remove the lambda expression, or put the same code in a plain jar instead of in the web service (that is, in a servlet), it runs without any error. But using a lambda expression inside the servlet leads to this exception:
```
15/01/28 10:36:33 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, hamrah): java.lang.ClassCastException:
cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.api.java.JavaRDD$$anonfun$filter$1.f$1 of type org.apache.spark.api.java.function.Function in instance of org.apache.spark.api.java.JavaRDD$$anonfun$filter$1
    at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
    at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1999)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
```
PS: I tried combining Jersey and javaspark with Jetty, Tomcat and Resin; all of them led me to the same result.
What you have here is a follow-up error which masks the original error.
When lambda instances are serialized, they use writeReplace to dissolve their JRE-specific implementation from the persistent form, which is a SerializedLambda instance. When the SerializedLambda instance has been restored, its readResolve method will be invoked to reconstitute the appropriate lambda instance. As the documentation says, it will do so by invoking a special method of the class that defined the original lambda. The important point is that the original class is needed, and that is exactly what is missing in your case.
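You can see what the persistent form actually records by asking a serializable lambda for its SerializedLambda directly. The following is only a minimal sketch (the class name InspectLambda is made up for illustration); reflectively invoking the generated writeReplace method is just a way to inspect the proxy object that would otherwise be written to the stream:

```java
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;

public class InspectLambda {
    public static void main(String... args) throws Exception {
        Runnable r = (Runnable & Serializable) () -> System.out.println("hi");

        // Serializable lambdas get a compiler-generated writeReplace()
        // that returns a SerializedLambda; invoke it via reflection.
        Method writeReplace = r.getClass().getDeclaredMethod("writeReplace");
        writeReplace.setAccessible(true);
        SerializedLambda sl = (SerializedLambda) writeReplace.invoke(r);

        // The persistent form only records *where* the lambda was defined;
        // deserialization must be able to load that class again.
        System.out.println(sl.getCapturingClass());   // e.g. InspectLambda
        System.out.println(sl.getImplMethodName());   // e.g. lambda$main$0
    }
}
```

The output makes the dependency visible: the stream contains only a reference to the capturing class and a synthetic method name, so the JVM that deserializes the lambda must have that class on its classpath.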
But there is, let's say, a special behavior of ObjectInputStream. When it encounters an exception, it doesn't bail out immediately. It records the exception and continues the process, marking all objects currently being read, and thus depending on the erroneous object, as erroneous as well. Only at the end of the process does it throw the original exception it encountered. What makes it so strange is that it will also continue trying to set the fields of these objects. But when you look at the method ObjectInputStream.readOrdinaryObject, line 1806:
```java
…
    if (obj != null &&
        handles.lookupException(passHandle) == null &&
        desc.hasReadResolveMethod())
    {
        Object rep = desc.invokeReadResolve(obj);
        if (unshared && rep.getClass().isArray()) {
            rep = cloneArray(rep);
        }
        if (rep != obj) {
            handles.setObject(passHandle, obj = rep);
        }
    }

    return obj;
}
```
you see that the readResolve method is not called when lookupException reports a non-null exception. But when that substitution did not happen, it's not a good idea to keep trying to set the field values of the referrer; yet that's exactly what happens here, hence the ClassCastException.
You can easily reproduce the problem:
```java
// Four classes, each in its own source file (the stack trace below assumes package "test");
// each file needs "import java.io.*;".
public class Holder implements Serializable {
    Runnable r;
}

public class Defining {
    public static Holder get() {
        final Holder holder = new Holder();
        holder.r = (Runnable & Serializable) () -> {};
        return holder;
    }
}

public class Writing {
    static final File f = new File(System.getProperty("java.io.tmpdir"), "x.ser");

    public static void main(String... arg) throws IOException {
        try (FileOutputStream os = new FileOutputStream(f);
             ObjectOutputStream oos = new ObjectOutputStream(os)) {
            oos.writeObject(Defining.get());
        }
        System.out.println("written to " + f);
    }
}

public class Reading {
    static final File f = new File(System.getProperty("java.io.tmpdir"), "x.ser");

    public static void main(String... arg) throws IOException, ClassNotFoundException {
        try (FileInputStream is = new FileInputStream(f);
             ObjectInputStream ois = new ObjectInputStream(is)) {
            Holder h = (Holder) ois.readObject();
            System.out.println(h.r);
            h.r.run();
        }
        System.out.println("read from " + f);
    }
}
```
Compile these four classes and run Writing. Then delete the class file Defining.class and run Reading. You will then get a
```
Exception in thread "main" java.lang.ClassCastException:
cannot assign instance of java.lang.invoke.SerializedLambda to field test.Holder.r of type java.lang.Runnable in instance of test.Holder
    at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
    at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
```
(Tested with 1.8.0_20.)
The bottom line is: you may forget about this serialization quirk once you understand what is happening. All you have to do to solve your problem is to make sure that the class which defined the lambda expression is also available in the runtime where the lambda is deserialized.
Example for a Spark job run directly from the IDE (spark-submit distributes the jar by default):
```java
SparkConf sconf = new SparkConf()
    .set("spark.eventLog.dir", "hdfs://nn:8020/user/spark/applicationHistory")
    .set("spark.eventLog.enabled", "true")
    .setJars(new String[]{"/path/to/jar/with/your/class.jar"})
    .setMaster("spark://spark.standalone.uri:7077");
```
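As a rough usage sketch of that configuration (the class name LineCountJob, the input path, the non-empty-line filter, and the master URI are placeholders, not taken from the question): the essential point is that the jar passed to setJars actually contains the class in which the lambda is defined, so the executors can load it when they deserialize the task.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class LineCountJob {
    public static void main(String[] args) {
        SparkConf sconf = new SparkConf()
            .setAppName("LineCount")
            // This jar must contain LineCountJob itself, so the executors can
            // resolve the lambda's defining class during task deserialization.
            .setJars(new String[]{"/path/to/jar/with/your/class.jar"})
            .setMaster("spark://spark.standalone.uri:7077");

        JavaSparkContext sc = new JavaSparkContext(sconf);
        JavaRDD<String> lines = sc.textFile("hdfs://nn:8020/input/*");

        // The lambda below is compiled into LineCountJob, which is inside the jar above,
        // so the SerializedLambda can be reconstituted on the worker side.
        long count = lines.filter(line -> !line.isEmpty()).count();
        System.out.println("non-empty lines: " + count);

        sc.stop();
    }
}
```

In the servlet scenario from the question, the same principle applies: whatever archive you hand to setJars (or to spark-submit) has to contain the class that defines the lambda, otherwise the executors hit exactly the masked ClassNotFound situation described above.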