Could you share the Hail log from one of these failing runs? I’d like to see what Spark thinks it’s copying and where it is putting the jars.
I don’t see how to attach anything other than an image here, so I dumped stdout and stderr from one of the task attempt logs on s3.
We were able to get past this by manually distributing the jar to all of the cluster nodes and adding the absolute path to the jar to the classpath variables in spark-defaults.
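For anyone hitting the same thing, here is a rough sketch of that workaround. The hostnames and install path are made up; the two classpath properties are standard Spark configuration keys. The commands are composed as strings rather than executed, so treat this as a dry run:

```shell
# Sketch of the manual workaround above; hostnames and jar path are hypothetical.
HAIL_JAR=/opt/hail/hail-all-spark.jar   # hypothetical absolute path, same on every node
NODES="node1 node2 node3"               # hypothetical worker hostnames

# Build (rather than run) one copy command per node.
CMDS=""
for node in $NODES; do
  CMDS="$CMDS
scp $HAIL_JAR $node:$HAIL_JAR"
done
echo "$CMDS"

# Properties to append to spark-defaults.conf on every node, pointing the
# driver and executor classpaths at the jar's absolute location.
CONF="spark.driver.extraClassPath $HAIL_JAR
spark.executor.extraClassPath $HAIL_JAR"
echo "$CONF"
```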
That’s pretty horrible. Maybe we should talk to the Spark people about this? Things like addJar not being exposed in Python, not handling s3 paths correctly, etc.
Is there a way to compile our own files with gradlew so we can get the latest Hail 0.2 version using Spark 2.3.0?
I used your .jar and .zip files (version ae9e34fb3cbf) and they do work on emr-5.13.0 with Spark 2.3.0; we just want to get Hail with the latest updates.
It’s totally possible to compile your own! I haven’t done it in a while (since making that .jar and .zip) so I could be wrong about specifics, but all you need to do is pass versions for Spark, Breeze, and py4j:
./gradlew -Dspark.version=2.3.0 -Dbreeze.version=0.13.2 -Dpy4j.version=0.10.6 shadowJar archiveZip
I just looked up the breeze / py4j versions for spark 2.3.0 so these should be correct.
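In case it helps, here’s roughly where those gradle tasks drop their outputs and how they typically get handed to pyspark. The `build/` paths and the checkout directory are my assumptions about the layout at the time, so double-check them after building; `--jars` and `--conf` are standard Spark flags. The launch command is composed as a string rather than run:

```shell
# Assumed output locations for the shadowJar and archiveZip tasks, and a
# pyspark invocation composed as a string (not executed here).
HAIL_HOME=./hail                                     # hypothetical checkout dir
JAR="$HAIL_HOME/build/libs/hail-all-spark.jar"       # assumed shadowJar output
ZIP="$HAIL_HOME/build/distributions/hail-python.zip" # assumed archiveZip output

# Ship the jar to the driver/executors with --jars and put the Python zip
# on PYTHONPATH so `import hail` resolves.
LAUNCH="PYTHONPATH=\$PYTHONPATH:$ZIP pyspark --jars $JAR --conf spark.driver.extraClassPath=$JAR"
echo "$LAUNCH"
```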
Is there a recommended Hail 0.2 commit version?
Also note you’ll need to compile on the same OS that the EMR VMs are using.
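Following up on the same-OS note, a minimal pre-flight check before compiling (uname is POSIX; compare its output against the same two commands run on one of the EMR nodes):

```shell
# Record the build host's OS and architecture; the jar should be built on
# the same OS the EMR VMs run, so compare this against `uname -s; uname -m`
# executed on a cluster node before kicking off the gradle build.
BUILD_OS=$(uname -s)
BUILD_ARCH=$(uname -m)
echo "build host: $BUILD_OS $BUILD_ARCH"
```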
Thanks Tim. I’ll give it a shot
It worked, np. Thanks Tim!