Hi Hail team!
I ran into an error while running hl.summarize_variants(prepared_vcf_ht, show=False)
on the gnomAD v3.1.2 release HT as part of our validity checks.
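For context, the failing call is essentially the following sketch (the table path and the hl.read_table step are placeholders; in the actual prepare_vcf_data_release.py, prepared_vcf_ht is built earlier in the script rather than read from this location):

import hail as hl

hl.init(default_reference="GRCh38")

# Placeholder path -- in the real script, prepared_vcf_ht is constructed earlier
# in prepare_vcf_data_release.py, not read directly from a release HT like this.
prepared_vcf_ht = hl.read_table("gs://my-bucket/gnomad_v3.1.2_prepared_release.ht")

# This whole-table aggregation is what triggers the failure below.
var_summary = hl.summarize_variants(prepared_vcf_ht, show=False)
print(var_summary)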
Traceback (most recent call last):
File "/tmp/44ce72fd6f334daa949881b38b137d58/prepare_vcf_data_release.py", line 994, in <module>
main(args)
File "/tmp/44ce72fd6f334daa949881b38b137d58/prepare_vcf_data_release.py", line 888, in main
var_summary = hl.summarize_variants(prepared_vcf_ht, show=False)
File "<decorator-gen-1761>", line 2, in summarize_variants
File "/opt/conda/default/lib/python3.8/site-packages/hail/typecheck/check.py", line 577, in wrapper
return __original_func(*args_, **kwargs_)
File "/opt/conda/default/lib/python3.8/site-packages/hail/methods/qc.py", line 1143, in summarize_variants
(allele_types, nti, ntv), contigs, allele_counts, n_variants = ht.aggregate(
File "<decorator-gen-1117>", line 2, in aggregate
File "/opt/conda/default/lib/python3.8/site-packages/hail/typecheck/check.py", line 577, in wrapper
return __original_func(*args_, **kwargs_)
File "/opt/conda/default/lib/python3.8/site-packages/hail/table.py", line 1178, in aggregate
return Env.backend().execute(agg_ir)
File "/opt/conda/default/lib/python3.8/site-packages/hail/backend/py4j_backend.py", line 98, in execute
raise e
File "/opt/conda/default/lib/python3.8/site-packages/hail/backend/py4j_backend.py", line 74, in execute
result = json.loads(self._jhc.backend().executeJSON(jir))
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/opt/conda/default/lib/python3.8/site-packages/hail/backend/py4j_backend.py", line 30, in deco
raise FatalError('%s\n\nJava stack trace:\n%s\n'
hail.utils.java.FatalError: SparkException: Job aborted due to stage failure: Task 5 in stage 0.0 failed 20 times, most recent failure: Lost task 5.19 in stage 0.0 (TID 166) (chr22-w-1.c.broad-mpg-gnomad.internal executor 31): ExecutorLostFailure (executor 31 exited caused by one of the running tasks) Reason: Container from a bad node: container_1633643382464_0006_01_000032 on host: chr22-w-1.c.broad-mpg-gnomad.internal. Exit status: 134. Diagnostics: [2021-10-07 23:24:09.819]Exception from container-launch.
Container id: container_1633643382464_0006_01_000032
Exit code: 134
[2021-10-07 23:24:09.821]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 1: 22773 Aborted /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java -server -Xmx12022m '-Xss4M' -Djava.io.tmpdir=/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/tmp '-Dspark.ui.port=0' '-Dspark.rpc.message.maxSize=512' '-Dspark.driver.port=44539' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@chr22-m.c.broad-mpg-gnomad.internal:44539 --executor-id 31 --hostname chr22-w-1.c.broad-mpg-gnomad.internal --cores 4 --app-id application_1633643382464_0006 --resourceProfileId 0 --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/__app__.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/hail-all-spark.jar > /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stdout 2> /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stderr
Last 4096 bytes of stderr :
[2021-10-07 23:24:09.821]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 1: 22773 Aborted /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java -server -Xmx12022m '-Xss4M' -Djava.io.tmpdir=/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/tmp '-Dspark.ui.port=0' '-Dspark.rpc.message.maxSize=512' '-Dspark.driver.port=44539' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@chr22-m.c.broad-mpg-gnomad.internal:44539 --executor-id 31 --hostname chr22-w-1.c.broad-mpg-gnomad.internal --cores 4 --app-id application_1633643382464_0006 --resourceProfileId 0 --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/__app__.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/hail-all-spark.jar > /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stdout 2> /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stderr
Last 4096 bytes of stderr :
.
Driver stacktrace:
Java stack trace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 0.0 failed 20 times, most recent failure: Lost task 5.19 in stage 0.0 (TID 166) (chr22-w-1.c.broad-mpg-gnomad.internal executor 31): ExecutorLostFailure (executor 31 exited caused by one of the running tasks) Reason: Container from a bad node: container_1633643382464_0006_01_000032 on host: chr22-w-1.c.broad-mpg-gnomad.internal. Exit status: 134. Diagnostics: [2021-10-07 23:24:09.819]Exception from container-launch.
Container id: container_1633643382464_0006_01_000032
Exit code: 134
[2021-10-07 23:24:09.821]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 1: 22773 Aborted /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java -server -Xmx12022m '-Xss4M' -Djava.io.tmpdir=/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/tmp '-Dspark.ui.port=0' '-Dspark.rpc.message.maxSize=512' '-Dspark.driver.port=44539' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@chr22-m.c.broad-mpg-gnomad.internal:44539 --executor-id 31 --hostname chr22-w-1.c.broad-mpg-gnomad.internal --cores 4 --app-id application_1633643382464_0006 --resourceProfileId 0 --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/__app__.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/hail-all-spark.jar > /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stdout 2> /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stderr
Last 4096 bytes of stderr :
[2021-10-07 23:24:09.821]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 1: 22773 Aborted /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java -server -Xmx12022m '-Xss4M' -Djava.io.tmpdir=/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/tmp '-Dspark.ui.port=0' '-Dspark.rpc.message.maxSize=512' '-Dspark.driver.port=44539' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@chr22-m.c.broad-mpg-gnomad.internal:44539 --executor-id 31 --hostname chr22-w-1.c.broad-mpg-gnomad.internal --cores 4 --app-id application_1633643382464_0006 --resourceProfileId 0 --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/__app__.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/hail-all-spark.jar > /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stdout 2> /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stderr
Last 4096 bytes of stderr :
.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2254)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2203)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2202)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2202)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1078)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1078)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1078)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2441)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2383)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2372)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2202)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2223)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2242)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2267)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
at is.hail.backend.spark.SparkBackend.parallelizeAndComputeWithIndex(SparkBackend.scala:286)
at is.hail.backend.BackendUtils.collectDArray(BackendUtils.scala:28)
at __C165Compiled.__m287split_StreamFor(Emit.scala)
at __C165Compiled.__m277begin_group_0(Emit.scala)
at __C165Compiled.__m249split_RunAgg(Emit.scala)
at __C165Compiled.apply(Emit.scala)
at is.hail.expr.ir.CompileAndEvaluate$.$anonfun$_apply$6(CompileAndEvaluate.scala:67)
at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:81)
at is.hail.expr.ir.CompileAndEvaluate$._apply(CompileAndEvaluate.scala:67)
at is.hail.expr.ir.CompileAndEvaluate$.evalToIR(CompileAndEvaluate.scala:29)
at is.hail.expr.ir.LowerOrInterpretNonCompilable$.evaluate$1(LowerOrInterpretNonCompilable.scala:29)
at is.hail.expr.ir.LowerOrInterpretNonCompilable$.rewrite$1(LowerOrInterpretNonCompilable.scala:66)
at is.hail.expr.ir.LowerOrInterpretNonCompilable$.rewrite$1(LowerOrInterpretNonCompilable.scala:52)
at is.hail.expr.ir.LowerOrInterpretNonCompilable$.apply(LowerOrInterpretNonCompilable.scala:71)
at is.hail.expr.ir.lowering.LowerOrInterpretNonCompilablePass$.transform(LoweringPass.scala:68)
at is.hail.expr.ir.lowering.LoweringPass.$anonfun$apply$3(LoweringPass.scala:15)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:81)
at is.hail.expr.ir.lowering.LoweringPass.$anonfun$apply$1(LoweringPass.scala:15)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:81)
at is.hail.expr.ir.lowering.LoweringPass.apply(LoweringPass.scala:13)
at is.hail.expr.ir.lowering.LoweringPass.apply$(LoweringPass.scala:12)
at is.hail.expr.ir.lowering.LowerOrInterpretNonCompilablePass$.apply(LoweringPass.scala:63)
at is.hail.expr.ir.lowering.LoweringPipeline.$anonfun$apply$1(LoweringPipeline.scala:14)
at is.hail.expr.ir.lowering.LoweringPipeline.$anonfun$apply$1$adapted(LoweringPipeline.scala:12)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
at is.hail.expr.ir.lowering.LoweringPipeline.apply(LoweringPipeline.scala:12)
at is.hail.expr.ir.CompileAndEvaluate$._apply(CompileAndEvaluate.scala:46)
at is.hail.backend.spark.SparkBackend._execute(SparkBackend.scala:381)
at is.hail.backend.spark.SparkBackend.$anonfun$execute$1(SparkBackend.scala:365)
at is.hail.expr.ir.ExecuteContext$.$anonfun$scoped$3(ExecuteContext.scala:47)
at is.hail.utils.package$.using(package.scala:638)
at is.hail.expr.ir.ExecuteContext$.$anonfun$scoped$2(ExecuteContext.scala:47)
at is.hail.utils.package$.using(package.scala:638)
at is.hail.annotations.RegionPool$.scoped(RegionPool.scala:17)
at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:46)
at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:275)
at is.hail.backend.spark.SparkBackend.execute(SparkBackend.scala:362)
at is.hail.backend.spark.SparkBackend.$anonfun$executeJSON$1(SparkBackend.scala:406)
at is.hail.utils.ExecutionTimer$.time(ExecutionTimer.scala:52)
at is.hail.backend.spark.SparkBackend.executeJSON(SparkBackend.scala:404)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Hail version: 0.2.77-684f32d73643
Error summary: SparkException: Job aborted due to stage failure: Task 5 in stage 0.0 failed 20 times, most recent failure: Lost task 5.19 in stage 0.0 (TID 166) (chr22-w-1.c.broad-mpg-gnomad.internal executor 31): ExecutorLostFailure (executor 31 exited caused by one of the running tasks) Reason: Container from a bad node: container_1633643382464_0006_01_000032 on host: chr22-w-1.c.broad-mpg-gnomad.internal. Exit status: 134. Diagnostics: [2021-10-07 23:24:09.819]Exception from container-launch.
Container id: container_1633643382464_0006_01_000032
Exit code: 134
[2021-10-07 23:24:09.821]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 1: 22773 Aborted /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java -server -Xmx12022m '-Xss4M' -Djava.io.tmpdir=/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/tmp '-Dspark.ui.port=0' '-Dspark.rpc.message.maxSize=512' '-Dspark.driver.port=44539' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@chr22-m.c.broad-mpg-gnomad.internal:44539 --executor-id 31 --hostname chr22-w-1.c.broad-mpg-gnomad.internal --cores 4 --app-id application_1633643382464_0006 --resourceProfileId 0 --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/__app__.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/hail-all-spark.jar > /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stdout 2> /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stderr
Last 4096 bytes of stderr :
[2021-10-07 23:24:09.821]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 1: 22773 Aborted /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java -server -Xmx12022m '-Xss4M' -Djava.io.tmpdir=/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/tmp '-Dspark.ui.port=0' '-Dspark.rpc.message.maxSize=512' '-Dspark.driver.port=44539' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@chr22-m.c.broad-mpg-gnomad.internal:44539 --executor-id 31 --hostname chr22-w-1.c.broad-mpg-gnomad.internal --cores 4 --app-id application_1633643382464_0006 --resourceProfileId 0 --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/__app__.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1633643382464_0006/container_1633643382464_0006_01_000032/hail-all-spark.jar > /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stdout 2> /var/log/hadoop-yarn/userlogs/application_1633643382464_0006/container_1633643382464_0006_01_000032/stderr
Last 4096 bytes of stderr :
.
Driver stacktrace:
ERROR: (gcloud.dataproc.jobs.submit.pyspark) Job [44ce72fd6f334daa949881b38b137d58] failed with error:
Google Cloud Dataproc Agent reports job failure. If logs are available, they can be found at:
https://console.cloud.google.com/dataproc/jobs/44ce72fd6f334daa949881b38b137d58?project=broad-mpg-gnomad&region=us-central1
gcloud dataproc jobs wait '44ce72fd6f334daa949881b38b137d58' --region 'us-central1' --project 'broad-mpg-gnomad'
https://console.cloud.google.com/storage/browser/dataproc-faa46220-ec08-4f5b-92bd-9722e1963047-us-central1/google-cloud-dataproc-metainfo/65f118cd-a330-4c0c-bb08-de0d1a27f33f/jobs/44ce72fd6f334daa949881b38b137d58/
gs://dataproc-faa46220-ec08-4f5b-92bd-9722e1963047-us-central1/google-cloud-dataproc-metainfo/65f118cd-a330-4c0c-bb08-de0d1a27f33f/jobs/44ce72fd6f334daa949881b38b137d58/driveroutput
Traceback (most recent call last):
File "/usr/local/bin/hailctl", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/hailtop/hailctl/__main__.py", line 100, in main
cli.main(args)
File "/usr/local/lib/python3.7/site-packages/hailtop/hailctl/dataproc/cli.py", line 122, in main
jmp[args.module].main(args, pass_through_args)
File "/usr/local/lib/python3.7/site-packages/hailtop/hailctl/dataproc/submit.py", line 78, in main
gcloud.run(cmd)
File "/usr/local/lib/python3.7/site-packages/hailtop/hailctl/dataproc/gcloud.py", line 9, in run
return subprocess.check_call(["gcloud"] + command)
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['gcloud', 'dataproc', 'jobs', 'submit', 'pyspark', 'PycharmProjects/gnomad_qc/gnomad_qc/v3/create_release/prepare_vcf_data_release.py', '--cluster=chr22', '--files=', '--py-files=/var/folders/sj/6gr3x1553r5f7tkzsjy99mgs64sz0g/T/pyscripts_eqffej89.zip', '--properties=', '--', '--validity_check']' returned non-zero exit status 1.
I ran this on a very similar HT earlier this year with Hail version 0.2.62-84fa81b9ea3d and the same cluster configuration, and it completed without error. Any thoughts?
The log file is too big to attach here, but I can email it.
Thank you!
-Julia