EOFException Error in `count_rows`

Hi,
I’m super new to Hail. I used the `import_plink` function to read .bed, .bim, and .fam files into a MatrixTable, and I’m trying to understand what might be causing an error when I query the number of rows with `count_rows`; `count_cols` works fine.
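For context, this is roughly what I’m running (the paths below are placeholders for the real file locations):

```python
import hail as hl

hl.init()  # Spark-backed Hail session on the Databricks cluster

# Placeholder paths standing in for the real files.
mt = hl.import_plink(bed='s3://my-bucket/data.bed',
                     bim='s3://my-bucket/data.bim',
                     fam='s3://my-bucket/data.fam')

mt.count_cols()   # returns fine
mt.count_rows()   # raises the EOFException below
```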

Figured I’d ask here in case someone has run into something like this.

Thanks!

This is the error:
FatalError: EOFException: Attempted negative seek -2147453821

`mtx.describe()` output:


Global fields:
    None

Column fields:
    's': str
    'fam_id': str
    'pat_id': str
    'mat_id': str
    'is_female': bool
    'is_case': bool

Row fields:
    'locus': locus<GRCh37>
    'alleles': array<str>
    'rsid': str
    'cm_position': float64

Entry fields:
    'GT': call

Column key: ['s']
Row key: ['locus', 'alleles']

Hi @sarangi0607, sorry you’re running into this problem. Can you share the log file?

Hi @danking
Thanks for the prompt response. I should have mentioned that I’m running Hail on Databricks. Here’s the full traceback:

FatalError Traceback (most recent call last)
in <module>
      1 #dl_filt = dl_result.filter_rows(dl_result.cm_position==123456, keep=True)
----> 2 dl_filt.count_rows()

</databricks/python/lib/python3.7/site-packages/decorator.py:decorator-gen-1214> in count_rows(self, _localize)

/databricks/python/lib/python3.7/site-packages/hail/typecheck/check.py in wrapper(__original_func, *args, **kwargs)
    583     def wrapper(__original_func, *args, **kwargs):
    584         args_, kwargs_ = check_all(__original_func, args, kwargs, checkers, is_method=is_method)
--> 585         return __original_func(*args_, **kwargs_)
    586
    587     return wrapper

/databricks/python/lib/python3.7/site-packages/hail/matrixtable.py in count_rows(self, _localize)
   2383         count_ir = ir.TableCount(ir.MatrixRowsTable(self._mir))
   2384         if _localize:
-> 2385             return Env.backend().execute(count_ir)
   2386         else:
   2387             return construct_expr(ir.LiftMeOut(count_ir), hl.tint64)

/databricks/python/lib/python3.7/site-packages/hail/backend/spark_backend.py in execute(self, ir, timed)
    269     def execute(self, ir, timed=False):
    270         jir = self._to_java_value_ir(ir)
--> 271         result = json.loads(self._jhc.backend().executeJSON(jir))
    272         value = ir.typ._from_json(result['value'])
    273         timings = result['timings']

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258
   1259         for temp_arg in temp_args:

/databricks/python/lib/python3.7/site-packages/hail/backend/spark_backend.py in deco(*args, **kwargs)
     36             raise FatalError('%s\n\nJava stack trace:\n%s\n'
     37                              'Hail version: %s\n'
---> 38                              'Error summary: %s' % (deepest, full, hail.__version__, deepest)) from None
     39         except pyspark.sql.utils.CapturedException as e:
     40             raise FatalError('%s\n\nJava stack trace:\n%s\n'

FatalError: EOFException: Attempted negative seek -2147453821

Java stack trace:
java.lang.RuntimeException: error while applying lowering 'InterpretNonCompilable'
at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:26)
at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:18)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at is.hail.expr.ir.lowering.LoweringPipeline.apply(LoweringPipeline.scala:18)
at is.hail.expr.ir.CompileAndEvaluate$._apply(CompileAndEvaluate.scala:28)
at is.hail.backend.spark.SparkBackend.is$hail$backend$spark$SparkBackend$$_execute(SparkBackend.scala:317)
at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:304)
at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:303)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:19)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:17)
at is.hail.utils.package$.using(package.scala:600)
at is.hail.annotations.Region$.scoped(Region.scala:18)
at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:17)
at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:229)
at is.hail.backend.spark.SparkBackend.execute(SparkBackend.scala:303)
at is.hail.backend.spark.SparkBackend.executeJSON(SparkBackend.scala:323)
at sun.reflect.GeneratedMethodAccessor358.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 38.0 failed 4 times, most recent failure: Lost task 0.3 in stage 38.0 (TID 372, 10.40.249.70, executor 1): java.io.EOFException: Attempted negative seek -2147453821
at com.databricks.s3a.S3AInputStream.seek(S3AInputStream.java:124)
at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:62)
at is.hail.io.fs.HadoopFS$$anon$2.seek(HadoopFS.scala:51)
at is.hail.io.fs.WrappedSeekableDataInputStream.seek(FS.scala:26)
at is.hail.io.plink.MatrixPLINKReader$$anonfun$8$$anonfun$apply$6$$anonfun$apply$8.apply(LoadPlink.scala:370)
at is.hail.io.plink.MatrixPLINKReader$$anonfun$8$$anonfun$apply$6$$anonfun$apply$8.apply(LoadPlink.scala:363)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at is.hail.rvd.RVD$$anonfun$count$2.apply(RVD.scala:694)
at is.hail.rvd.RVD$$anonfun$count$2.apply(RVD.scala:692)
at is.hail.sparkextras.ContextRDD$$anonfun$cmapPartitions$1$$anonfun$apply$9.apply(ContextRDD.scala:205)
at is.hail.sparkextras.ContextRDD$$anonfun$cmapPartitions$1$$anonfun$apply$9.apply(ContextRDD.scala:205)
at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anonfun$1.apply(RichContextRDD.scala:22)
at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anonfun$1.apply(RichContextRDD.scala:22)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anon$1.hasNext(RichContextRDD.scala:31)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.fold(TraversableOnce.scala:212)
at scala.collection.AbstractIterator.fold(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$22.apply(RDD.scala:1148)
at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$22.apply(RDD.scala:1148)
at org.apache.spark.SparkContext$$anonfun$41.apply(SparkContext.scala:2377)
at org.apache.spark.SparkContext$$anonfun$41.apply(SparkContext.scala:2377)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:140)
at org.apache.spark.scheduler.Task.run(Task.scala:113)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$17.apply(Executor.scala:606)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1541)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:612)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2362)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2350)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2349)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2349)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1102)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1102)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1102)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2582)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2529)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2517)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:897)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2280)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2378)
at org.apache.spark.rdd.RDD$$anonfun$fold$1.apply(RDD.scala:1150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:392)
at org.apache.spark.rdd.RDD.fold(RDD.scala:1144)
at is.hail.rvd.RVD.count(RVD.scala:699)
at is.hail.expr.ir.Interpret$$anonfun$run$1.apply$mcJ$sp(Interpret.scala:637)
at is.hail.expr.ir.Interpret$$anonfun$run$1.apply(Interpret.scala:637)
at is.hail.expr.ir.Interpret$$anonfun$run$1.apply(Interpret.scala:637)
at scala.Option.getOrElse(Option.scala:121)
at is.hail.expr.ir.Interpret$.run(Interpret.scala:637)
at is.hail.expr.ir.Interpret$.alreadyLowered(Interpret.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.interpretAndCoerce$1(InterpretNonCompilable.scala:16)
at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.apply(InterpretNonCompilable.scala:58)
at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.transform(LoweringPass.scala:50)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:15)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:13)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
at is.hail.expr.ir.lowering.LoweringPass$class.apply(LoweringPass.scala:13)
at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.apply(LoweringPass.scala:45)
at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:20)
at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:18)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at is.hail.expr.ir.lowering.LoweringPipeline.apply(LoweringPipeline.scala:18)
at is.hail.expr.ir.CompileAndEvaluate$._apply(CompileAndEvaluate.scala:28)
at is.hail.backend.spark.SparkBackend.is$hail$backend$spark$SparkBackend$$_execute(SparkBackend.scala:317)
at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:304)
at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:303)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:19)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:17)
at is.hail.utils.package$.using(package.scala:600)
at is.hail.annotations.Region$.scoped(Region.scala:18)
at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:17)
at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:229)
at is.hail.backend.spark.SparkBackend.execute(SparkBackend.scala:303)
at is.hail.backend.spark.SparkBackend.executeJSON(SparkBackend.scala:323)
at sun.reflect.GeneratedMethodAccessor358.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)

java.io.EOFException: Attempted negative seek -2147453821
at com.databricks.s3a.S3AInputStream.seek(S3AInputStream.java:124)
at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:62)
at is.hail.io.fs.HadoopFS$$anon$2.seek(HadoopFS.scala:51)
at is.hail.io.fs.WrappedSeekableDataInputStream.seek(FS.scala:26)
at is.hail.io.plink.MatrixPLINKReader$$anonfun$8$$anonfun$apply$6$$anonfun$apply$8.apply(LoadPlink.scala:370)
at is.hail.io.plink.MatrixPLINKReader$$anonfun$8$$anonfun$apply$6$$anonfun$apply$8.apply(LoadPlink.scala:363)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at is.hail.rvd.RVD$$anonfun$count$2.apply(RVD.scala:694)
at is.hail.rvd.RVD$$anonfun$count$2.apply(RVD.scala:692)
at is.hail.sparkextras.ContextRDD$$anonfun$cmapPartitions$1$$anonfun$apply$9.apply(ContextRDD.scala:205)
at is.hail.sparkextras.ContextRDD$$anonfun$cmapPartitions$1$$anonfun$apply$9.apply(ContextRDD.scala:205)
at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anonfun$1.apply(RichContextRDD.scala:22)
at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anonfun$1.apply(RichContextRDD.scala:22)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anon$1.hasNext(RichContextRDD.scala:31)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.fold(TraversableOnce.scala:212)
at scala.collection.AbstractIterator.fold(Iterator.scala:1334)
at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$22.apply(RDD.scala:1148)
at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$22.apply(RDD.scala:1148)
at org.apache.spark.SparkContext$$anonfun$41.apply(SparkContext.scala:2377)
at org.apache.spark.SparkContext$$anonfun$41.apply(SparkContext.scala:2377)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:140)
at org.apache.spark.scheduler.Task.run(Task.scala:113)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$17.apply(Executor.scala:606)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1541)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:612)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Hail version: 0.2.40-216e3cc7271c
Error summary: EOFException: Attempted negative seek -2147453821

I’m 99% sure this is fixed in a more recent version of Hail. Can you request that Databricks update the Hail version?
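For what it’s worth, the offset in the error message looks like a 32-bit integer overflow in the old PLINK reader: the requested position in the .bed file appears to be just past 2^31 bytes (~2 GiB) and wraps negative. A back-of-the-envelope check (my reading of the number, not a confirmed diagnosis):

```python
# If a byte offset just over 2 GiB wrapped a signed 32-bit int,
# we'd see exactly the reported negative seek value.
reported = -2147453821
true_offset = reported % 2**32          # 2147513475 bytes, ~2 GiB into the .bed
print(true_offset - 2**31)              # 29827 bytes past Int.MaxValue
assert true_offset - 2**32 == reported  # the wrap-around reproduces the error
```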

Databricks’ ‘Runtime for Genomics’ ships Hail 0.2.40. I’m not sure whether they can upgrade it, but I will check.
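For reference, this is how I confirmed which version the cluster is actually running:

```python
import hail as hl
print(hl.version())  # '0.2.40-216e3cc7271c' on this cluster
```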

Thanks