PCA job aborted with SparkException

Hello! I am following up on a question from a 3-year-old thread (Pca failed due to not enough executor memory), as I am having the same issue.

I am running a sufficiently powered Spark cluster in Terra and attempting to run a GWAS using Hail. The following command, which I am running on CHR22 data only (samples: 37,698; variants: 137,113),

eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)

is terminating with the following error:

Hail version: 0.2.39-ef87446bd1c7
Error summary: SparkException: Job aborted due to stage failure: Task 13 in stage 32.0 failed 4 times, most recent failure: Lost task 13.3 in stage 32.0 (TID 1081, saturn-62c1ecef-284b-4593-96ed-63bb90133cc2-w-4.c.ariel-research-and-development.internal, executor 70): ExecutorLostFailure (executor 70 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 6.0 GB of 6 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:

The suggestion from the old Hail thread linked above was to raise the Spark configuration spark.executor.cores to 4 (the default value was 2).

I was able to do so by passing the Spark configuration via my Hail init() call (I have not figured out another way to do this in Terra, though I was experimenting with feeding init() a pre-initialized SparkContext):

hl.init(spark_conf={'spark.executor.cores': '4'})

This seemed to work; I confirmed it by checking the configuration via hl.spark_context().
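For anyone else on Terra, here is how I checked, using the standard PySpark SparkConf API:

sc = hl.spark_context()
print(sc.getConf().get('spark.executor.cores'))  # should print '4'
# getAll() returns (key, value) pairs for every explicitly set property:
for key, value in sc.getConf().getAll():
    print(key, '=', value)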

I reran and got the same Hail error, although it got a bit further:

Hail version: 0.2.39-ef87446bd1c7
Error summary: SparkException: Job aborted due to stage failure: Task 116 in stage 9.0 failed 4 times, most recent failure: Lost task 116.3 in stage 9.0 (TID 1273, saturn-e7ed1e4b-3ad5-4fac-802c-8d7818997c01-w-21.c.ariel-research-and-development.internal, executor 174): ExecutorLostFailure (executor 174 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 6.5 GB of 6 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:

Next I tried to additionally increase spark.executor.memory:

hl.init(spark_conf={'spark.executor.cores': '4',
                    'spark.executor.memory': '8g'})

and I got:

Hail version: 0.2.39-ef87446bd1c7
Error summary: SparkException: Job aborted due to stage failure: Task 80 in stage 9.0 failed 4 times, most recent failure: Lost task 80.3 in stage 9.0 (TID 1078, saturn-e7ed1e4b-3ad5-4fac-802c-8d7818997c01-w-8.c.ariel-research-and-development.internal, executor 51): ExecutorLostFailure (executor 51 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 9.0 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:

Not sure how best to proceed, as I don't see a configuration called "memoryOverhead" when I run context.getConf().getAll(). I'm worried this issue will be exacerbated when I run over all contigs as well.
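If I am reading the Spark docs correctly, getAll() only lists properties that have been explicitly set, so an unset spark.yarn.executor.memoryOverhead would not appear (it defaults to 10% of executor memory, with a 384 MB floor). Presumably it can be set the same way as the other properties; the value below is just a guess:

# memoryOverhead is off-heap headroom that YARN counts against the container,
# on top of the executor heap. '2g' here is an illustrative value.
hl.init(spark_conf={'spark.executor.cores': '4',
                    'spark.executor.memory': '8g',
                    'spark.yarn.executor.memoryOverhead': '2g'})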

Any thoughts on how to fix this issue? I am a complete Spark newbie, so I apologize for any naivete.

Hi @acererak, I'm sorry you're having trouble with Hail's PCA! Unfortunately, PCA is currently a somewhat memory-intensive algorithm. We're hard at work on better approaches, but they're not ready yet.

As to addressing your current problem: when running memory-intensive routines like PCA, folks often use n1-highmem worker nodes instead of the normal n1-standard worker nodes. I'm not sure how to change this in Terra, but there should be an option somewhere for "preemptible worker type."

Unrelatedly, I think people use significantly fewer than 137,113 variants for PCA. IIRC, people usually use ~10k SNPs for the entire genome.

Hi @danking, thanks for the suggestion on more preemptible nodes.

I just tried rerunning. My specs are as follows:

I am using 75 worker nodes, 65 of which are preemptible. Each node has 26 GB of memory. I am also setting the number of executor cores to 4 in my Hail init:

hl.init(spark_conf={'spark.executor.cores': '4'})

The number of variants is high because I am doing an analysis on an imputed dataset. We chose Hail specifically for its ability to work with Spark and handle large datasets, so I'm hoping I can get this to work!

The command I am using is: eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)

2020-06-04 16:37:47 Hail: INFO: hwe_normalized_pca: running PCA using 139837 variants.

---------------------------------------------------------------------------
FatalError                                Traceback (most recent call last)
<ipython-input-21-9745ca02f93d> in <module>
      1 # Compute PCA
----> 2 eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)

<decorator-gen-1549> in hwe_normalized_pca(call_expr, k, compute_loadings)

/usr/local/lib/python3.7/dist-packages/hail/typecheck/check.py in wrapper(__original_func, *args, **kwargs)
    583     def wrapper(__original_func, *args, **kwargs):
    584         args_, kwargs_ = check_all(__original_func, args, kwargs, checkers, is_method=is_method)
--> 585         return __original_func(*args_, **kwargs_)
    586 
    587     return wrapper

/usr/local/lib/python3.7/dist-packages/hail/methods/statgen.py in hwe_normalized_pca(call_expr, k, compute_loadings)
   1443     return pca(normalized_gt,
   1444                k,
-> 1445                compute_loadings)
   1446 
   1447 

<decorator-gen-1551> in pca(entry_expr, k, compute_loadings)

/usr/local/lib/python3.7/dist-packages/hail/typecheck/check.py in wrapper(__original_func, *args, **kwargs)
    583     def wrapper(__original_func, *args, **kwargs):
    584         args_, kwargs_ = check_all(__original_func, args, kwargs, checkers, is_method=is_method)
--> 585         return __original_func(*args_, **kwargs_)
    586 
    587     return wrapper

/usr/local/lib/python3.7/dist-packages/hail/methods/statgen.py in pca(entry_expr, k, compute_loadings)
   1545         'entryField': field,
   1546         'k': k,
-> 1547         'computeLoadings': compute_loadings
   1548     })).persist())
   1549 

<decorator-gen-1099> in persist(self, storage_level)

/usr/local/lib/python3.7/dist-packages/hail/typecheck/check.py in wrapper(__original_func, *args, **kwargs)
    583     def wrapper(__original_func, *args, **kwargs):
    584         args_, kwargs_ = check_all(__original_func, args, kwargs, checkers, is_method=is_method)
--> 585         return __original_func(*args_, **kwargs_)
    586 
    587     return wrapper

/usr/local/lib/python3.7/dist-packages/hail/table.py in persist(self, storage_level)
   1778             Persisted table.
   1779         """
-> 1780         return Env.backend().persist_table(self, storage_level)
   1781 
   1782     def unpersist(self) -> 'Table':

/usr/local/lib/python3.7/dist-packages/hail/backend/spark_backend.py in persist_table(self, t, storage_level)
    288 
    289     def persist_table(self, t, storage_level):
--> 290         return Table._from_java(self._jbackend.pyPersistTable(storage_level, self._to_java_table_ir(t._tir)))
    291 
    292     def unpersist_table(self, t):

/usr/local/lib/python3.7/dist-packages/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/local/lib/python3.7/dist-packages/hail/backend/spark_backend.py in deco(*args, **kwargs)
     36             raise FatalError('%s\n\nJava stack trace:\n%s\n'
     37                              'Hail version: %s\n'
---> 38                              'Error summary: %s' % (deepest, full, hail.__version__, deepest)) from None
     39         except pyspark.sql.utils.CapturedException as e:
     40             raise FatalError('%s\n\nJava stack trace:\n%s\n'

FatalError: SparkException: Job aborted due to stage failure: Task 278 in stage 9.0 failed 4 times, most recent failure: Lost task 278.3 in stage 9.0 (TID 1406, saturn-9dcb4649-341e-4629-af9f-236daaf0efdf-w-59.c.ariel-research-and-development.internal, executor 226): ExecutorLostFailure (executor 226 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits.  10.3 GB of 10 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:

Java stack trace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 278 in stage 9.0 failed 4 times, most recent failure: Lost task 278.3 in stage 9.0 (TID 1406, saturn-9dcb4649-341e-4629-af9f-236daaf0efdf-w-59.c.ariel-research-and-development.internal, executor 226): ExecutorLostFailure (executor 226 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits.  10.3 GB of 10 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1890)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1877)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2111)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2060)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2049)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
	at is.hail.sparkextras.ContextRDD.collect(ContextRDD.scala:163)
	at is.hail.rvd.RVD.countPerPartition(RVD.scala:707)
	at is.hail.expr.ir.MatrixValue.toRowMatrix(MatrixValue.scala:241)
	at is.hail.methods.PCA.execute(PCA.scala:33)
	at is.hail.expr.ir.functions.WrappedMatrixToTableFunction.execute(RelationalFunctions.scala:49)
	at is.hail.expr.ir.TableToTableApply.execute(TableIR.scala:2169)
	at is.hail.expr.ir.Interpret$.apply(Interpret.scala:23)
	at is.hail.backend.spark.SparkBackend$$anonfun$pyPersistTable$1.apply(SparkBackend.scala:402)
	at is.hail.backend.spark.SparkBackend$$anonfun$pyPersistTable$1.apply(SparkBackend.scala:401)
	at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:19)
	at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:17)
	at is.hail.utils.package$.using(package.scala:600)
	at is.hail.annotations.Region$.scoped(Region.scala:18)
	at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:17)
	at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:229)
	at is.hail.backend.spark.SparkBackend.pyPersistTable(SparkBackend.scala:401)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)



Hail version: 0.2.39-ef87446bd1c7
Error summary: SparkException: Job aborted due to stage failure: Task 278 in stage 9.0 failed 4 times, most recent failure: Lost task 278.3 in stage 9.0 (TID 1406, saturn-9dcb4649-341e-4629-af9f-236daaf0efdf-w-59.c.ariel-research-and-development.internal, executor 226): ExecutorLostFailure (executor 226 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits.  10.3 GB of 10 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead or disabling yarn.nodemanager.vmem-check-enabled because of YARN-4714.
Driver stacktrace:



For PCA, the imputed variants tell you nothing that the genotyped variants don't. I strongly suggest you consider LD pruning before you perform PCA (or not using any imputed variants at all). Because of this, we haven't designed PCA in a particularly scalable way, and fixing that, although on our roadmap, is not a high priority.
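Roughly, that workflow would look like the sketch below; the r2 and window values are illustrative, not a recommendation:

# LD-prune the calls, then restrict to the pruned variants before PCA.
pruned_variants = hl.ld_prune(mt.GT, r2=0.2, bp_window_size=500000)
mt_pruned = mt.filter_rows(hl.is_defined(pruned_variants[mt.row_key]))
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt_pruned.GT)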

Your machines should have 26 GB each, and each executor should use 4 cores. I do not know why YARN is limiting you to 10 GB. You might try setting spark.executor.memory to 26g.
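In init terms, that would be something like the following; note that YARN counts heap plus memory overhead against the container, so you may need to back off a bit from the full 26 GB:

hl.init(spark_conf={'spark.executor.cores': '4',
                    'spark.executor.memory': '26g'})  # or a bit less, leaving room for overhead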

Great suggestion, @danking! I will use the non-imputed data for PCA and then use those components as covariates for the GWAS on the imputed data.

Is there a way to use import_bed() in a similar manner to import_bgen(), where "path" can be a list of str? My imputed data is in BGEN/SAMPLE format, but my regular data is BED/BIM/FAM.

import_bed does not support a list of paths. You'll have to use Table.union, like so:

import hail as hl
paths = [...]
hl.import_bed(paths[0]).union(
    *[hl.import_bed(path) for path in paths[1:]])

This is throwing a MalformedInputException for me.

Even if I run chr1 = hl.import_bed(gen_folder + '/ukb_cal_chr1_v2.bed'), I get:

FatalError: MalformedInputException: Input length = 1

Java stack trace:
java.nio.charset.MalformedInputException: Input length = 1
at java.nio.charset.CoderResult.throwException(CoderResult.java:281)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:339)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:72)
at scala.collection.Iterator$$anon$21.hasNext(Iterator.scala:834)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
at scala.collection.Iterator$class.isEmpty(Iterator.scala:331)
at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1334)
at is.hail.expr.ir.TextTableReader$$anonfun$14.apply(TextTableReader.scala:253)
at is.hail.expr.ir.TextTableReader$$anonfun$14.apply(TextTableReader.scala:250)
at is.hail.io.fs.FS$$anonfun$readLines$1.apply(FS.scala:167)
at is.hail.io.fs.FS$$anonfun$readLines$1.apply(FS.scala:158)
at is.hail.utils.package$.using(package.scala:600)
at is.hail.io.fs.FS$class.readLines(FS.scala:157)
at is.hail.io.fs.HadoopFS.readLines(HadoopFS.scala:57)
at is.hail.expr.ir.TextTableReader$.readMetadata1(TextTableReader.scala:250)
at is.hail.expr.ir.TextTableReader$$anonfun$readMetadata$1.apply(TextTableReader.scala:225)
at is.hail.expr.ir.TextTableReader$$anonfun$readMetadata$1.apply(TextTableReader.scala:225)
at is.hail.HailContext$.maybeGZipAsBGZip(HailContext.scala:411)
at is.hail.expr.ir.TextTableReader$.readMetadata(TextTableReader.scala:224)
at is.hail.expr.ir.TextTableReader$.apply(TextTableReader.scala:343)
at is.hail.expr.ir.TextTableReader$.fromJValue(TextTableReader.scala:350)
at is.hail.expr.ir.TableReader$.fromJValue(TableIR.scala:101)
at is.hail.expr.ir.IRParser$.table_ir_1(Parser.scala:1238)
at is.hail.expr.ir.IRParser$.table_ir(Parser.scala:1214)
at is.hail.expr.ir.IRParser$$anonfun$parse_table_ir$1.apply(Parser.scala:1682)
at is.hail.expr.ir.IRParser$$anonfun$parse_table_ir$1.apply(Parser.scala:1682)
at is.hail.expr.ir.IRParser$.parse(Parser.scala:1671)
at is.hail.expr.ir.IRParser$.parse_table_ir(Parser.scala:1682)
at is.hail.backend.spark.SparkBackend$$anonfun$parse_table_ir$1.apply(SparkBackend.scala:512)
at is.hail.backend.spark.SparkBackend$$anonfun$parse_table_ir$1.apply(SparkBackend.scala:511)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:19)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:17)
at is.hail.utils.package$.using(package.scala:600)
at is.hail.annotations.Region$.scoped(Region.scala:18)
at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:17)
at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:229)
at is.hail.backend.spark.SparkBackend.parse_table_ir(SparkBackend.scala:511)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)

Hail version: 0.2.39-ef87446bd1c7
Error summary: MalformedInputException: Input length = 1

Are you sure that file is a valid UTF-8-encoded BED file? If you open it in less, does it have any unusual characters? If you have words or names using letters outside the Latin alphabet, make sure the file is encoded in UTF-8. You may find this Super User thread helpful.

This is the PLINK-format BED, not UCSC BED, so it is a binary file and not human-readable via less.
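A quick way to tell them apart: PLINK binary .bed files begin with the magic bytes 0x6c 0x1b, while UCSC BED files are plain text. A sketch, assuming a locally accessible copy of the file (the path is hypothetical):

# Sniff the first bytes to distinguish the two formats.
with open('ukb_cal_chr1_v2.bed', 'rb') as f:
    magic = f.read(3)
if magic[:2] == b'\x6c\x1b':
    print('PLINK binary .bed (third byte 0x01 means variant-major)')
else:
    print('not PLINK binary; possibly UCSC text BED')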

Ahhh, I see, I should be using import_plink(). I didn't read far enough down the Import/Export page.

import_plink() worked great, sorry I didn't see it earlier.

Import all autosome BED data:

gen = hl.import_plink(bed[0], bim[0], fam[0],
                      reference_genome='GRCh37',
                      contig_recoding=contig_dict)
for i in range(1, len(bed)):
    gen = gen.union_rows(hl.import_plink(bed[i], bim[i], fam[i],
                                         reference_genome='GRCh37',
                                         contig_recoding=contig_dict))

One last snag in the code...

After creating one big MatrixTable of PLINK data, I get an EOFException any time I try to query the number of rows in the table. I am otherwise able to query the number of columns and describe the MatrixTable.

print('Variants: %d' % gen.count_rows())

Hail version: 0.2.39-ef87446bd1c7
Error summary: EOFException: Invalid seek offset: position value (-2147438338) must be >= 0 for ā€˜gs://{folder_name}/genotype_data/ukb_cal_chr1_v2.bedā€™

Some more details: I cleared my notebook and started fresh, creating the MatrixTable a different way.

First I created a list, all_datasets, of the per-chromosome MatrixTables, then ran:
gen = hl.MatrixTable.union_rows(*all_datasets)

I still get the EOFException when I attempt to count rows. It causes any row-based operation, like PCA, to crash.

Hey @acererak, sorry you're still having trouble.

In general, it is difficult to resolve issues without the full stack trace, so don't be afraid to post the whole thing. If you post it between a pair of lines that contain three back-ticks (```) and nothing else, it will be displayed in a truncated, readable way.

It looks like there's something wrong with either the path to the file or the file itself. Do these files load correctly in plink? Can you identify one file that causes this problem? Is that the exact error message, or did you modify it? If it is the exact error message, I suspect you're missing an f on an f-string somewhere, because you have gs://{folder_name} in your path.
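That is, something like this, where folder_name stands in for your real bucket name:

folder_name = 'my-bucket'  # hypothetical
good = f'gs://{folder_name}/genotype_data/ukb_cal_chr1_v2.bed'  # braces interpolated
bad = 'gs://{folder_name}/genotype_data/ukb_cal_chr1_v2.bed'    # literal braces: no such object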

Hi @danking, thanks for taking the time to help sort out these issues, I appreciate it! I did modify the error message for anonymity, so unfortunately it is not as simple as an f-string error. These files do load correctly in PLINK. I have no such issue with the imputed dataset, just this regular genotype dataset.

Here is the code I ran to create my MatrixTable (I tried to do this multiple ways to see if that helped):

bed = [gen_folder + f'/ukb_cal_chr{contig}_v2.bed' for contig in range(1, 23)]
bim = [gen_folder + f'/ukb_cal_chr{contig}_v2.bim' for contig in range(1, 23)]
fam = [gen_folder + f'/ukb48065_cal_chr{contig}_v2_s488264.fam' for contig in range(1, 23)]
gen = hl.import_plink(bed[0], bim[0], fam[0],
                      reference_genome='GRCh37',
                      contig_recoding=contig_dict)
for i in range(1, len(bed)):
    gen = gen.union_rows(hl.import_plink(bed[i], bim[i], fam[i],
                                         reference_genome='GRCh37',
                                         contig_recoding=contig_dict))

Sorry, my message got cut off early. Anyway, that runs great. But the second I try to do gen.count_rows(), I get the following error message:

---------------------------------------------------------------------------
FatalError                                Traceback (most recent call last)
<ipython-input-17-3821437a4713> in <module>
----> 1 print('Variants: %d' % gen.count_rows())

<decorator-gen-1221> in count_rows(self, _localize)

/usr/local/lib/python3.7/dist-packages/hail/typecheck/check.py in wrapper(__original_func, *args, **kwargs)
    583     def wrapper(__original_func, *args, **kwargs):
    584         args_, kwargs_ = check_all(__original_func, args, kwargs, checkers, is_method=is_method)
--> 585         return __original_func(*args_, **kwargs_)
    586 
    587     return wrapper

/usr/local/lib/python3.7/dist-packages/hail/matrixtable.py in count_rows(self, _localize)
   2380         ir = TableCount(MatrixRowsTable(self._mir))
   2381         if _localize:
-> 2382             return Env.backend().execute(ir)
   2383         else:
   2384             return construct_expr(LiftMeOut(ir), hl.tint64)

/usr/local/lib/python3.7/dist-packages/hail/backend/spark_backend.py in execute(self, ir, timed)
    269     def execute(self, ir, timed=False):
    270         jir = self._to_java_value_ir(ir)
--> 271         result = json.loads(self._jhc.backend().executeJSON(jir))
    272         value = ir.typ._from_json(result['value'])
    273         timings = result['timings']

/usr/local/lib/python3.7/dist-packages/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/local/lib/python3.7/dist-packages/hail/backend/spark_backend.py in deco(*args, **kwargs)
     36             raise FatalError('%s\n\nJava stack trace:\n%s\n'
     37                              'Hail version: %s\n'
---> 38                              'Error summary: %s' % (deepest, full, hail.__version__, deepest)) from None
     39         except pyspark.sql.utils.CapturedException as e:
     40             raise FatalError('%s\n\nJava stack trace:\n%s\n'

FatalError: EOFException: Invalid seek offset: position value (-2147438338) must be >= 0 for 'gs://my-datafolder/genotype_data/ukb_cal_chr1_v2.bed'

Java stack trace:
java.lang.RuntimeException: error while applying lowering 'InterpretNonCompilable'
	at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:26)
	at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:18)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at is.hail.expr.ir.lowering.LoweringPipeline.apply(LoweringPipeline.scala:18)
	at is.hail.expr.ir.CompileAndEvaluate$._apply(CompileAndEvaluate.scala:28)
	at is.hail.backend.spark.SparkBackend.is$hail$backend$spark$SparkBackend$$_execute(SparkBackend.scala:317)
	at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:304)
	at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:303)
	at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:19)
	at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:17)
	at is.hail.utils.package$.using(package.scala:600)
	at is.hail.annotations.Region$.scoped(Region.scala:18)
	at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:17)
	at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:229)
	at is.hail.backend.spark.SparkBackend.execute(SparkBackend.scala:303)
	at is.hail.backend.spark.SparkBackend.executeJSON(SparkBackend.scala:323)
	at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, saturn-de77acfa-5905-4dc8-a7f2-6154b0091d20-w-44.c.company-research-and-development.internal, executor 1): java.io.EOFException: Invalid seek offset: position value (-2147438338) must be >= 0 for 'gs://my-datafolder/genotype_data/ukb_cal_chr1_v2.bed'
	at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageReadChannel.validatePosition(GoogleCloudStorageReadChannel.java:703)
	at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageReadChannel.position(GoogleCloudStorageReadChannel.java:597)
	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream.seek(GoogleHadoopFSInputStream.java:198)
	at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:65)
	at is.hail.io.fs.HadoopFS$$anon$2.seek(HadoopFS.scala:51)
	at is.hail.io.fs.WrappedSeekableDataInputStream.seek(FS.scala:26)
	at is.hail.io.plink.MatrixPLINKReader$$anonfun$8$$anonfun$apply$6$$anonfun$apply$8.apply(LoadPlink.scala:366)
	at is.hail.io.plink.MatrixPLINKReader$$anonfun$8$$anonfun$apply$6$$anonfun$apply$8.apply(LoadPlink.scala:359)
	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at is.hail.rvd.RVD$$anonfun$count$2.apply(RVD.scala:693)
	at is.hail.rvd.RVD$$anonfun$count$2.apply(RVD.scala:691)
	at is.hail.sparkextras.ContextRDD$$anonfun$cmapPartitions$1$$anonfun$apply$9.apply(ContextRDD.scala:205)
	at is.hail.sparkextras.ContextRDD$$anonfun$cmapPartitions$1$$anonfun$apply$9.apply(ContextRDD.scala:205)
	at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anonfun$1.apply(RichContextRDD.scala:22)
	at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anonfun$1.apply(RichContextRDD.scala:22)
	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
	at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anon$1.hasNext(RichContextRDD.scala:31)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
	at scala.collection.TraversableOnce$class.fold(TraversableOnce.scala:212)
	at scala.collection.AbstractIterator.fold(Iterator.scala:1334)
	at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$20.apply(RDD.scala:1096)
	at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$20.apply(RDD.scala:1096)
	at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:2157)
	at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:2157)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1890)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1877)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2111)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2060)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2049)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2158)
	at org.apache.spark.rdd.RDD$$anonfun$fold$1.apply(RDD.scala:1098)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
	at org.apache.spark.rdd.RDD.fold(RDD.scala:1092)
	at is.hail.rvd.RVD.count(RVD.scala:698)
	at is.hail.expr.ir.Interpret$$anonfun$run$1.apply$mcJ$sp(Interpret.scala:637)
	at is.hail.expr.ir.Interpret$$anonfun$run$1.apply(Interpret.scala:637)
	at is.hail.expr.ir.Interpret$$anonfun$run$1.apply(Interpret.scala:637)
	at scala.Option.getOrElse(Option.scala:121)
	at is.hail.expr.ir.Interpret$.run(Interpret.scala:637)
	at is.hail.expr.ir.Interpret$.alreadyLowered(Interpret.scala:53)
	at is.hail.expr.ir.InterpretNonCompilable$.interpretAndCoerce$1(InterpretNonCompilable.scala:16)
	at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:53)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at is.hail.expr.ir.InterpretNonCompilable$.rewriteChildren$1(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:54)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at is.hail.expr.ir.InterpretNonCompilable$.rewriteChildren$1(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:54)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at is.hail.expr.ir.InterpretNonCompilable$.rewriteChildren$1(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:54)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at is.hail.expr.ir.InterpretNonCompilable$.rewriteChildren$1(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:54)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$$anonfun$1.apply(InterpretNonCompilable.scala:25)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at is.hail.expr.ir.InterpretNonCompilable$.rewriteChildren$1(InterpretNonCompilable.scala:25)
	at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:54)
	at is.hail.expr.ir.InterpretNonCompilable$.apply(InterpretNonCompilable.scala:58)
	at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.transform(LoweringPass.scala:50)
	at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
	at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
	at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
	at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:15)
	at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:13)
	at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
	at is.hail.expr.ir.lowering.LoweringPass$class.apply(LoweringPass.scala:13)
	at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.apply(LoweringPass.scala:45)
	at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:20)
	at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:18)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at is.hail.expr.ir.lowering.LoweringPipeline.apply(LoweringPipeline.scala:18)
	at is.hail.expr.ir.CompileAndEvaluate$._apply(CompileAndEvaluate.scala:28)
	at is.hail.backend.spark.SparkBackend.is$hail$backend$spark$SparkBackend$$_execute(SparkBackend.scala:317)
	at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:304)
	at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:303)
	at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:19)
	at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:17)
	at is.hail.utils.package$.using(package.scala:600)
	at is.hail.annotations.Region$.scoped(Region.scala:18)
	at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:17)
	at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:229)
	at is.hail.backend.spark.SparkBackend.execute(SparkBackend.scala:303)
	at is.hail.backend.spark.SparkBackend.executeJSON(SparkBackend.scala:323)
	at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)

java.io.EOFException: Invalid seek offset: position value (-2147438338) must be >= 0 for 'gs://my-datafolder/genotype_data/ukb_cal_chr1_v2.bed'
	at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageReadChannel.validatePosition(GoogleCloudStorageReadChannel.java:703)
	at com.google.cloud.hadoop.repackaged.gcs.com.google.cloud.hadoop.gcsio.GoogleCloudStorageReadChannel.position(GoogleCloudStorageReadChannel.java:597)
	at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream.seek(GoogleHadoopFSInputStream.java:198)
	at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:65)
	at is.hail.io.fs.HadoopFS$$anon$2.seek(HadoopFS.scala:51)
	at is.hail.io.fs.WrappedSeekableDataInputStream.seek(FS.scala:26)
	at is.hail.io.plink.MatrixPLINKReader$$anonfun$8$$anonfun$apply$6$$anonfun$apply$8.apply(LoadPlink.scala:366)
	at is.hail.io.plink.MatrixPLINKReader$$anonfun$8$$anonfun$apply$6$$anonfun$apply$8.apply(LoadPlink.scala:359)
	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at is.hail.rvd.RVD$$anonfun$count$2.apply(RVD.scala:693)
	at is.hail.rvd.RVD$$anonfun$count$2.apply(RVD.scala:691)
	at is.hail.sparkextras.ContextRDD$$anonfun$cmapPartitions$1$$anonfun$apply$9.apply(ContextRDD.scala:205)
	at is.hail.sparkextras.ContextRDD$$anonfun$cmapPartitions$1$$anonfun$apply$9.apply(ContextRDD.scala:205)
	at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anonfun$1.apply(RichContextRDD.scala:22)
	at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anonfun$1.apply(RichContextRDD.scala:22)
	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
	at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anon$1.hasNext(RichContextRDD.scala:31)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
	at scala.collection.TraversableOnce$class.fold(TraversableOnce.scala:212)
	at scala.collection.AbstractIterator.fold(Iterator.scala:1334)
	at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$20.apply(RDD.scala:1096)
	at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$20.apply(RDD.scala:1096)
	at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:2157)
	at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:2157)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)





Hail version: 0.2.39-ef87446bd1c7
Error summary: EOFException: Invalid seek offset: position value (-2147438338) must be >= 0 for 'gs://my-datafolder/genotype_data/ukb_cal_chr1_v2.bed'

And if I reconstruct the table without the chr1 BED file, the last line changes to:

Hail version: 0.2.39-ef87446bd1c7
Error summary: EOFException: Invalid seek offset: position value (-2147438338) must be >= 0 for 'gs://my-dataset/genotype_data/ukb_cal_chr2_v2.bed'
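For what it's worth, that negative offset looks like it could be a 32-bit integer overflow: if the reported position wrapped around at 2^32, the intended seek offset would be just past the signed 32-bit limit, consistent with these multi-GB .bed files. A quick back-of-the-envelope check:

# Hypothesis only: undo a 32-bit wraparound of the reported position.
reported = -2147438338
unwrapped = reported + 2**32      # 2147528958
print(unwrapped > 2**31 - 1)      # True: just past the int32 maximum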

I've asked someone with more experience with import_plink to take a look at this. Sorry for the delay.

Thanks! I still haven't been able to figure this one out. Maybe it's actually a bug?

Definitely a bug. Can fix this afternoon.
