Error running Liftover of UK Biobank

The following error occurs when trying to lift over chr22 of the UK Biobank GWAS data from GRCh37 to GRCh38. Any suggestions for this?

The code used is:

import sys
import hail as hl

chr = sys.argv[1]  # chromosome passed on the command line, e.g. 22

bgen = "/project/ukbiobank/imp/uk.v3/bgen/ukb_imp_chr" + chr + "_v3.bgen"
sample = "/project/ukbiobank/imp/uk.v3/bgen/ukb19416_imp_chr" + chr + "_v3_s487327.sample"
# hl.index_bgen(bgen)  # only needs to be run once per BGEN file
mt = hl.import_bgen(bgen, sample_file=sample, entry_fields=['GT', 'GP', 'dosage'])
print(mt.describe())  # describe() prints directly and returns None, hence the stray "None" lines in the output below

# set up the GRCh37 -> GRCh38 liftover using the UCSC chain file
rg37 = hl.get_reference('GRCh37')
rg38 = hl.get_reference('GRCh38')
rg37.add_liftover('file:///restricted/projectnb/ukbiobank/ad/analysis/liftover/grch37_to_grch38.over.chain.gz', rg38)

# lift over each locus, drop variants that fail to lift, and re-key by the new locus
mt = mt.annotate_rows(new_locus=hl.liftover(mt.locus, 'GRCh38'), old_locus=mt.locus)
mt = mt.filter_rows(hl.is_defined(mt.new_locus))
print(mt.describe())
mt = mt.key_rows_by(locus=mt.new_locus, alleles=mt.alleles)
print(mt.describe())
hl.export_vcf(mt, "/project/ukbiobank/imp/uk.v3.GRCh38/uk.v3.r38.chr" + chr + ".vcf.bgz")

Version 0.2.19-c6ec8b76eb26
LOGGING: writing to /restricted/projectnb/ukbiobank/ad/analysis/liftover/hail-20200214-1434-0.2.19-c6ec8b76eb26.log
2020-02-14 14:35:19 Hail: INFO: Number of BGEN files parsed: 1
2020-02-14 14:35:19 Hail: INFO: Number of samples in BGEN files: 487409
2020-02-14 14:35:19 Hail: INFO: Number of variants across all BGEN files: 1255683

----------------------------------------
Global fields:
    None
----------------------------------------
Column fields:
    's': str
----------------------------------------
Row fields:
    'locus': locus
    'alleles': array<str>
    'rsid': str
    'varid': str
    'new_locus': locus<GRCh38>
    'old_locus': locus<GRCh37>
----------------------------------------
Entry fields:
    'GT': call
    'GP': array<float64>
    'dosage': float64
----------------------------------------
Column key: ['s']
Row key: ['locus', 'alleles']
----------------------------------------
None

2020-02-14 14:35:22 Hail: WARN: export_vcf: ignored the following fields:
'varid' (row)
'new_locus' (row)
'old_locus' (row)
[Stage 0:======================================================>(292 + 1) / 293]
2020-02-14 14:38:52 Hail: INFO: Ordering unsorted dataset with network shuffle
2020-02-14 14:38:52 Hail: WARN: export_vcf found no row field 'info'. Emitting no INFO fields.
[Stage 2:> (0 + 168) / 279][Stage 2:> (14 + 1) / 293]
Traceback (most recent call last):
File "/restricted/projectnb/ukbiobank/ad/analysis/liftover/liftover.py", line 29, in <module>
hl.export_vcf(mt,"/project/ukbiobank/imp/uk.v3.GRCh38/uk.v3.r38.chr"+chr+".vcf.bgz")
File "</share/pkg.7/hail/0.2.19/install/python/lib/python3.6/site-packages/decorator.py:decorator-gen-1289>", line 2, in export_vcf
File "/share/pkg.7/hail/0.2.19/install/hail/hail/build/distributions/hail-python.zip/hail/typecheck/check.py", line 585, in wrapper
File "/share/pkg.7/hail/0.2.19/install/hail/hail/build/distributions/hail-python.zip/hail/methods/impex.py", line 513, in export_vcf
File "/share/pkg.7/hail/0.2.19/install/hail/hail/build/distributions/hail-python.zip/hail/backend/backend.py", line 108, in execute
File "/share/pkg.7/spark/2.4.3/install/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in  **call**
File "/share/pkg.7/hail/0.2.19/install/hail/hail/build/distributions/hail-python.zip/hail/utils/java.py", line 221, in deco
hail.utils.java.FatalError: SparkException: Job aborted due to stage failure: ResultStage 2 (runJob at SparkHadoopWriter.scala:78) has failed the maximum allowable number of times; most recent failure: Failure while fetching StreamChunkId{streamId=830947795015, chunkIndex=0}: java.nio.file.NoSuchFileException: /data03/hadoop/yarn/local/usercache/farrell/appcache/application_...
[... garbled Spark shuffle-fetch / io.netty stack trace; the full Java stack trace follows below ...]

Java stack trace:
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1067)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply$mcV$sp(PairRDDFunctions.scala:1013)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply(PairRDDFunctions.scala:1013)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply(PairRDDFunctions.scala:1013)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1012)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:970)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply(PairRDDFunctions.scala:968)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply(PairRDDFunctions.scala:968)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:968)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply$mcV$sp(RDD.scala:1517)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply(RDD.scala:1505)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply(RDD.scala:1505)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1505)
at is.hail.utils.richUtils.RichRDD$.writeTable$extension(RichRDD.scala:66)
at is.hail.io.vcf.ExportVCF$.apply(ExportVCF.scala:474)
at is.hail.expr.ir.MatrixVCFWriter.apply(MatrixWriter.scala:48)
at is.hail.expr.ir.WrappedMatrixWriter.apply(MatrixWriter.scala:24)
at is.hail.expr.ir.Interpret$.apply(Interpret.scala:743)
at is.hail.expr.ir.Interpret$.apply(Interpret.scala:91)
at is.hail.expr.ir.CompileAndEvaluate$$anonfun$1.apply(CompileAndEvaluate.scala:33)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:24)
at is.hail.expr.ir.CompileAndEvaluate$.apply(CompileAndEvaluate.scala:33)
at is.hail.backend.Backend$$anonfun$execute$1.apply(Backend.scala:86)
at is.hail.backend.Backend$$anonfun$execute$1.apply(Backend.scala:86)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:8)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:7)
at is.hail.utils.package$.using(package.scala:596)
at is.hail.annotations.Region$.scoped(Region.scala:18)
at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:7)
at is.hail.backend.Backend.execute(Backend.scala:86)
at is.hail.backend.Backend.executeJSON(Backend.scala:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:745)

org.apache.spark.SparkException: Job aborted due to stage failure: ResultStage 2 (runJob at SparkHadoopWriter.scala:78) has failed the maximum allowable number of times; most recent failure: Failure while fetching StreamChunkId{streamId=830947795015, chunkIndex=0}: java.nio.file.NoSuchFileException: /data03/hadoop/yarn/local/usercache/farrell/appcache/application_...
[... garbled Spark shuffle-fetch / io.netty stack trace ...]
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1493)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2107)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2114)
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1067)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply$mcV$sp(PairRDDFunctions.scala:1013)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply(PairRDDFunctions.scala:1013)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply(PairRDDFunctions.scala:1013)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1012)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:970)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply(PairRDDFunctions.scala:968)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply(PairRDDFunctions.scala:968)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:968)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply$mcV$sp(RDD.scala:1517)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply(RDD.scala:1505)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply(RDD.scala:1505)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1505)
at is.hail.utils.richUtils.RichRDD$.writeTable$extension(RichRDD.scala:66)
at is.hail.io.vcf.ExportVCF$.apply(ExportVCF.scala:474)
at is.hail.expr.ir.MatrixVCFWriter.apply(MatrixWriter.scala:48)
at is.hail.expr.ir.WrappedMatrixWriter.apply(MatrixWriter.scala:24)
at is.hail.expr.ir.Interpret$.apply(Interpret.scala:743)
at is.hail.expr.ir.Interpret$.apply(Interpret.scala:91)
at is.hail.expr.ir.CompileAndEvaluate$$anonfun$1.apply(CompileAndEvaluate.scala:33)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:24)
at is.hail.expr.ir.CompileAndEvaluate$.apply(CompileAndEvaluate.scala:33)
at is.hail.backend.Backend$$anonfun$execute$1.apply(Backend.scala:86)
at is.hail.backend.Backend$$anonfun$execute$1.apply(Backend.scala:86)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:8)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:7)
at is.hail.utils.package$.using(package.scala:596)
at is.hail.annotations.Region$.scoped(Region.scala:18)
at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:7)
at is.hail.backend.Backend.execute(Backend.scala:86)
at is.hail.backend.Backend.executeJSON(Backend.scala:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:745)

I’m sorry you’re encountering this. Before anything else, I encourage you to upgrade to a recent version of Hail (we’re up to 0.2.33 now!). Some googling suggests this can occur when mixing Spark versions; you might try setting the Spark property spark.shuffle.useOldFetchProtocol=true (see SPARK-29435 and SPARK-27665, and the sketch below). It also seems possible there is a file system issue:

java.nio.file.NoSuchFileException: /data03/hadoop/yarn/local/usercache/farrell/appcache/application_ion

Does that file system delete old files or perhaps have a quota?
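Regarding the shuffle-protocol suggestion above, here is a minimal sketch of one way to pass that property through Hail. The spark_conf argument to hl.init is an assumption about your Hail build; if it isn't available, the same property can be supplied to spark-submit via --conf.

import hail as hl

# Assumption: this Hail version's hl.init accepts a spark_conf dict of Spark properties.
hl.init(spark_conf={'spark.shuffle.useOldFetchProtocol': 'true'})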

One other thing I might try is writing and then reading a matrix table before exporting to VCF; a sketch follows. I generally get worried about export_vcf being slow and prefer to get the hard work (the shuffle) out of the way before I export.
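A minimal sketch of that pattern, reusing the variables from the script above; the scratch path is illustrative, not prescribed:

# Write the lifted-over dataset to Hail native format first; the shuffle caused by
# re-keying happens here rather than inside export_vcf.
tmp_path = "/project/ukbiobank/imp/uk.v3.GRCh38/uk.v3.r38.chr" + chr + ".mt"  # illustrative path
mt.write(tmp_path, overwrite=True)

# Re-read the sorted matrix table and export; the export is then a cheap linear scan.
mt = hl.read_matrix_table(tmp_path)
hl.export_vcf(mt, "/project/ukbiobank/imp/uk.v3.GRCh38/uk.v3.r38.chr" + chr + ".vcf.bgz")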

We upgraded to 0.2.33. The error message disappeared, but the job now seems to hang near the end: it has been stuck on the last 4 running tasks for a couple of days. This run includes writing to a matrix table before the export.

version 0.2.33-5d8cae649505
LOGGING: writing to /restricted/projectnb/ukbiobank/ad/analysis/liftover/hail-20200313-0100-0.2.33-5d8cae649505.log
2020-03-13 01:01:35 Hail: INFO: Number of BGEN files parsed: 1
2020-03-13 01:01:35 Hail: INFO: Number of samples in BGEN files: 487409
2020-03-13 01:01:35 Hail: INFO: Number of variants across all BGEN files: 1261158
----------------------------------------
Global fields:
    None
----------------------------------------
Column fields:
    's': str
----------------------------------------
Row fields:
    'locus': locus<GRCh37>
    'alleles': array<str>
    'rsid': str
    'varid': str
----------------------------------------
Entry fields:
    'GT': call
    'GP': array<float64>
    'dosage': float64
----------------------------------------
Column key: ['s']
Row key: ['locus', 'alleles']
----------------------------------------
None
----------------------------------------
Global fields:
    None
----------------------------------------
Column fields:
    's': str
----------------------------------------
Row fields:
    'locus': locus<GRCh37>
    'alleles': array<str>
    'rsid': str
    'varid': str
    'new_locus': locus<GRCh38>
    'old_locus': locus<GRCh37>
----------------------------------------
Entry fields:
    'GT': call
    'GP': array<float64>
    'dosage': float64
----------------------------------------
Column key: ['s']
Row key: ['locus', 'alleles']
----------------------------------------
None
----------------------------------------
Global fields:
    None
----------------------------------------
Column fields:
    's': str
----------------------------------------
Row fields:
    'locus': locus<GRCh38>
    'alleles': array<str>
    'rsid': str
    'varid': str
    'new_locus': locus<GRCh38>
    'old_locus': locus<GRCh37>
----------------------------------------
Entry fields:
    'GT': call
    'GP': array<float64>
    'dosage': float64
----------------------------------------
Column key: ['s']
Row key: ['locus', 'alleles']
----------------------------------------
None
[Stage 0:======================================================>(288 + 4) / 292]

We’ve had some issues with Spark recently, and turning on Spark speculation (to double-schedule slow tasks) can help; see the sketch below.

Also, if you’re writing and then exporting, make sure you do mt = hl.read_matrix_table(what_youve_just_written) before the export, which makes it much faster.
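A sketch of turning speculation on at initialization. As in the earlier sketch, the spark_conf argument to hl.init is an assumption; the same properties can be passed to spark-submit with --conf, and the quantile/multiplier values are optional tuning rather than recommendations.

import hail as hl

# Assumption: hl.init accepts spark_conf. spark.speculation re-launches straggler
# tasks on other executors; quantile/multiplier control when a task counts as slow.
hl.init(spark_conf={
    'spark.speculation': 'true',
    'spark.speculation.quantile': '0.90',
    'spark.speculation.multiplier': '3',
})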

The recent Hail version 0.2.46-6ef64c08b000 fixed this issue.

However, the Hail liftover does not handle the change in strand between GRCh37 and GRCh38. It only updates the locus, not the alleles. CrossMap, for example, updates the alleles… Shouldn’t updating the ref and alt alleles be part of the functionality of the Hail liftover?

John

Hi John,

We do not update the alleles. I think of Hail’s liftover as a function that gives you a 1:1 mapping from old locus to new locus, along with whether the strand flipped. It is not an all-inclusive harmonization method and does nothing with the alleles. You can use the FASTA reference functionality to check the reference allele at the new position and verify it is the same allele. However, it’s unclear what to do if it isn’t the same allele; I’d probably throw that variant out. If the strand flipped, you can write code to flip the alleles. I think the gnomAD team has experience with that.

The documentation for loading the FASTA file is here.
The documentation for getting the reference allele is here.

Best,
Jackie
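A minimal sketch of what that workflow might look like (not from the thread): it assumes include_strand=True on hl.liftover, a GRCh38 FASTA plus index registered with ReferenceGenome.add_sequence (the FASTA paths below are hypothetical), hl.if_else (hl.cond in older Hail versions), and it simply drops variants whose lifted reference allele no longer matches GRCh38 rather than attempting a full ref/alt swap or indel left-alignment.

import hail as hl

rg37 = hl.get_reference('GRCh37')
rg38 = hl.get_reference('GRCh38')
rg37.add_liftover('grch37_to_grch38.over.chain.gz', rg38)      # chain file as in the original script
rg38.add_sequence('Homo_sapiens_assembly38.fasta.gz',          # FASTA and index paths: hypothetical
                  'Homo_sapiens_assembly38.fasta.gz.fai')

# Lift over with strand information and drop variants that fail to lift.
mt = mt.annotate_rows(lifted=hl.liftover(mt.locus, 'GRCh38', include_strand=True),
                      old_locus=mt.locus)
mt = mt.filter_rows(hl.is_defined(mt.lifted))

# Reverse-complement the alleles when the mapping landed on the opposite strand.
mt = mt.annotate_rows(new_alleles=hl.if_else(
    mt.lifted.is_negative_strand,
    mt.alleles.map(lambda a: hl.reverse_complement(a)),
    mt.alleles))
mt = mt.key_rows_by(locus=mt.lifted.result, alleles=mt.new_alleles)

# Keep only variants whose (possibly flipped) ref allele matches the GRCh38 sequence;
# anything beyond that (ref/alt swaps, AF corrections) is out of scope for this sketch.
mt = mt.filter_rows(
    mt.alleles[0] == hl.get_sequence(mt.locus.contig, mt.locus.position,
                                     after=hl.len(mt.alleles[0]) - 1,
                                     reference_genome='GRCh38'))

This mirrors the spirit of the CrossMap excerpt later in the thread, which also re-fetches the reference base and fails variants where the new REF equals the ALT.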

I fully support Jackie’s reply on this. If you are seeing flagged alleles, much as in PLINK, you’d have to interrogate each allele to decide whether to keep it. If it is as simple as flipping it based on strandedness, that’s an easy fix; otherwise, throwing it out is safer (though you lose some data).
Alleles are usually never updated by liftover functions, only the locations, and whether they map onto the genome version you are lifting over to.

The UCSC binary liftOver was developed for BED files. The Hail liftover is lifting over VCF-style data, which includes ref and alt allele sequences. Tools such as GATK/Picard LiftoverVcf and CrossMap, which operate on VCFs, do check the ref and alt against the new reference, reverse-complement if necessary, and make other adjustments.

This tool adjusts the coordinates of variants within a VCF file to match a new reference. The output file will be sorted and indexed using the target reference build. To be clear, REFERENCE_SEQUENCE should be the target reference build (that is, the “new” one). The tool is based on the UCSC LiftOver tool (see http://genome.ucsc.edu/cgi-bin/hgLiftOver) and uses a UCSC chain file to guide its operation. For each variant, the tool will look for the target coordinate, reverse-complement and left-align the variant if needed, and, in the case that the reference and alternate alleles of a SNP have been swapped in the new genome build, it will adjust the SNP, and correct AF-like INFO fields and the relevant genotypes.

http://crossmap.sourceforge.net/

CrossMap detects if a VCF mapping was inverted and, if so, reverse-complements the alternative allele.

Here is the relevant code from CrossMap:

# update ref allele
target_chr = update_chromID(refFasta.references[0], target_chr)
fields[3] = refFasta.fetch(target_chr, target_start, target_end).upper()

# if the mapping is to the minus strand, reverse-complement the alt allele
if a[1][3] == '-':
    fields[4] = revcomp_DNA(fields[4], True)

# write the lifted variant, unless the new ref now equals the alt
if fields[3] != fields[4]:
    print('\t'.join(map(str, fields)), file=FILE_OUT)
else:
    print(line + "\tFail(REF==ALT)", file=UNMAP)
    fail += 1