Reading a VDS from a Google bucket fires an error


I am running the following command:
In [4]: hc.read('gs://data/gnomad.exomes.r2.0.1.sites.autosomes.vds').count()
and getting FatalError: IOException: No FileSystem for scheme: gs
Any idea why there is an issue? The error details are below.

In [5]: hc.read('gs://data_gnomad_orielresearch/gnomad.exomes.r2.0.1.sites.autosomes.vds').count()

FatalError Traceback (most recent call last)
in ()
----> 1 hc.read('gs://data_gnomad_orielresearch/gnomad.exomes.r2.0.1.sites.autosomes.vds').count()

in read(self, path, sites_only, samples_only)

/home/eila/hail/python/hail/java.pyc in handle_py4j(func, *args, **kwargs)
111 raise FatalError('%s\n\nJava stack trace:\n%s\n'
112 'Hail version: %s\n'
--> 113 'Error summary: %s' % (deepest, full, Env.hc().version, deepest))
114 except py4j.protocol.Py4JError as e:
115 if e.args[0].startswith('An error occurred while calling'):

FatalError: IOException: No FileSystem for scheme: gs

Java stack trace: No FileSystem for scheme: gs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(
at org.apache.hadoop.fs.FileSystem.createFileSystem(
at org.apache.hadoop.fs.FileSystem.access$200(
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
at org.apache.hadoop.fs.FileSystem$Cache.get(
at org.apache.hadoop.fs.FileSystem.get(
at org.apache.hadoop.fs.Path.getFileSystem(
at is.hail.utils.richUtils.RichHadoopConfiguration$.fileSystem$extension(RichHadoopConfiguration.scala:17)
at is.hail.utils.richUtils.RichHadoopConfiguration$$anonfun$exists$extension$1.apply(RichHadoopConfiguratio
at is.hail.utils.richUtils.RichHadoopConfiguration$$anonfun$exists$extension$1.apply(RichHadoopConfiguratio
at scala.collection.IndexedSeqOptimized$class.prefixLengthImpl(IndexedSeqOptimized.scala:38)
at scala.collection.IndexedSeqOptimized$class.forall(IndexedSeqOptimized.scala:43)
at scala.collection.mutable.WrappedArray.forall(WrappedArray.scala:35)
at is.hail.utils.richUtils.RichHadoopConfiguration$.exists$extension(RichHadoopConfiguration.scala:51)
at is.hail.variant.VariantDataset$.readMetadata(VariantDataset.scala:103)
at is.hail.HailContext.readMetadata(HailContext.scala:394)
at is.hail.HailContext$$anonfun$readAllMetadata$1.apply(HailContext.scala:396)
at is.hail.HailContext$$anonfun$readAllMetadata$1.apply(HailContext.scala:396)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$
at is.hail.HailContext.readAllMetadata(HailContext.scala:396)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at py4j.reflection.MethodInvoker.invoke(
at py4j.reflection.ReflectionEngine.invoke(
at py4j.Gateway.invoke(
at py4j.commands.AbstractCommand.invokeMethod(
at py4j.commands.CallCommand.execute(
Hail version: devel-a0e653f
Error summary: IOException: No FileSystem for scheme: gs

It looks like you’re not running on Google Dataproc. You need to be using Dataproc to have access to files in Google buckets. Here’s a helpful guide Laurent put together for this:

My guess is either you’re running locally (a laptop, for example) or running directly on a cloud VM without using the dataproc cluster.

I am running on the master machine over ssh: eila@cluster-2-m:~/hail$
The data is a copy of the gnomAD data.
What am I missing? …

Running on the master machine through ssh is just like running on your laptop: it uses Spark local mode, which means you’re not only unable to see Google buckets, but you’re also not using any of the other machines in the cluster.

Instead, use:

gcloud dataproc jobs submit pyspark

There is an example of this in the post I linked above.
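The submit workflow can be sketched roughly like this (a hedged sketch: the jar/zip names, build id, and cluster name are copied from this thread and may be out of date, and myscript.py is a placeholder; the sketch only assembles and prints the command rather than running it, since gcloud may not be configured):

```shell
# Hypothetical build id for illustration; check gs://hail-common/ for the
# currently published jars and zips.
HASH=E4880e9
JAR="hail-hail-is-master-all-spark2.0.2-${HASH}.jar"
ZIP="pyhail-hail-is-master-${HASH}.zip"

# Assemble the submit command; both the jar (for the JVM classpath) and the
# zip (for the Python side) must be shipped to the cluster.
CMD="gcloud dataproc jobs submit pyspark myscript.py \
  --cluster=cluster-2 \
  --files=gs://hail-common/${JAR} \
  --py-files=gs://hail-common/${ZIP} \
  --properties=spark.driver.extraClassPath=./${JAR},spark.executor.extraClassPath=./${JAR}"
echo "$CMD"
```

Note that --py-files must point at the full zip name, not just the bucket, which is what goes wrong in the submission further down this thread.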

To clarify, you’ll need to download the Google Cloud SDK components to your computer to make this easier (if you haven’t done that already and were starting the cluster from the UI). Then you can write a Python script that uses Hail and submit it to the cluster with gcloud dataproc jobs submit pyspark, using the format in the post above.

Thank you for the clarification. I hope that I am getting closer to having it running…
Looking forward to the moment I can make it work with all the amazing gnomAD data.
Should I copy the hail-common files to my bucket and point to them? See the error message below.

My steps were the following:

  1. Installed gcloud on my Mac and connected to the project.

  2. Created the following script:
    from hail import *
    hc = HailContext()
    vds = hc.read('gs://data_gnomad_orielresearch/gnomad.exomes.r2.0.1.sites.autosomes.vds').count()

  3. Called gcloud dataproc jobs submit pyspark:
    wm8af-056:scripts landkof$ gcloud dataproc jobs submit pyspark --cluster=cluster-2 --files=gs://hail-common/hail-hail-is-master-all-spark2.0.2-E4880e9.jar --py-files=gs://hail-common/ --properties="spark.driver.extraClassPath=./hail-hail-is-master-all-spark2.0.2-E4880e9.jar,spark.executor.extraClassPath=./hail-hail-is-master-all-spark2.0.2-E4880e9.jar"
    Copying file:// [Content-Type=text/x-python]…
    \ [1 files][ 166.0 B/ 166.0 B]
    Operation completed over 1 objects/166.0 B.
    Job [36170c29-6b31-4ba7-a90b-c1322853c8d7] submitted.

  4. Got the following return value. The error message is related to the gs://hail-common files:

Waiting for job output…
=========== Cloud Dataproc Agent Error =========== File not found : gs://hail-common/
at org.apache.hadoop.fs.FileUtil.copy(
at org.apache.hadoop.fs.FileUtil.copy(
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(
at java.util.concurrent.Executors$
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(
at java.util.concurrent.ScheduledThreadPoolExecutor$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
======== End of Cloud Dataproc Agent Error ========
ERROR: ( Job [36170c29-6b31-4ba7-a90b-c1322853c8d7] entered state [ERROR] while waiting for [DONE].

Solved this by listing the files in gs://hail-common and using the latest id that was published:
gsutil ls -l gs://hail-common/pyhail-hail-is-master-*.zip
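Picking the newest zip out of that listing can be done mechanically, since gsutil ls -l prints size, an ISO timestamp, then the URL per line and ISO timestamps sort lexically. A sketch (the listing below is fabricated sample data, not real bucket contents):

```shell
# Fabricated sample of `gsutil ls -l` output for illustration.
listing='    1234  2017-05-01T10:00:00Z  gs://hail-common/pyhail-hail-is-master-aaa111.zip
    5678  2017-06-01T10:00:00Z  gs://hail-common/pyhail-hail-is-master-bbb222.zip'

# Sort by the timestamp field and keep the URL of the newest entry.
latest=$(printf '%s\n' "$listing" | sort -k2 | tail -n 1 | awk '{print $3}')
echo "$latest"
```

In practice you would pipe the real gsutil ls -l output through the same sort/tail/awk chain.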

Awesome! Is everything working?

It is definitely running. The trial version is very limited on CPU, so the execution time is slow. I will keep you updated.
Thank you for your help.
I hope to be able to contribute to the project soon.
Thanks again!