I’m also getting this same git install error when I try to compile. The pre-built version works for me but I’m looking to run Hail on my own spark cluster and the Getting Started guide states:
“For all other Spark clusters, you will need to build Hail from the source code.”
So can I use the pre-built package on my own Spark cluster? Sorry, I’m just a little confused; please explain…
Thanks…
That sentence is misleading! You can absolutely use the pre-built distribution with your own cluster, as long as the Spark versions match. Just follow the directions in the guide below that line:
“You can then open an IPython shell which can run Hail backed by the cluster with the ipython command.”
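For concreteness, the setup before launching ipython usually looks roughly like this. All paths below are placeholders, and the exact environment variable names depend on your Hail and Spark versions, so treat this as a sketch and check the Getting Started guide for the variables your version expects:

```shell
# Placeholder paths -- point these at your actual Spark install
# and at the directory where you unpacked the Hail distribution.
export SPARK_HOME=/path/to/spark
export HAIL_HOME=/path/to/hail

# Make the Hail and PySpark Python packages importable.
# (The exact zip/jar names vary by release -- check your distribution.)
export PYTHONPATH="$HAIL_HOME/python:$SPARK_HOME/python:$PYTHONPATH"

# Then open the shell; Hail will run backed by the cluster
# that Spark is configured to talk to.
ipython
```

The key point is that the pre-built package is just the compiled jar plus the Python module; nothing about it is tied to local mode, so as long as the jar was built against the same Spark version your cluster runs, it works the same way.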