Hail Py4JError while calling z:is.hail.backend.spark.SparkBackend.executeJSON

Thanks for the quick response!

  1. The Docker solution does not suit me, because container root privileges can reach beyond the container, and my system administrator would be against it.
    (See: How can we deny Docker developers root privileges from their containers? - General Discussions - Docker Community Forums)

I made some changes according to your edits…

  1. I commented out the lines you pointed to:

# export HAIL_HOME=…
# export SPARK_CLASSPATH=…

  2. And added a new line:

PYSPARK_SUBMIT_ARGS="--driver-memory 20G pyspark-shell"

  3. The system administrator installed gcc-5 for me:

which gcc-5

/usr/bin/gcc-5

  4. Then I downloaded gcc-5 into my hail conda environment and added it to $LD_LIBRARY_PATH:

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nfs/home/rskitchenko/anaconda2/envs/hail/bin/x86_64-unknown-linux-gnu-gcc-5.2.0
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nfs/home/rskitchenko/anaconda2/envs/hail/gcc/share/gcc-5.2.0
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nfs/home/rskitchenko/anaconda2/envs/hail/share/gcc-5.2.0

  5. I set my default gcc:

export CC=/usr/bin/gcc-5

But after sourcing .bashrc I still get the old gcc:

gcc -v

gcc version 4.8.5 (Ubuntu 4.8.5-2ubuntu1~14.04.1)

  6. Now my .bashrc contains:

export CC=/usr/bin/gcc-5
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nfs/home/rskitchenko/anaconda2/envs/hail/bin/x86_64-unknown-linux-gnu-gcc-5.2.0
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nfs/home/rskitchenko/anaconda2/envs/hail/gcc/share/gcc-5.2.0
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nfs/home/rskitchenko/anaconda2/envs/hail/share/gcc-5.2.0
export PYSPARK_SUBMIT_ARGS="--driver-memory 20G pyspark-shell"

I have already installed Hail with pip install hail, but it still picks up the old libstdc++.so.
What should I do next? How can I reinstall Hail or Anaconda so that they use the gcc-5 libstdc++.so?
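
(A quick sanity check, sketched in plain Python with only the standard library: print, from the interpreter that will launch Hail, the variables set in .bashrc and the gcc it resolves. If Hail is started from Jupyter or through a job scheduler, these can differ from what the login shell shows.)

import os
import shutil

# Environment as seen by this Python process; compare with what .bashrc exports.
for var in ("CC", "PATH", "LD_LIBRARY_PATH", "PYSPARK_SUBMIT_ARGS"):
    print(var, "=", os.environ.get(var))

# Which gcc binaries this process would actually pick up from PATH.
print("gcc:", shutil.which("gcc"))
print("gcc-5:", shutil.which("gcc-5"))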

Additional information:

ERROR: dlopen("/tmp/libhail8343362620706188710.so"): /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /tmp/libhail8343362620706188710.so)
FATAL: caught exception java.lang.UnsatisfiedLinkError: /tmp/libhail8343362620706188710.so: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /tmp/libhail8343362620706188710.so)
java.lang.UnsatisfiedLinkError: /tmp/libhail8343362620706188710.so: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /tmp/libhail8343362620706188710.so)

I checked it, but…:

rskitchenko@horse:~$ strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep CXXABI_1.3.8
CXXABI_1.3.8

All of your worker nodes also need the same version of the C and C++ standard libraries. Can you check that CXXABI_1.3.8 is present on every machine? This only applies if you have a cluster of machines. Are you working on a single machine?
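
(If a Spark session can be brought up, one way to run that check on every machine at once is a small Spark job. This is a rough sketch using plain PySpark rather than Hail, so the native library is not involved; it assumes the strings tool is installed on the workers, and the library path is the one from the error message.)

import socket
import subprocess
from pyspark import SparkContext

def cxxabi_check(_):
    # Runs on whichever worker executes this task.
    out = subprocess.check_output(
        "strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep CXXABI_1.3.8 || true",
        shell=True).decode().strip()
    return (socket.gethostname(), out)

sc = SparkContext.getOrCreate()
# One (hostname, grep result) pair per host that ran a task; use many partitions
# so that every executor is likely to be touched.
print(sorted(set(sc.parallelize(range(200), 200).map(cxxabi_check).collect())))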

I don’t think it is possible to get this message:

ERROR: dlopen("/tmp/libhail8343362620706188710.so"): /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /tmp/libhail8343362620706188710.so)
FATAL: caught exception java.lang.UnsatisfiedLinkError: /tmp/libhail8343362620706188710.so: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /tmp/libhail8343362620706188710.so)

while also getting a result from this:

strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep CXXABI_1.3.8

Is it possible the error message is coming from a different machine or the same machine in an environment with a different file system?

Are you working on a single machine?

I work on a cluster.

Is it possible the error message is coming from a different machine or the same machine in an environment with a different file system?

I don't know how to check that.
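
(One way to check this, as a minimal sketch: from inside the Python session that produces the error, print the hostname and look at which file the libstdc++ symlink really resolves to, then compare with the login shell where the grep succeeded.)

import os
import socket

lib = "/usr/lib/x86_64-linux-gnu/libstdc++.so.6"
print(socket.gethostname())        # which machine this session is really on
print(os.path.realpath(lib))       # the concrete file behind the symlink
print(os.stat(lib).st_size)        # differs if a different file/filesystem is mounted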

GOOD NEWS! I did it! I switched to a different machine in the cluster, reinstalled the dependencies, and now the problem is solved in python and in ipython. But in a Jupyter notebook it still doesn't work :scream:

python and ipython console (works):

jupyter notebook:

Py4JError: An error occurred while calling o1.backend

with the same error in the console:

ERROR: dlopen("/tmp/libhail8343362620706188710.so"): /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /tmp/libhail8343362620706188710.so)
FATAL: caught exception java.lang.UnsatisfiedLinkError: /tmp/libhail8343362620706188710.so: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /tmp/libhail8343362620706188710.so)

Do you have any ideas?

  1. jupyter and python are in PATH:

(hail) rskitchenko@sphinx:~/.conda/envs/hail/bin$ which jupyter
/nfs/home/rskitchenko/.conda/envs/hail/bin/jupyter
(hail) rskitchenko@sphinx:~/.conda/envs/hail/bin$ which python
/nfs/home/rskitchenko/.conda/envs/hail/bin/python

  2. The interpreters in the ipython console and the jupyter notebook are the same, which makes sense. I thought there was no difference between ipython and jupyter.

In [1]: import sys
In [2]: sys.executable
Out[2]: '/nfs/home/rskitchenko/.conda/envs/hail/bin/python'
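
(A matching sys.executable does not rule out a difference: the notebook is launched through a kernel spec, which can point at another interpreter or override environment variables. A sketch of how to inspect the specs, assuming jupyter_client is installed alongside Jupyter:)

from jupyter_client.kernelspec import KernelSpecManager

# Each kernel spec records the command used to start the kernel ("argv") and any
# environment overrides ("env"); a stale spec can silently pull in the wrong python.
for name, info in KernelSpecManager().get_all_specs().items():
    print(name, info["spec"]["argv"], info["spec"].get("env"))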

Can you attach the hail log file? It should appear in the same directory where you started Jupyter.
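
(If the default location is hard to track down, the log path can also be set explicitly; log= is a standard parameter of hl.init, and the path below is only an example.)

import hail as hl

# Write the Hail log to a fixed, easy-to-find path instead of the working
# directory of whatever process started the kernel.
hl.init(log='/tmp/hail-debug.log')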

It seems that's not possible :disappointed_relieved: (the forum won't let me attach the file).
However, I can send it somewhere else.

Just email it to hail@broadinstitute.org


Apologies. I’ve raised the limits for new users. You should be able to attach it now.

The log indicates that Spark is using local executors, not a cluster of executors. Is this your intention? I suspect that machine is the one that has an out of date version of the standard library.
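
(A quick way to confirm what Spark is actually using, as a minimal sketch that assumes a session is already up:)

import hail as hl

sc = hl.spark_context()
# 'local[*]' means all executors run inside this single machine;
# a real cluster master looks like 'yarn' or 'spark://host:7077'.
print(sc.master)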

I am having a similar issue. I installed Hail on a single machine running Hadoop 2.7.7.
Any installation suggestions would help.

FatalError: IllegalArgumentException: requirement failed


Java stack trace:
java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:212)
at is.hail.backend.spark.SparkBackend$.apply(SparkBackend.scala:192)
at is.hail.backend.spark.SparkBackend.apply(SparkBackend.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)

Hail version: 0.2.60-de1845e1c2f6
Error summary: IllegalArgumentException: requirement failed

What Python script are you running to trigger this?
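
(For reference, the smallest script that exercises this code path is just initializing Hail, since SparkBackend is constructed during hl.init(); a minimal sketch:)

import hail as hl

hl.init()                                # SparkBackend.apply runs here
print(hl.utils.range_table(10).count())  # trivial query to confirm the backend works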