Failed to build Hail on cloud (AWS)

I tried to build and install Hail on AWS and it didn't work…

This is my template:

echo 'Installing pre-reqs'
# Build prerequisites
yum install -y g++ cmake git
yum install -y lz4 lz4-devel
# Build Hail from source
git clone $HAIL_REPO
cd hail/hail && git fetch && git checkout main
./gradlew clean
make install HAIL_COMPILE_NATIVES=1
make install HAIL_COMPILE_NATIVES=1 SCALA_VERSION=2.11.12 SPARK_VERSION=2.4.5
cd build
# Stage the build artifacts in S3
pip download decorator==4.2.1
aws s3 cp decorator-4.2.1-py2.py3-none-any.whl s3://${RESOURCES_BUCKET}/artifacts/decorator.zip
aws s3 cp distributions/hail-python.zip s3://${RESOURCES_BUCKET}/artifacts/
aws s3 cp libs/hail-all-spark.jar s3://${RESOURCES_BUCKET}/artifacts/
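As a sketch, a quick guard like the following could be added before the `aws s3 cp` steps so the script fails loudly if the build didn't produce its outputs (the file names are taken from the copy commands above; the helper name and the directory argument are my own):

```shell
# check_artifacts: fail fast if the expected Hail build outputs are missing.
# File names come from the upload commands above; the build directory is
# passed as an argument (a hypothetical helper, not part of Hail's build).
check_artifacts() {
  local dir="$1"
  local f
  for f in distributions/hail-python.zip libs/hail-all-spark.jar; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing build artifact: $dir/$f" >&2
      return 1
    fi
  done
  echo "all build artifacts present"
}
```

Calling `check_artifacts .` from inside `build` before the uploads would have surfaced the failed jar build immediately instead of pushing partial artifacts to S3.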

Here is the error output:

> Configure project :
WARNING: Hail primarily tested with Spark 3.1.1, use other versions at your own risk.

> Task :compileJava NO-SOURCE
> Task :compileScala
> Task :compileScala FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':compileScala'.
> Failed to run Gradle Worker Daemon
   > Process 'Gradle Worker Daemon 2' finished with non-zero exit value 137

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 2m 52s
1 actionable task: 1 executed
make: *** [build/libs/hail-all-spark.jar] Error 1

I don't understand which part went wrong. Has anyone hit a similar issue on AWS?

Exit code 137 is usually an out-of-memory kill (128 + SIGKILL 9): the kernel's OOM killer terminated the Gradle worker JVM. Try increasing the memory available to the build, e.g. by using a larger instance.
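One way to keep the build inside the instance's memory is to cap the Gradle JVM heap and limit worker parallelism via `gradle.properties` before running the build. A minimal sketch, assuming the build honors the standard Gradle user home; the `4g` value is an assumption you should size to your instance type:

```shell
# Cap the Gradle build JVM heap and serialize workers so the Scala
# compile stays under the instance's available memory.
# 4g is an assumed value; tune it to your EMR instance size.
GRADLE_USER_HOME="${GRADLE_USER_HOME:-$HOME/.gradle}"
mkdir -p "$GRADLE_USER_HOME"
cat >> "$GRADLE_USER_HOME/gradle.properties" <<'EOF'
org.gradle.jvmargs=-Xmx4g
org.gradle.workers.max=1
EOF
```

Running this before `make install` in the bootstrap script applies the settings to the Hail build without editing the repository.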