Unable to export genotypes

Ah! We often don’t think of HPC clusters as “clusters” in the Spark sense because each submitted job is usually a single process. Are you submitting a single job to the HPC cluster which invokes Python/Hail?

In that case, you must set the environment variables described here: How do I increase the memory or RAM available to the JVM when I start Hail through Python? - #2 by danking . For example, to give the JVM 8 GiB of RAM:

export PYSPARK_SUBMIT_ARGS="--driver-memory 8g --executor-memory 8g pyspark-shell"

If you don’t set these environment variables, Spark defaults to a very small amount of memory, regardless of what you pass to hl.init.
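If setting the variable in your shell or job script is inconvenient, you can also set it from Python itself, as long as you do so before Hail (and therefore Spark) is imported. A minimal sketch (the memory sizes here are just examples):

```python
import os

# Spark reads PYSPARK_SUBMIT_ARGS once, when the JVM starts, so this
# must run before `import hail` anywhere in the process.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--driver-memory 8g --executor-memory 8g pyspark-shell"
)

# Only now import and initialize Hail:
# import hail as hl
# hl.init()
```

If Hail has already been imported (e.g. earlier in a notebook session), setting the variable has no effect; restart the Python process first.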