It is currently unclear from the docs how to initialize Hail to maximize performance. In my case I am running Hail locally on a large machine with 1TB of memory and 96 cores. 1) Does Hail automatically use all the available memory? 2) Does Hail automatically use all available cores? What do I need to adjust to make Hail use all resources? Are there any particular configurations to pass to hail.init()?
Hail running on Apache Spark in local mode will use all available cores by default, but you can load the Spark Web UI (the URL is printed during initialization) and look at the Executors pane to verify that everything is being used. You can also explicitly request 96 local cores with hl.init(master='local[96]').
You should also make sure to set memory with the PYSPARK_SUBMIT_ARGS environment variable. In local mode all computation runs inside the driver JVM, so the driver memory setting is the one that matters, and it has to be set before Hail is initialized, since the JVM heap size cannot be changed after startup.
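Something like the following should work. This is a minimal sketch: the `--driver-memory` value here (900g, leaving some headroom out of your 1TB for the OS) is an assumption you should tune for your workload.

```python
import os

# Must be set before Hail is initialized: Hail starts the JVM once,
# and the heap size cannot be changed afterwards. The trailing
# 'pyspark-shell' token is required by PySpark.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 900g pyspark-shell"

import hail as hl

# Explicitly request all 96 local cores.
hl.init(master="local[96]")
```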
Spark sometimes has issues with resource contention when running with such a high degree of multithreading in a single JVM, so post here if anything looks funky as you start running computations.