Running hail locally - number of cores

It is currently unclear from the docs how to initialize Hail to maximize performance. In my case I am running Hail locally on a large machine with 1TB of memory and 96 cores.

1) Does Hail automatically use all the available memory?
2) Does Hail automatically use all available cores?

What do I need to adjust to make Hail use all resources? Are there any particular configurations to pass to hail.init()?

Thank you

Hail running on Apache Spark in local mode will use all available cores by default, but you can load the Spark Web UI (the URL is printed during initialization) and check the Executors pane to confirm everything is being used. You can explicitly request 96 local cores with hl.init(master='local[96]').

You should also make sure to set memory with the PYSPARK_SUBMIT_ARGS environment variable; Spark does not use all available memory by default.
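As a minimal sketch, the environment variable can be set from Python before Hail is imported. The 900g figure here is an assumption for a 1TB machine (leaving headroom for the OS), not a recommendation from the Hail docs:

```python
import os

# PYSPARK_SUBMIT_ARGS must be set before `import hail` (or pyspark),
# because the JVM is launched when the module is first imported.
# In Spark local mode the driver JVM does all the work, so it should
# get most of the machine's memory; 900g is an assumed figure for a
# 1TB box, leaving headroom for the OS.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--driver-memory 900g pyspark-shell'

# Afterwards, in the same process:
#
#   import hail as hl
#   hl.init(master='local[96]')
```

Setting the variable in the shell before launching Python (e.g. `export PYSPARK_SUBMIT_ARGS="--driver-memory 900g pyspark-shell"`) works equally well.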

Thanks for the reply!

In the “Environment” tab it also says this under “Resource Profiles”:

Executor Reqs:
	cores: [amount: 1]
Task Reqs:
	cpus: [amount: 1.0]

In the Executors pane, though, it shows all cores. Does that mean everything is OK and it is using all cores?

Should be, yes!

Spark sometimes has issues with resource contention when running with such a high degree of multithreading in a single Java process, so post here if anything else looks funky as you start running computations.