That’s what I thought, too. Just out of curiosity, what is the Spark partition size required for Hail? Is it 1 GB (1073741824 bytes)? The parquet.block.size parameter in the Hail tutorial is set to 1 TB. I’m wondering whether, when I was trying to write the VDS out, Hail somehow pre-calculated the needed HDFS storage as #partitions * block size * replication factor, which would explain the "not enough space" error. Could that be the case?
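
For concreteness, here is a rough back-of-the-envelope sketch of the calculation I have in mind. The partition count and replication factor below are placeholder values, not numbers taken from my actual job; only the 1 TB block size comes from the tutorial config:

```python
# Hypothetical illustration of the pre-allocation I am guessing at;
# the specific numbers are made up, not from my cluster.
num_partitions = 100                # e.g. number of Spark partitions in the VDS (placeholder)
parquet_block_size = 1 * 1024**4    # 1 TB, as set in the Hail tutorial config
replication_factor = 3              # typical HDFS default (placeholder)

estimated_bytes = num_partitions * parquet_block_size * replication_factor
print(f"Estimated HDFS space if reserved per block: {estimated_bytes / 1024**4:.0f} TB")
# With a 1 TB block size this balloons to hundreds of TB, which would
# exceed the free space on most clusters and could explain a
# "not enough space" error if the storage really were reserved up front.
```

If Hail (or HDFS) doesn’t actually reserve space that way, then my hypothesis is wrong and the error must be coming from somewhere else.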