Transposing large dataset in Hail

How can I transpose my “large” dataset (0.73 million records & 5K features) in Hail?

I can transpose columns to rows in a Spark DataFrame for “small” datasets. For example:

X = sc.parallelize([(1,2,3), (4,5,6), (7,8,9)]).toDF()
X.show()
+---+---+---+
| _1| _2| _3|
+---+---+---+
|  1|  2|  3|
|  4|  5|  6|
|  7|  8|  9|
+---+---+---+

transpose(X).show()
+---+---+---+
| _1| _2| _3|
+---+---+---+
|  1|  4|  7|
|  2|  5|  8|
|  3|  6|  9|
+---+---+---+
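Spark has no built-in DataFrame transpose, so the `transpose` call above must be a user-defined helper. A minimal sketch of its core, assuming the rows fit in driver memory (the name `transpose_rows` is hypothetical):

```python
def transpose_rows(rows):
    # zip(*rows) pairs up the i-th element of every row,
    # turning each original column into a new row.
    return [tuple(col) for col in zip(*rows)]

rows = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
transpose_rows(rows)  # [(1, 4, 7), (2, 5, 8), (3, 6, 9)]
```

In PySpark this could be applied as `sc.parallelize(transpose_rows(X.collect())).toDF()` — workable only while `X.collect()` fits on the driver, which is exactly why it breaks down for the large case below.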

There’s no random forest functionality currently in Hail, so is there a reason you can’t do this in PySpark? You can easily convert a Hail KeyTable to a Spark DataFrame with to_dataframe().

I do suspect that a Spark DataFrame with 700K columns is going to fall on its face immediately, though.

Yes, I’ve already done that:

x = table.to_dataframe()

But I haven’t been able to transpose it so far.
I have to transpose my data to apply ranking and feature selection to the SNPs, and in that case I’ll have 700K+ SNPs.
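One alternative worth considering: ranking SNPs doesn’t strictly require materializing the transposed wide matrix. If the genotypes are kept in long format (one record per sample/SNP pair), per-SNP statistics can be aggregated directly; in Spark this would be a groupBy on the SNP column. A minimal pure-Python sketch of the idea — the record layout and the variance-based ranking criterion here are illustrative assumptions, not the author’s actual pipeline:

```python
from collections import defaultdict
from statistics import pvariance

# Hypothetical long-format records: (sample_id, snp_id, genotype).
records = [
    ("s1", "rs1", 0), ("s2", "rs1", 1), ("s3", "rs1", 2),
    ("s1", "rs2", 1), ("s2", "rs2", 1), ("s3", "rs2", 1),
]

# Group genotype values by SNP without ever building a
# 700K-column wide table.
by_snp = defaultdict(list)
for sample, snp, value in records:
    by_snp[snp].append(value)

# Rank SNPs by population variance, highest first.
ranked = sorted(by_snp, key=lambda s: pvariance(by_snp[s]), reverse=True)
# rs1 varies (0, 1, 2) while rs2 is constant, so rs1 ranks first.
```

The same shape of computation scales out in Spark as a grouped aggregation, which distributed engines handle far better than a schema with hundreds of thousands of columns.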