Hi folks,
I’ve been playing with the Hail platform on and off for a few months and I’m really excited about it.
I have a few questions; I'm posting them here since I found Zulip hard to use:
- Is there a standard back-end database system in use?
- If I update the local image frequently, does that wipe out my stored data?
- Is there a way to load JSON-formatted output from Nirvana instead of just VCFs?
- Do you have any info on the Dockerized version of Hail?
- I'd like to make my local Hail instance accessible through an API via URLs (can't think of a better way to describe it at this hour). Is this on the roadmap?
- Can you share a formula for estimating the data footprint of my test data?
- Can you suggest a high-throughput way of loading phenotype data from an external source piecemeal, i.e., adding a few VCFs and a few phenotype tables at a time, a pair per sample?
- What's the best way to mark variant entries as imputed vs. genotyped?
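To make the footprint question concrete, here's the back-of-envelope estimate I've been using myself — note the bytes-per-entry figure is my own guess for an uncompressed called genotype plus a couple of numeric fields, not a number from Hail's docs:

```python
# Rough estimate of matrix-size footprint:
#   variants x samples x bytes-per-entry
# bytes_per_entry=20 is an assumption, not a documented Hail constant.
def estimate_footprint_gb(n_variants, n_samples, bytes_per_entry=20):
    return n_variants * n_samples * bytes_per_entry / 1e9

# e.g. 10M variants x 1,000 samples at ~20 bytes/entry
print(estimate_footprint_gb(10_000_000, 1_000))  # -> 200.0 GB, before compression
```

Does that match how you'd size it, or is there a better rule of thumb that accounts for Hail's on-disk compression?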
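For context on the Nirvana question, this is the kind of flattening I'd expect to write myself if there's no native loader — turning the JSON into rows that could go out as a TSV and come back in as a Hail table. The field names here (`positions`, `chromosome`, `refAllele`, `altAlleles`) are my recollection of Nirvana's output layout and may not match the current format exactly:

```python
import json

# Hypothetical sketch: flatten one variant per alt allele from a
# Nirvana-style JSON annotation file. Field names are assumptions
# about the layout, not taken from the Nirvana spec.
sample_json = '''
{"positions": [
  {"chromosome": "chr1", "position": 12345,
   "refAllele": "A", "altAlleles": ["G"]}
]}
'''

rows = []
for pos in json.loads(sample_json)["positions"]:
    for alt in pos["altAlleles"]:
        rows.append((pos["chromosome"], pos["position"], pos["refAllele"], alt))

print(rows)  # [('chr1', 12345, 'A', 'G')]
```

Is something like this the intended route, or is there (or will there be) a direct JSON importer?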
Thank you for your patience!
Daniel