Hail 0.1 is live!

Hail is 0.1! What does that mean?

0.1 is Hail’s first stable release. “Stable release” can connote a lot of things, so here’s what we mean:

  • We will not be changing any interfaces in this version. Thanks to all of you for your patience as we broke your Hail scripts every week!
  • We will support this version by fixing bugs and answering questions about its interfaces until it is deprecated. Deprecation will probably happen 3 months after the next stable version (0.2) is released.
  • We will continue to add features (without changing existing interfaces!) to this version for now. This will probably stop in a month or so, as we shift to building features against changing development code for stable release in 0.2.
  • We will continue automatically deploying new builds to the gs://hail-common bucket. We may change how this is done in the next few days, though, so keep an eye on this forum!

Changes in 0.1

We’ve been saving up a pile of changes over the last month or so, so as not to break your scripts every few days. These changes all land at once in the new 0.1 build. Here they are:

KeyTables: your new best friend for annotation and filtering
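
For example, here is a minimal sketch of annotating and filtering a dataset with an imported table. The paths, the column name, and the annotation root are placeholders for illustration; check the KeyTable docs for the exact signatures.

```python
from hail import *

hc = HailContext()
vds = hc.read('gs://my-bucket/my-dataset.vds')  # placeholder path

# Import a TSV as a KeyTable and key it by a parsed variant column
# ('variant_str' is a placeholder column containing strings like '1:12345:A:T').
kt = (hc.import_table('gs://my-bucket/annotations.tsv', impute=True)
        .annotate('v = Variant(variant_str)')
        .key_by('v'))

# Annotate variants from the table, then keep only the variants it contains.
vds = vds.annotate_variants_table(kt, root='va.myAnnotations')
vds = vds.filter_variants_table(kt, keep=True)
```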

Pedigree as a first-class object

  • We’ve added first-class Python objects for Pedigree and Trio. Read fam files with the static read method.
  • The mendel_errors and tdt methods now take a pedigree as an argument, instead of a fam file.
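
A rough sketch of the new flow (the paths are placeholders, and the return shapes are only sketched; check the docs):

```python
from hail import *

hc = HailContext()
vds = hc.read('gs://my-bucket/trios.vds')  # placeholder path

# Read a pedigree from a PLINK .fam file with the static read method.
ped = Pedigree.read('gs://my-bucket/trios.fam')

# These methods now take the Pedigree object rather than a .fam path.
mendel = vds.mendel_errors(ped)  # now returns KeyTables (see below)
tdt_vds = vds.tdt(ped)           # see the docs for the exact output shape
```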

Regression

  • Removed counts from the LMM regression variant annotations; see the new schema in the docs.
  • Added dosage option to logreg.
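
A sketch of logistic regression on dosages; the use_dosages flag name below is illustrative, and the sample annotations are placeholders, so check the logreg docs for the exact argument names:

```python
# (continuing with the vds from the sketches above)
vds = vds.logreg(test='wald',
                 y='sa.pheno.isCase',        # placeholder phenotype annotation
                 covariates=['sa.cov.PC1'],  # placeholder covariate
                 use_dosages=True)           # illustrative name for the new dosage flag
```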

GRM

  • The grm method now returns a KinshipMatrix object, which has methods to export to all of the old PLINK GRM formats; grm no longer takes a format argument.
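
Roughly as follows; the export method names below are illustrative, based on the old format options, so check the KinshipMatrix docs for the exact names:

```python
# (continuing with the vds from the sketches above)
km = vds.grm()  # returns a KinshipMatrix instead of taking a format argument

# Illustrative export calls corresponding to the old 'rel' and 'gcta-grm'
# format options; consult the docs for the real method names.
km.export_rel('output/my.rel')
km.export_gcta_grm('output/my.grm')
```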

Dosage

  • We used the term “dosage” in a confusing (and incorrect) way. In the expression language, g.dosage() is now the expected number of alternate alleles given the genotype probabilities, and g.gp() is the linear-scaled genotype probabilities from which that dosage is computed. The regression interfaces have not changed, but using g.dosage() for a variant as a covariate will now behave correctly.
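
For example, a sketch of computing a per-variant mean dosage in the expression language (the annotation name is a placeholder, and stats() on a numeric aggregable is assumed here):

```python
# (continuing with the vds from the sketches above)
# g.dosage() is the expected number of alternate alleles, computed from the
# linear-scaled genotype probabilities g.gp().
vds = vds.annotate_variants_expr(
    'va.meanDosage = gs.map(g => g.dosage()).stats().mean')
```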

Count

  • We changed the count method. It no longer takes a genotypes boolean parameter or returns a dict: instead, it returns a tuple of (number of samples, number of variants).
  • We added the summarize method, which is an excellent way to get a broad sense of a dataset’s contents.
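
In code:

```python
# (continuing with the vds from the sketches above)
# count() now returns a (samples, variants) tuple instead of a dict.
n_samples, n_variants = vds.count()
print('samples: %d, variants: %d' % (n_samples, n_variants))

# summarize() gives a broad overview of the dataset's contents.
summary = vds.summarize()  # see the docs for its fields
```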

Filter variants with Python objects
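
A rough sketch of what this enables; the positions and variants are placeholders, so see filter_intervals and filter_variants_list in the docs for the exact signatures:

```python
from hail import *

# (continuing with the vds from the sketches above)
# Keep a genomic region using a Python Interval object.
vds_region = vds.filter_intervals(Interval.parse('2:100000000-200000000'))

# Keep an explicit list of Python Variant objects.
vds_hits = vds.filter_variants_list([Variant.parse('16:29500000:A:T')], keep=True)
```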

concordance and mendel_errors return key tables

  • concordance and mendel_errors now both return KeyTable objects. concordance used to return the global statistics as a 2d list, and two variant datasets. mendel_errors used to write four files. The docs explain these things clearly.
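
Roughly as follows; the unpacking order below is illustrative, so check the docs for the exact return values:

```python
# left_vds and right_vds are placeholder datasets; ped is the Pedigree from above.
# concordance: global stats plus per-sample and per-variant KeyTables.
global_concordance, samples_kt, variants_kt = left_vds.concordance(right_vds)

# mendel_errors: KeyTables in place of the four files it used to write.
all_errors, per_fam, per_sample, per_variant = vds.mendel_errors(ped)
```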

TakeBy

  • The takeBy aggregator now takes elements from smallest to largest (used to be largest to smallest, but this was not clearly documented).
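
For example, something like this now collects the smallest values first (the annotation name is a placeholder):

```python
# (continuing with the vds from the sketches above)
# Take the three smallest GQ values at each variant
# (takeBy now orders from smallest to largest).
vds = vds.annotate_variants_expr(
    'va.lowestGQs = gs.map(g => g.gq).takeBy(x => x, 3)')
```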

Method renames

  • HailContext.import_keytable => import_table
  • HailContext.read_keytable => read_table
  • VariantDataset.annotate_global_py => annotate_global
  • VariantDataset.from_keytable => from_table
  • VariantDataset.variants_keytable => variants_table
  • VariantDataset.samples_keytable => samples_table
  • VariantDataset.genotypes_keytable => genotypes_table
  • VariantDataset.filter_variants_intervals => filter_intervals (some functionality moved to filter_variants_table)
  • KeyTable.column_names => columns
  • KeyTable.num_rows => count
  • Plus: arguments have been renamed in many functions (“code” is replaced with “expr” in most places).

Removed functionality

  • aggregate_intervals: its functionality is now easy to reproduce with a few lines of KeyTable methods.
  • annotate_global_table: rarely used functionality, mostly replaced by KeyTable support. It is still possible to put tables into global annotations with KeyTable.collect() and VariantDataset.annotate_global (see the sketch after this list).
  • annotate_global_list: same as above.
  • IntervalTree: this object is fully obviated by the deep support for Interval-keyed KeyTables.
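
For completeness, a sketch of the collect-and-annotate pattern mentioned above; the paths are placeholders and TArray(kt.schema) is an illustrative guess at the right type argument, so see the annotate_global docs:

```python
# (continuing with the hc and vds from the sketches above; TArray comes from `from hail import *`)
kt = hc.import_table('gs://my-bucket/small-table.tsv', impute=True)

# Collect the table's rows locally and store them as a global annotation.
rows = kt.collect()
vds = vds.annotate_global('global.myTable', rows, TArray(kt.schema))
```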