Normalized Likelihood w/ Respect to Percentile Grid

We have been using Hail to run linear mixed models on several test datasets. (Specifically with this model: https://hail.is/docs/0.2/stats/hail.stats.LinearMixedModel.html#hail.stats.LinearMixedModel). One of the recommendations in the manual/tutorial (https://hail.is/docs/0.2/methods/stats.html?highlight=linear%20mixed%20model#hail.methods.linear_mixed_model) is that you:

"Sanity-check the normalized likelihood of h^2 over the percentile grid:

```python
import matplotlib.pyplot as plt
plt.plot(range(101), model.h_sq_normalized_lkhd())
```
"

Could you explain what examining heritability estimates with respect to the ‘percentile grid’ reflects, exactly? I’m not even confident I know what percentile or percentile grid is being referenced. :-/

Thanks!

Hail’s implementation of linear mixed models, while scalable, is hard to use and less efficient than implementations like BOLT-LMM and SAIGE. We’d actually recommend you use one of those, which will almost certainly provide a better experience. Most local users use Hail for everything up to the LMM step, then export a VCF or BGEN for those tools.

To answer your question, though: the "percentile grid" is simply the 101 values h^2 = 0.00, 0.01, ..., 1.00 (i.e., percentiles 0 through 100 of the unit interval, since heritability is bounded between 0 and 1). The docs suggest examining the normalized likelihood of h^2 over this grid because a relatively flat curve may be cause for alarm. We’d hope for a sharp peak of likelihood density at the estimated h^2, and little density elsewhere.
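A minimal sketch of what this sanity check is doing, without Hail itself: take log-likelihoods evaluated on the 101-point grid (which is roughly what `model.h_sq_normalized_lkhd()` returns in normalized form), normalize them into a distribution, and check how concentrated the mass is around the peak. The synthetic log-likelihood curve below is an assumption for illustration, not Hail output.

```python
import numpy as np

def normalized_likelihood(log_lkhd):
    """Turn log-likelihoods on the grid into a distribution summing to 1."""
    log_lkhd = np.asarray(log_lkhd, dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    p = np.exp(log_lkhd - log_lkhd.max())
    return p / p.sum()

# The "percentile grid": h^2 = 0.00, 0.01, ..., 1.00.
grid = np.linspace(0.0, 1.0, 101)

# Synthetic example: a likelihood sharply peaked near h^2 = 0.30.
log_lkhd = -0.5 * ((grid - 0.30) / 0.05) ** 2
p = normalized_likelihood(log_lkhd)

# The "good" case: the argmax agrees with the h^2 estimate, and most of
# the mass sits near it. A flat curve would spread mass almost evenly
# (~0.01 per grid point), which is the warning sign the docs describe.
i = int(p.argmax())
peak = grid[i]
mass_near_peak = p[max(i - 5, 0): i + 6].sum()
```

A flat curve usually means the data carry little information about h^2 (for example, too few samples or a poorly conditioned kinship matrix), so the point estimate shouldn't be trusted on its own.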


To clarify what you mean by ‘a better experience’: is there anything wrong with using the linear mixed model in Hail?

We’re strongly considering deprecating it and removing it from future versions. It shouldn’t give you incorrect results, but it’s not easy to use, and it’s very hard for our team to support since the person who implemented LMMs in Hail is no longer working on the project.