Thursday, November 13, 2014

Bayesian diversification models (2)

Concerning my last post, I realize that my notation for rejection sampling models is a bit elliptical. In particular, the rejection step corresponding to the fact that we condition on the observed data is never explicitly written. So, let me reformulate the technical arguments and restate the main idea.

In the general case, i.e. for any Bayesian model, the rejection sampling algorithm could be written as follows:

REPEAT
   sample $\theta$ from prior
   simulate data given $\theta$
UNTIL simulated data match observed data $D$
return $\theta$

What this algorithm returns is one single value of $\theta$ sampled from the posterior distribution:

$p(\theta \mid D) \propto p(\theta) p(D \mid \theta)$.

Of course, you can imagine that one would typically run this algorithm many times and then use the resulting sample of independent draws from the posterior to estimate posterior mean, median, credibility intervals, etc.
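To make this concrete, here is a minimal Python sketch of the REPEAT-UNTIL scheme above, on a made-up two-point model (the parameter values, prior weights and data are arbitrary, chosen only for illustration): the accepted draws should be distributed according to $p(\theta) p(D \mid \theta)$, renormalized.

```python
import random
from math import comb

random.seed(1)

# Hypothetical toy model: theta is a success probability with a two-point
# prior; the data D are the number of successes in n Bernoulli trials.
thetas = [0.3, 0.7]
prior = {0.3: 0.5, 0.7: 0.5}
n, k_obs = 5, 3

def one_posterior_draw():
    # REPEAT: sample theta from the prior, simulate data given theta,
    # UNTIL the simulated data match the observed data.
    while True:
        theta = random.choices(thetas, weights=[prior[t] for t in thetas])[0]
        k = sum(random.random() < theta for _ in range(n))
        if k == k_obs:
            return theta

draws = [one_posterior_draw() for _ in range(20000)]
freq = draws.count(0.7) / len(draws)

# Analytic posterior p(theta | D) proportional to p(theta) p(D | theta)
w = {t: prior[t] * comb(n, k_obs) * t**k_obs * (1 - t)**(n - k_obs) for t in thetas}
exact = w[0.7] / sum(w.values())
print(round(freq, 3), round(exact, 3))
```

With 20,000 accepted draws, the empirical frequency of $\theta = 0.7$ agrees with the exact posterior probability (here 0.70) up to Monte Carlo error.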

Note that I use upper case letters for the REPEAT-UNTIL loop corresponding to the conditioning on empirical data -- to distinguish it from other repeat-until loops that might also be invoked, just because of the data acquisition protocol.

Now, in the case of a diversification model, we have to account for the fact that we observe only surviving species clades. In addition, we imagine that there is some natural variation among clades in diversification rates, represented by a distribution $\phi$ on the diversification parameter $\theta = (\lambda, \mu)$, such that each clade produced by Nature has its speciation and extinction rates independently sampled from $\phi$.

Thus, altogether, our complete rejection sampling algorithm would look like:

REPEAT
   repeat
      Nature chooses $\theta$ from $\phi$
      Nature runs diversification process given $\theta$
   until process survives
UNTIL simulated phylogeny matches observed phylogeny $T$
return $\theta$

We now have two repeat-until loops: the inner one represents the fact that, by construction, we only consider non-extinct groups. The outer one represents our conditioning on the observed phylogeny $T$. However, this could just as well be written as:

REPEAT
   Nature chooses $\theta$ from $\phi$
   Nature runs diversification process given $\theta$
UNTIL simulated phylogeny matches observed phylogeny
return $\theta$

simply because an empty tree will never match our observed phylogeny anyway. Thus, we end up with a value of $\theta$ sampled from a very simple posterior distribution, one that does not invoke any conditional likelihood (this was my equation 1 in the previous post):

(1)    $p(\theta \mid T) \propto \phi(\theta) \, p(T \mid \theta) $

This posterior distribution combines a prior not conditional on survival $\phi(\theta)$, with a likelihood also not conditional on survival $p(T \mid \theta)$.
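The equivalence between the one-loop and two-loop algorithms can be checked numerically on a toy discrete case (all numbers below are arbitrary, subject only to $p(T \mid \theta) \le p(S \mid \theta)$, since producing the non-empty tree $T$ implies survival): both algorithms accept $\theta$ with the same distribution.

```python
# Arbitrary toy numbers for a two-point theta: natural distribution phi,
# survival probability p(S | theta), and probability p(T | theta) of
# producing the observed non-empty tree T.
phi = {0: 0.8, 1: 0.2}
p_S = {0: 0.10, 1: 0.60}
p_T = {0: 0.02, 1: 0.15}

# One-loop algorithm: accepted theta is distributed as phi(theta) p(T | theta).
w1 = {t: phi[t] * p_T[t] for t in phi}
one_loop = {t: w1[t] / sum(w1.values()) for t in w1}

# Two-loop algorithm: the inner loop delivers theta with probability
# proportional to phi(theta) p(S | theta); the outer loop then accepts with
# probability p(T | theta, S) = p(T | theta) / p(S | theta).
w2 = {t: phi[t] * p_S[t] * (p_T[t] / p_S[t]) for t in phi}
two_loop = {t: w2[t] / sum(w2.values()) for t in w2}

print(one_loop)
print(two_loop)
```

The survival probability cancels out exactly, which is just the algebraic content of the argument above.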

Now, you can always rewrite this as:

$p(\theta \mid T) \propto \phi(\theta) p(S \mid \theta) \, p(T \mid \theta) / p(S \mid \theta)$

or, equivalently:

(2)   $p(\theta \mid T) \propto p(\theta \mid S) \, p(T \mid \theta, S)$

where $p(S \mid \theta)$ is the probability of survival of the clade. Thus, we now have a prior conditional on survival together with a likelihood also conditional on survival. Equations 1 and 2 are equivalent, and you can choose between them, fundamentally, based on which prior information you have (fossils: equation 1, other neontological studies: equation 2).

But in any case, you never get, under this algorithmic model, an unconditional prior combined with a conditional likelihood. The only way to get this combination is by considering another model:

REPEAT
   choose $\theta$ from some prior $p(\theta)$
   repeat
      Nature runs diversification process given $\theta$
   until process survives
UNTIL simulated phylogeny matches observed phylogeny
return $\theta$

which basically corresponds to the case where Nature has produced many clades, all under the exact same diversification rates (same parameter vector $\theta$), and we have randomly sampled one among the surviving clades thus produced. Here, we still draw $\theta$ from some prior distribution $p(\theta)$, but this distribution now merely represents our prior uncertainty about the fixed value of $\theta$ 'chosen by Nature' -- not the natural variation of $\theta$ among groups. So, here, we cannot collapse the two repeat-until loops; we have to keep both: an internal loop corresponding to the actual 'repeated natural experiment', under fixed but unknown $\theta$, and an external loop simply representing our Bayesian inference about this unknown value of $\theta$, by rejection sampling, based on some subjective prior $p(\theta)$.

Personally, I consider this second model less convincing than the first one: we do see in practice quite some variation among closely related groups, so, we should invoke some distribution $\phi$ representing this variation.

Of course, ultimately, neither of the two models is really convincing: as soon as you acknowledge the existence of variation in diversification rates between clades, then you should also expect variation within clades: if diversification rates differ between rodents and primates, then they almost certainly vary among primates, or among rodents. Mathematically, $\theta$ should itself be the result of a stochastic process along the lineages. In fact, some interesting work has already been done along those lines, using a compound Poisson process (Rabosky, 2014).


Rabosky, D. L. (2014). Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees. PLoS One, 9(2), e89543. doi:10.1371/journal.pone.0089543

Monday, November 10, 2014

Bayesian priors for diversification studies

In my last post, I was asking whether one should condition the likelihood on survival in the context of diversification studies. The whole discussion, however, was purely at the level of the likelihood. Now, how should we look at this problem from a Bayesian perspective? 

Assuming that $\theta = (\lambda, \mu)$ is our vector representing the speciation and extinction rates, I was suggesting the following algorithmic model:

Nature has fixed distribution $\phi(\theta)$
repeat
   Nature chooses $\theta$ from $\phi$
   Nature runs diversification process given $\theta$
until process survives

where $\phi$ is the natural variation in diversification rate among clades (both extant and extinct). Our observed phylogeny can then be seen as one particular draw from this simulation model. Conditioning on the observed phylogeny $T$ leads to the following posterior distribution:

(1)    $p(\theta \mid T) \propto \phi(\theta) \, p(T \mid \theta) $

involving the likelihood unconditional on ultimate survival. I was then considering two limiting cases of this equation in the context of a maximum a posteriori approach.

Now, simply identifying $\phi$ with our prior and sampling from the posterior distribution defined by equation 1 would represent a meaningful Bayesian formalization of the problem. However, this assumes that we can see our prior as a reasonable estimate of the true natural variation in diversification rates $\phi$ -- and not just as a representation of some loosely defined prior beliefs about possible values of $\theta$. So, in particular, this has to be an informative prior.

Do we have practical ways to do this? Yes, I think so. In fact, I can see two different approaches to this problem.

Fossil data  Suppose we are able to come up with an acceptable estimate of the distribution of diversification rates from the fossil record. Since $\phi$ in equation 1 is to be interpreted as the distribution of diversification rates across all clades, both extant and extinct, we can directly identify $\phi$ with our empirical estimate $\hat \phi$ obtained from fossils, which leads us to the following posterior distribution:

(2)    $p(\theta \mid T) \propto \hat \phi(\theta) \, p(T \mid \theta)$.

Neontological data  Alternatively, suppose that we already have estimates of $\theta$ across several other clades (e.g. several mammalian orders), obtained by fitting or conditioning diversification models on their respective phylogenies. We would like to use this information to derive the prior for our clade of interest (say, Carnivores). Our prior information, however, is now itself conditional on survival, so we cannot simply calculate the mean and the variance of the speciation and extinction rates across other clades and use them as a proxy for $\phi$, since $\phi$ is not conditional on survival.

On the other hand, we can re-write equation 1 as follows:

$p(\theta \mid T) \propto \phi(\theta) \, p(T \mid \theta) = \phi(\theta) \, p(S \mid \theta) \, p(T \mid \theta) / p(S \mid \theta) \propto p(\theta \mid S) \, p(T \mid \theta, S)$,

where $p(S \mid \theta)$ is, as in the previous post, the probability of ultimate survival of the clade given $\theta$. So, here, we have two factors: a conditional prior:

$p(\theta \mid S) \propto \phi(\theta) p(S \mid \theta)$,

which can be represented by the following sampling model:

repeat
   choose $\theta$ from $\phi$
   run a diversification process under $\theta$
until process survives
return $\theta$

clearly suggesting that this is the prior distribution of values of $\theta$ across surviving groups. And a likelihood now conditional on survival:

$p(T \mid \theta, S) = p(T \mid \theta) / p(S \mid \theta)$,

which we have already seen. What all this means is that, instead of working with $\phi$ and with the unconditional likelihood, we can equivalently work with a prior conditioned on survival, but then we should combine it with the conditional likelihood. In practice, we can for instance calculate the mean and the variance of speciation and extinction rates estimated on other mammalian orders and, using a moment-matching argument, derive an empirical prior $\hat \psi(\theta)$, now conditional on survival. The posterior is then:

(3)    $p(\theta \mid T) \propto \hat \psi(\theta) \, p(T \mid \theta, S)$.

Note that this idea of combining the conditional likelihood with a conditional prior is similar to what is proposed in Stadler et al (2013).
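As a sketch of the moment-matching step described above (the per-clade speciation rate estimates below are placeholders, not real values), one could match, say, a gamma distribution to the empirical mean and variance:

```python
# Hypothetical speciation rate estimates from other clades (made-up numbers).
lam_hat = [0.18, 0.25, 0.12, 0.30, 0.22]

m = sum(lam_hat) / len(lam_hat)
v = sum((x - m) ** 2 for x in lam_hat) / (len(lam_hat) - 1)

# Moment matching for a gamma prior: mean = shape * scale,
# variance = shape * scale^2.
shape = m * m / v
scale = v / m
print(shape, scale)
```

The same could be done for $\mu$, or jointly for $(\lambda, \mu)$ with a bivariate distribution; the resulting $\hat \psi$ is conditional on survival by construction, since the clade-level estimates all come from surviving groups.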

Thus, it all depends on what prior information you have. If your prior information is obtained from fossil evidence, then it should be considered as an unconditional prior, to be used in conjunction with an unconditional likelihood (equation 2). If, on the other hand, your prior information has been gathered from other neontological studies, then this prior information is conditional on survival, and as such it should be combined with a conditional likelihood (equation 3).

No prior information  Now, what if we do not want to use fossil evidence, nor any information gained from other clades? We would like to say that we do not know anything about $\lambda$ and $\mu$ and, accordingly, use a non-informative prior. However, which likelihood should we combine with this uninformative prior? Conditional or unconditional?

This is a bit tricky. First, there would be much to say about uninformative priors in general, before dealing with the question of how to use them in the present case. Second, technically, the only Bayesian prior that we can meaningfully introduce here should express our uncertainty about $\phi$. So, we would end up with two levels in our model: $\theta$ sampled from $\phi$, itself sampled from some hyperprior. We could explicitly derive something along those lines, but this would be uselessly complicated for the present matter. Instead, we can pretend that we implicitly marginalize our complicated two-level model over the hyperprior, ending up with an effective prior directly on $\theta$.

In the end, I think it is just a question of deciding whether we want to express lack of information about $\theta$ among all clades or only among surviving clades. In the first case, we would combine our uninformative prior with the unconditional likelihood; in the second case, with the conditional likelihood*. Personally, I think I would prefer to express lack of information among all clades, for it is not true that I don't know anything about the diversification rate of an extant species group: I know in particular that $\lambda - \mu$ cannot be unreasonably low, otherwise I would not have observed this clade.

In summary, in a Bayesian context, there seems to be some flexibility in which version of the likelihood we can use: in fact, it all depends on the meaning attached to the prior.

We can of course imagine more complicated settings: estimating $\phi$ by simultaneously considering several clades in the context of a hierarchical Bayesian model, constructing integrative models jointly considering fossil and molecular data, etc. All this can lead to relatively complicated arguments and calculations for deriving the exact prior to be used -- but this is exactly where algorithmic representations can be useful.


Stadler, T., Kühnert, D., Bonhoeffer, S., & Drummond, A. J. (2013). Birth-death skyline plot reveals temporal changes of epidemic spread in HIV and hepatitis C virus (HCV). Proceedings of the National Academy of Sciences, 110(1), 228–233. doi:10.1073/pnas.1207965110

* Strictly speaking, there is a third case: we could also entertain the model in which there is in fact no available variation under $\phi$, so that $\phi$ is highly concentrated around some value $\theta_0$ (as considered in my previous post). But we don't know the location of $\theta_0$ and therefore we put an uninformative prior on it. In that case, we would combine the uninformative prior with the conditional likelihood. However, personally, I am not convinced by this specific model: I tend to think that there is variation among clades, so that the clades that we observe are statistically enriched in higher diversification rates -- a fact that should be included in, and not factored out from, what we call our empirical data.

Thursday, November 6, 2014

Should we condition on non-extinction?

Diversification studies are concerned with the problem of testing models of species diversification and estimating their parameters (such as speciation and extinction rates) based on time-calibrated phylogenies. One particular question that regularly comes up in this context is whether and how we should take into account, in the inference procedure, the fact that, by construction, we only consider non-extinct groups.

A good sign that this question is not so obvious is that many different solutions seem to be considered in practice: likelihood functions are sometimes conditional on non-extinction (Stadler et al, 2013), sometimes conditional on the age of the last common ancestor of the group (Nee et al, 1994), sometimes not conditioned at all (Maddison et al, 2007; FitzJohn, 2010), or even conditioned on the number of species sampled today (in the context of molecular dating, Rannala and Yang, 1996). So, clearly, the question requires some clarification.

As a simple illustrating case, let us assume a diversification process with constant but unknown rates of speciation ($\lambda$) and extinction ($\mu$), and let us call $\theta = (\lambda, \mu)$ our parameter vector. Technically, we also have a third parameter, $t_0$, the time of origin of the process, but let us leave this detail aside for the moment (it does not really change the main argument).

A first attempt at deriving a likelihood for this problem is simply to consider the probability of producing the observed tree $T$, conditional on the parameter $\theta$:

(1)    $p(T \mid \theta)$.

One problem, however, is that there is a positive probability that the process goes extinct and therefore returns an empty tree:

$p(T = \emptyset \mid \theta) = 1 - p(S \mid \theta) > 0$,

where $p(S \mid \theta)$ is the probability of ultimate survival of the group, given the parameters. Yet, by the very design of the experiment, the only cases that we consider in practice are non-empty trees, and we would like our likelihood to sum to one over all possible observable configurations for the data — thus, over all possible non-empty trees.
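For this constant-rate process started from a single lineage, $p(S \mid \theta)$ has a simple closed form: conditioning on the first event (a death with probability $\mu/(\lambda+\mu)$, or a birth, after which two independent copies must both die out), the extinction probability $q$ satisfies $q = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu} q^2$, whose relevant solution is $q = \mu/\lambda$ when $\lambda > \mu$, so that $p(S \mid \theta) = 1 - \mu/\lambda$. A quick fixed-point check in Python:

```python
# Illustrative rates (lam > mu, so the process is supercritical).
lam, mu = 0.3, 0.1

# Iterate q <- mu/(lam+mu) + lam/(lam+mu) * q^2; starting from 0, this
# converges to the smallest fixed point, the ultimate extinction probability.
q = 0.0
for _ in range(200):
    q = mu / (lam + mu) + lam / (lam + mu) * q * q

p_survival = 1.0 - q
print(p_survival)  # converges to 1 - mu/lam = 2/3
```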

We can avoid this ‘missing mass’ problem and restore a correct normalization of the probability by just conditioning on ultimate survival:

(2)    $p(T \mid \theta, S) = \frac {p(T \mid \theta)} {p(S \mid \theta)}$.

We could then take this conditional probability as our likelihood and proceed to estimation, for instance, by maximizing this conditional likelihood with respect to $\theta$.

One important thing to note here is that, since the unconditional likelihood (1) accounts for the probability of surviving, while the conditional likelihood (2) factors this effect out of the probability, then, everything else being equal, parameter configurations that tend to increase the chances of ultimate survival will be favored by the unconditional likelihood (1), but not by the conditional likelihood (2). In particular, the probability of survival is strongly dependent on the net expected growth rate of the species group $r = \lambda - \mu$, with larger values of $r$ (higher growth rates) implying a higher chance of ultimate survival. Thus, if you compare your estimates returned by each of the two versions of the likelihood, you will in general obtain a larger value for $r$ under (1) than under (2).
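This shift is easy to visualize with a deliberately artificial example: take a made-up conditional likelihood surface $p(T \mid \theta, S)$ peaked at some $(\lambda, \mu)$, recover the unconditional surface as $p(T \mid \theta) = p(T \mid \theta, S) \, p(S \mid \theta)$ using the survival probability $1 - \mu/\lambda$ of the supercritical constant-rate process, and compare the two maximizers.

```python
from math import exp

# Made-up conditional likelihood, peaked at (lam, mu) = (0.3, 0.2).
def cond_lik(lam, mu):
    return exp(-(((lam - 0.3) / 0.05) ** 2 + ((mu - 0.2) / 0.05) ** 2))

def survival(lam, mu):
    return 1.0 - mu / lam  # valid for lam > mu

# Grid over (lam, mu), restricted to supercritical configurations.
grid = [(0.01 * i, 0.01 * j) for i in range(10, 61) for j in range(1, 60)]
grid = [(l, m) for (l, m) in grid if l > m]

map_cond = max(grid, key=lambda t: cond_lik(*t))
map_unc = max(grid, key=lambda t: cond_lik(*t) * survival(*t))

r_cond = map_cond[0] - map_cond[1]
r_unc = map_unc[0] - map_unc[1]
print(r_cond, r_unc)
```

The unconditional maximizer is pulled toward larger $\lambda$ and smaller $\mu$, i.e. a larger net growth rate $r$, exactly as described above.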

So, which likelihood should we use? At first sight, the normalization argument suggested above would seem to imply that we should of course use the conditional version (equation 2). But is that so simple?

I am not sure. Let us try to derive an algorithmic model of our problem. Fundamentally, what our conditional likelihood means, in algorithmic terms, is this:

Nature chooses a fixed parameter configuration $\theta = (\lambda,\mu)$
repeat
- Nature runs a diversification process with rates $\lambda$ and $\mu$
until process survives

First, note that the repeat loop here refers to a true, objective, repetition of the generating process: we use a conditional likelihood precisely because we consider that a typical instance of the experiment potentially involves several repeated attempts until a surviving species clade is produced. In practice, we can imagine that Nature has 'run' many clades in parallel, only some of which did not suffer from premature extinction — and the one we are considering is randomly chosen from this surviving subset. But since we assume that all the runs are independent, it is mathematically equivalent to imagine, as suggested by the 'repeat' loop above, that Nature runs the process repeatedly until the first surviving clade is produced — and we can then identify this first successful draw with our observation.

But then, it seems that we are assuming that Nature has repeatedly, stubbornly, tried under the same parameter configuration, until succeeding. However, I am not sure that this is really a good model of what actually happens.

Imagine for instance that we are more specifically interested in one particular order of mammals; say, we want to estimate the speciation and extinction rates of Carnivores. Carnivores are just one among ~25 mammalian orders. There is no doubt that there is quite some variation in diversification rates among mammalian orders: think about rodents versus primates. But then, if there is variation in diversification rates among surviving mammalian clades, there has surely been variation, more generally, among all ‘replicates’ that Nature has run in parallel, surviving or not.

So, perhaps our model should instead be something like the following:

Nature chooses a fixed distribution $\phi$ over the parameter $\theta = (\lambda, \mu)$
repeat
- Nature chooses a random $\theta = (\lambda,\mu)$ from $\phi$
- Nature runs a diversification process with rates $\lambda$ and $\mu$
until process survives

Importantly, $\phi$ is not really a Bayesian prior. Instead, it is meant to be a representation of an objective property of the process: the true variation among clades, both extant and extinct.

Notice also that this new model introduces potential selection effects on the parameter configurations that you end up observing: those clades that survive are statistically enriched in values of $\theta$ that are less likely to lead to premature extinction — in particular, enriched in values of $\lambda$ and $\mu$ such that $r = \lambda - \mu$ is larger.

If we now try to write down the probability represented by this model, we first note that $\lambda$ and $\mu$ are now intermediate variables in the algorithm, thus they should be integrated out. In addition, we have a repeat-until loop, therefore, we condition on the exit clause -- which is also averaged over $\lambda$ and $\mu$. Altogether, the model leads to the following probability of producing our tree:

(3)    $p (T \mid \phi, S) = \frac{p(T \mid \phi)}{p(S \mid \phi)} = \frac{\int p(T \mid \theta) \phi(\theta) d \theta}{\int p(S \mid \theta) \phi(\theta) d \theta}$.

This probability is still normalized: it sums to one over all possible non-empty trees, although now for fixed $\phi$. Note that this probability is no longer a function of $\lambda$ and $\mu$, which have been integrated away, since they are now random effects. In other words, under this model, there is just no meaningful likelihood that would be a function of our parameters of interest.

If we knew $\phi$, we could still obtain a point estimate of $\theta= (\lambda, \mu)$, by taking the maximum a posteriori (MAP): that is, by calculating the value of $\theta$ maximizing the integrand of the numerator of equation 3:

(4)    $p(T \mid \theta) \phi(\theta)$.

But we generally do not know $\phi$, and without any further assumption, we cannot hope to estimate this distribution just based on one single tree, however non-empty. Still, we can at least consider the following two limiting cases:

At one extreme, $\phi$ might be highly peaked around some unknown value $\theta_0 = (\lambda_0, \mu_0)$. In the limit of a very sharp peak, $\phi$ is nearly a point mass (a ‘Dirac’) at $\theta_0$, and equation (3) then reduces to:

$p(T \mid \theta_0, S) = \frac{p(T \mid \theta_0)}{p(S \mid \theta_0)}$,

which we can maximize w.r.t. $\theta_0$.

This probability is identical to our initial conditional likelihood (equation 2) — which makes sense, since, in that regime, the process is indeed repeatedly running under a (nearly) fixed parameter configuration $\theta_0$, until producing a non-empty tree. This likelihood does not include any selection bias in favor of groups with higher growth rates — which also makes sense, since there is no meaningful variation on which this selection bias could act.

At the other extreme, $\phi$ could be a broad distribution, such that $\phi(\theta)$ is locally constant in the vicinity of the true parameter value. In that case, our posterior probability (equation 4) becomes proportional to

$p(T \mid \theta)$,

and the MAP estimate therefore reduces to the estimate obtained by maximizing the unconditional likelihood. As already mentioned above, this unconditional likelihood includes a bonus for higher growth rates $r$ — which makes sense, since there is now sufficient variation among clades to indeed experience a species selection effect on $\theta$ when considering only surviving groups. This unconditional likelihood is not normalized over observable data, but this is not a problem: this is just because, in fact, this is not really a likelihood in the present context — it is, up to a proportionality constant, a limiting case of a posterior probability.

So, what all this means is the following: if you use a conditional likelihood, this is because you believe that there is actual repetition of the stochastic process (there is a repeat-until loop in your algorithmic model of the process). Now, repetition opens the door to the possibility of variation in the value of the parameters across instances. If you also have good reasons to suspect that this variation is in fact substantial (if there is sufficient available variation in diversification rates among species clades, extant and extinct), then, apparently, you should not condition your likelihood on ultimate survival. You should not, because by doing so, you would factor out from your estimation the selection bias that is in fact induced on the diversification parameters by differential survival of the groups. Conversely, if you believe that variation in diversification rates is limiting, then you should use a conditional likelihood.

In practice, the difference between the two estimates is probably very small anyway. Nevertheless, regardless of the practical impact, it is interesting to contemplate the purely theoretical and philosophical implications of this problem. In particular, one can see here on a specific example that being a good Frequentist does not necessarily imply that you can always consider your parameter of interest as fixed. Sometimes, the design of the problem implies that you have to consider it instead as a random quantity. Or, to put it differently, insisting on using a conditional likelihood with a fixed parameter in the present case cannot be justified purely on the grounds of some preference for one statistical paradigm — it also seems to entail a specific hypothesis about the underlying objective process (no available variation in diversification rates among clades).

Conversely, the entire derivation done here is not, strictly speaking, Bayesian either: I did not invoke any subjective prior, nor any distribution that would not have an objective meaning in terms of the underlying macroevolutionary process. The MAP estimate derived above could in fact be considered as a frequentist MAP. But after all, there are other situations where Frequentists sometimes use MAP estimation: for instance, in population genetics, when you want to estimate the age of the last common ancestor of your sample using genetic sequence data (e.g. Thomson et al, 2000): there, you consider your genealogy, not as a fixed parameter, but as a random draw from Kingman’s coalescent. Therefore, you end up using a posterior distribution, and not a likelihood.


FitzJohn, R. G. (2010). Quantitative traits and diversification. Systematic Biology, 59(6), 619–633. doi:10.1093/sysbio/syq053

Maddison, W., Midford, P., & Otto, S. P. (2007). Estimating a binary character's effect on speciation and extinction. Syst Biol, 56(5), 701–710. doi:10.1080/10635150701607033

Nee, S., May, R. M., & Harvey, P. H. (1994). The Reconstructed Evolutionary Process. Philosophical Transactions of the Royal Society B: Biological Sciences, 344(1309), 305–311. doi:10.1098/rstb.1994.0068

Rannala, B., & Yang, Z. (1996). Probability distribution of molecular evolutionary trees: a new method of phylogenetic inference. Journal of Molecular Evolution, 43(3), 304–311.

Stadler, T., Kühnert, D., Bonhoeffer, S., & Drummond, A. J. (2013). Birth-death skyline plot reveals temporal changes of epidemic spread in HIV and hepatitis C virus (HCV). Proceedings of the National Academy of Sciences, 110(1), 228–233. doi:10.1073/pnas.1207965110

Thomson, R., Pritchard, J. K., Shen, P., Oefner, P. J., & Feldman, M. W. (2000). Recent common ancestry of human Y chromosomes: evidence from DNA sequence data. Proceedings of the National Academy of Sciences of the United States of America, 97(13), 7360–7365.

Friday, October 31, 2014

Algorithmic models of probabilistic inference

Rejection sampling is clearly not the most efficient Monte Carlo estimator. Even so, it is a conceptually beautiful idea: fundamentally, rejection sampling represents an algorithmic model of Bayes theorem.

The algorithm works as follows. Given some observed data $d$ and a model parameterized by $\theta$:
- randomly choose parameter $\theta$ from the prior 
- simulate data $D$, using $\theta$ as your parameter vector
- reject if $D \neq d$, otherwise, keep sample aside
Repeat until exhaustion.

Rejection sampling represents an operational definition of what we mean by conditioning a distribution on the data: it identifies the posterior distribution with what you get by filtering out, from the random draws from the prior distribution, those parameter configurations that did not reproduce what you observed. We can get a quantitative derivation of this posterior by just reading out our algorithm: the probability of getting a particular parameter configuration $\theta$ in our final sample is proportional to the probability of first drawing this configuration from the prior, $p(\theta)$, multiplied by the probability of accepting it -- which is just the probability that we simulate a dataset exactly equal to $d$, $p(d \mid \theta)$. Therefore, our final sample is distributed according to:

$p(\theta \mid d) \propto p(\theta) p(d \mid \theta)$.

Thus, rejection sampling gives us a simple intuitive 'proof' of Bayes theorem.

Personally, I find the idea of rejection sampling not just useful for explaining Bayesian inference in the context of an introductory course. I also use it regularly, reformulating my most complex problems of the moment in terms of this algorithmic representation, just to clarify my ideas. As I will try to show through some examples in later posts, rejection sampling and, more generally, algorithmic models of probabilistic settings, can sometimes be very helpful for clarifying the meaning of our models and sorting out the calculations.

But first, a few more preliminaries, in the form of three little examples of increasing complexity, illustrating how this works, and what this fundamentally means, in practice. Examples 1 and 2 are rather trivial, but they are interesting in that they represent two different epistemological meanings that Bayes theorem can have in practical situations. They also set the stage for example 3, which deals with conditional likelihoods.

Example 1. Imagine we want to infer the allele frequencies at a neutral bi-allelic locus, based on a random sample of chromosomes from a given population. Say we have sampled $n=5$ chromosomes, of which $k=3$ had allele 1 and $n-k=2$ allele 0, and we want to infer $\pi$, the frequency of allele 1 in the population. Of course, we could just take $k/n$ as a quick estimate, but if $n$ is small, there will be quite some uncertainty, which we also want to quantify.

If we are dealing with a randomly mating species, with known scaled mutation rate $\theta = 4 N_e u$, we know that the frequency distribution of neutral alleles in the population is a Beta distribution of parameter $\theta$ (figure below, thin line). We can use this distribution as our prior on $\pi$ and use the rejection sampling algorithm:

- randomly choose $\pi$ from prior 
- simulate $n$ chromosomes, each with prob. $\pi$ of being of type 1.
- count number $K$ of chromosomes carrying allele 1.
- reject if $K \neq k$.

Or, more compactly:

- randomly choose $\pi$ from prior 
- simulate $K \sim$ Binomial$(\pi, n)$
- reject if $K \neq k$.

Reading out this algorithm leads to the following posterior distribution (figure, thick line):

$p(\pi \mid k) \propto p(\pi) p(k \mid \pi)$.

This is a rather trivial case, although one for which the inferential semantics of the algorithm is particularly compelling: our locus of interest is just any neutral locus for which $k$ out of $n$ randomly chosen chromosomes happen to have allele 1. Therefore, simulating a large number $P$ of random instances of neutral loci, each with its own population frequency $\pi$, reproducing the data-acquisition protocol on each of them and then selecting only those instances that match our observation (conditioning step), gives us a typical set, of which our true observed instance is then supposed to be just an additional random member. In particular, if we could sort in ascending order the $P+1$ values of $\pi$ (the $P$ that we have simulated + the true one), then, since our unknown $\pi$ is indistinguishable from simulated replicates, it should have an equal chance of being the first, or the second, or more generally, of having rank $p$ for any $p=1..P+1$. Therefore, the unknown $\pi$ has equal chance of being contained within any one among the successive $P+1$ intervals of our simulated series (including the two open-ended intervals at both ends): this is the epistemological meaning of a posterior distribution.
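Here is this rejection sampler written out in Python, with an illustrative value of $\theta$ (assuming the symmetric Beta$(\theta, \theta)$ form of the neutral frequency distribution); since the Beta prior is conjugate to the binomial, the accepted draws can be checked against the exact Beta$(\theta + k, \theta + n - k)$ posterior.

```python
import random

random.seed(2)

theta = 1.0      # illustrative scaled mutation rate
n, k_obs = 5, 3  # the observed data from the example

def one_draw():
    # Nature chooses pi from the Beta(theta, theta) frequency distribution,
    # simulates n chromosomes, and we reject unless K matches the observed k.
    while True:
        pi = random.betavariate(theta, theta)
        k = sum(random.random() < pi for _ in range(n))
        if k == k_obs:
            return pi

draws = [one_draw() for _ in range(20000)]
mean_pi = sum(draws) / len(draws)

# Conjugate posterior Beta(theta + k, theta + n - k), mean (theta + k)/(2*theta + n)
exact_mean = (theta + k_obs) / (2 * theta + n)
print(round(mean_pi, 3), round(exact_mean, 3))
```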

Example 2. As another slightly less trivial problem, suppose now that we have genotyped, not just one, but $M$ unlinked neutral bi-allelic loci (or sites) in our $n$ individuals. We now want to infer $\theta = 4 N_e u$, based on the allele counts $k_i$, $i=1..M$, across loci. Let us call $k$ the vector of observations. We have prior information about $\theta$, summarized in some prior distribution $p(\theta)$. In that case, the algorithm would run as follows:

randomly choose $\theta$ from $p(\theta)$

for site i=1..M
- randomly choose $\pi_i$ from $p(\pi_i \mid \theta)$
- simulate $K_i \sim$ Binomial$(\pi_i, n)$

reject unless $K_i = k_i$ for all $i=1..M$.

Here, the $\pi_i$’s are now just temporary random variables. The algorithm can therefore be written more compactly as:

randomly choose $\theta$ from $p(\theta)$
for i=1..M
- randomly simulate $K_i \sim p(K_i \mid \theta) = \int p(K_i \mid \pi_i) p(\pi_i \mid \theta) d \pi_i$
reject unless $K_i = k_i$ for all $i=1..M$.

where we have averaged out over all possible ways we could have sampled the $\pi_i$'s. The 'for' loop leads to a product, which gives me the following posterior distribution:

$p(\theta \mid k) \propto p(\theta) \prod_i p(k_i \mid \theta)$.
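The multi-locus sampler differs from example 1 only in the 'for' loop over sites. A sketch (again assuming a symmetric Beta$(\theta, \theta)$ for $p(\pi_i \mid \theta)$, and, purely for illustration, an exponential prior on $\theta$):

```python
import random

def example2_rejection(prior_draw, n, k_obs, n_accept=400, seed=1):
    """Rejection sampler for example 2: posterior on theta given M unlinked loci.

    prior_draw(rng) samples theta from p(theta). For each site, drawing
    pi_i ~ Beta(theta, theta) and then K_i ~ Binomial(pi_i, n) produces K_i
    from the marginal p(K_i | theta), exactly as in the compact algorithm.
    """
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        ok = True
        for k in k_obs:                     # the 'for' loop over the M sites
            pi = rng.betavariate(theta, theta)
            K = sum(rng.random() < pi for _ in range(n))
            if K != k:                      # reject unless K_i = k_i for all i
                ok = False
                break
        if ok:
            accepted.append(theta)
    return accepted

# toy run: M = 2 sites scored in n = 5 chromosomes, exponential prior of mean 1
sample = example2_rejection(lambda rng: rng.expovariate(1.0), 5, [1, 4])
```

Note that the acceptance rate decays geometrically with $M$, which is why plain rejection sampling is only practical for toy-sized data; the algorithm is best read as a semantic model of the posterior, not as an efficient inference method.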

Still fairly simple. One little detail, however, concerning the epistemological meaning of the whole procedure in this new case: thus far, the formulation of the algorithm has been slightly incomplete. When I say ‘choose’, perhaps I should be more specific about who exactly ‘chooses’ here. In the case of problem 1:

Nature chooses $\pi$ from the neutral allele frequency distribution $p(\pi \mid \theta)$ 
Nature simulates $K \sim p(K \mid \pi)$
I observe and reject if $K \neq k$.

The fact that random draws are entirely under Nature's control is precisely why inference by conditioning is so compelling in that case. But for problem (2), it is a bit different:

I randomly choose $\theta$ from my prior $p(\theta)$ and imagine it is the true $\theta$

for i=1..M
- Nature randomly chooses $\pi_i \sim p(\pi_i \mid \theta)$
- Nature simulates $K_i \sim p(K_i \mid \pi_i)$

reject unless $K_i = k_i$ for all $i=1..M$.

This (somewhat less compelling) application of rejection sampling is more specifically what we call Bayesian inference: happily mixing subjective priors ('I choose') with distributions that are meant to represent objective processes ('Nature chooses'). Epistemologically, what we are now doing is this: we are filtering out, from our prior beliefs about $\theta$, those possible values that don't easily reproduce the observed data -- which still makes sense, although it assumes that we are comfortable with translating beliefs into probability distributions and vice versa. I won't discuss here the philosophical implications of this methodological choice. I will in due time, but for the moment, I just wanted to be as accurate (and as honest...) as possible in my formulation. Making this distinction between the subjective and the objective components of the model can be useful in less trivial contexts.

Example 3. Now, a more complicated problem: I still want to estimate $\theta = 4 N_e u$ based on $M$ unlinked loci. But the crucial difference with the previous example is that I have removed all the constant sites of my dataset, so as to consider only segregating sites: that is, only loci that happen to show at least one variant in my sample of $n$ individuals. So, by design, for all $i=1..M$, one has $0 < k_i < n$ (whereas in the previous case, one had instead $0 \le k_i \le n$). Let us denote this additional constraint by $S$: the sample is entirely made of segregating sites. For $i=1..M$, $S_i$ stands for the constraint that site $i$ is segregating.

I should really take into account this new mode of data acquisition in my inference. If I do not, I will greatly overestimate $\theta$ (since it would take very high values of $\theta$ to make it likely that $M$ randomly chosen neutral sites were all segregating). Thus, I clearly need to work out the probabilistic calculations for my constrained model. 

Since sites are unlinked and, therefore, independent, collecting $M$ segregating sites among a large collection containing a majority of non-segregating sites is just like letting Nature 'run' the process, site by site, intercepting only those that turn out to be segregating, until obtaining a total number of $M$ accepted sites (and then I decide whether or not to reject the whole sample). This leads to the following algorithmic representation:
I randomly choose $\theta$ from $p(\theta)$ and imagine it is the true $\theta$

for i=1..M
  - Nature randomly chooses $\pi_i \sim p(\pi_i \mid \theta)$
  - Nature randomly simulates $K_i \sim p(K_i \mid \pi_i)$
  until $0 < K_i < n$

I reject unless $K_i = k_i$ for all $i=1..M$.

Note that there are now two levels of rejection: one that is ‘for free’ (in the repeat-until loop), and one that is not (the final step). By ‘free’, I mean: not accounted for in our total acceptance rate -- and, therefore, not accounted for in our total probability.
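This two-level scheme can be sketched directly (same illustrative assumptions as before: Beta$(\theta, \theta)$ site frequencies; here a uniform prior on $[0.1, 5]$ for $\theta$, bounded away from 0 so that the free inner loop does not spin for long on nearly-monomorphic regimes):

```python
import random

def example3_rejection(prior_draw, n, k_obs, n_accept=300, seed=2):
    """Example 3: only segregating sites (0 < K_i < n) ever reach the data.

    The inner while-loop re-draws a site until it segregates: this is the
    'free' rejection, absorbed into the data acquisition protocol. Only the
    final comparison with k_obs counts toward the acceptance rate.
    """
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw(rng)
        ok = True
        for k in k_obs:
            while True:                     # free rejection: Nature's filter
                pi = rng.betavariate(theta, theta)
                K = sum(rng.random() < pi for _ in range(n))
                if 0 < K < n:
                    break
            if K != k:                      # costly rejection: conditioning on data
                ok = False
                break
        if ok:
            accepted.append(theta)
    return accepted

sample = example3_rejection(lambda rng: rng.uniform(0.1, 5.0), 5, [1, 4])
```

The only difference from the example 2 sketch is the inner 'while True' loop: it changes the distribution each $K_i$ is effectively drawn from, which is precisely what the conditional likelihood below accounts for.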

Now, if we want to write down the posterior, as above, we can first eliminate the $\pi_i$'s:

Randomly choose $\theta$ from $p(\theta)$

for i=1..M
  - Nature randomly simulates $K_i \sim p(K_i \mid \theta) = \int p(K_i \mid \pi_i) p(\pi_i \mid \theta) d \pi_i$
  until $0 < K_i < n$

Reject unless $K_i = k_i$ for all $i=1..M$.

Then, for a given $\theta$ and a given $i$, carefully interpreting the repeat-until loop leads to the following probability for $k_i$, which is now conditioned on the constraint $S_i$ being fulfilled:

(1) $p(k_i \mid \theta, S_i) = p(k_i \mid \theta) / p(S_i \mid \theta)$.

This conditional likelihood is defined only over $k_i = 1..n-1$ (now excluding 0 and n, since I won't exit the repeat-until loop as long as these values are being produced) and has been suitably normalized, by dividing it by the probability of getting a value for $k_i$ in the allowed interval (i.e. by indeed getting a segregating site given $\theta$):

$p(S_i \mid \theta) = \sum_{l=1}^{n-1} p(K_i = l \mid \theta)$.
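Equation (1) and this normalization are easy to verify numerically. Under the illustrative Beta$(\theta, \theta)$ assumption used in the sketches above, the marginal $p(K_i = k \mid \theta)$ is beta-binomial, and dividing by $p(S_i \mid \theta)$ turns it into a proper distribution over the segregating counts $k = 1..n-1$:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """Log of the Beta function, via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def marginal_pmf(k, n, theta):
    """p(K = k | theta): Binomial(pi, n) with pi ~ Beta(theta, theta) integrated out."""
    return comb(n, k) * exp(log_beta(k + theta, n - k + theta) - log_beta(theta, theta))

def conditional_pmf(k, n, theta):
    """p(k | theta, S) = p(k | theta) / p(S | theta), defined only for 0 < k < n."""
    p_seg = sum(marginal_pmf(l, n, theta) for l in range(1, n))
    return marginal_pmf(k, n, theta) / p_seg

n, theta = 5, 0.7
total_marginal = sum(marginal_pmf(k, n, theta) for k in range(n + 1))
total_conditional = sum(conditional_pmf(k, n, theta) for k in range(1, n))
# both totals equal 1 up to floating-point error: the conditional likelihood
# is correctly normalized over the segregating counts
```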

Then, according to the algorithm, I just need to take the product over the $M$ sites (the 'for' loop) and combine this with my prior:

$p(\theta \mid k) \propto p(\theta) \prod_i p(k_i \mid \theta, S_i)$,

or, for short:

(2)    $p(\theta \mid k) \propto p(\theta) p(k \mid \theta, S)$.

This particular problem illustrates one general rule:

filtering the data $\implies$ repeat-until loops $\implies$ conditional likelihoods.

However, it is also interesting for another reason: there is no simple and safe way to make $S$ appear at the correct place in equation 2 by just blindly manipulating symbolic equations, as we often do in applied Bayesian inference, shuffling symbols around, sometimes to the left, sometimes to the right of the conditioning sign. We could try to write, for instance:

$p(\theta \mid k, S) \propto p(\theta \mid S) p(k \mid \theta, S)$,

but this looks different from equation 2. We could then be tempted to expand the first factor:

$p(\theta \mid S) \propto p(\theta) p(S \mid \theta)$

and combine it with the expanded form of the conditional likelihood (equation 1), leading to:

$p(\theta \mid k, S) \propto p(\theta) p(S \mid \theta) p(k \mid \theta) / p(S \mid \theta) \propto p(\theta) p(k \mid \theta)$,

but then, we are back to the simple posterior of problem 2, which is obviously wrong. One main problem here is that we have an ambiguity in the meaning of $p(\theta \mid k, S)$: usually, this would mean ‘the probability of $\theta$, given the observed data $k$ and given that $S$ is true’. However, what we really want in the present case is: ‘the probability of $\theta$, given the observed data $k$ and given that my data acquisition protocol is constrained to deliver something compatible with $S$’, which is different. Symbolic notation in terms of '$p(A \mid B)$' is just too sloppy. 

We could try to come up with better notations and still play the game of automatic symbolic calculus. However, when facing complex problems, perhaps it is just more helpful, and more insightful, to directly rely on a semantic model of our probabilities. Rejection sampling, and more generally algorithmic models of our sampling protocol, provide such a semantic model. As a result, they represent a relatively foolproof method for deriving the correct version of our posterior distribution (or of our likelihood): we just need to spell out each of the probabilities, such as operationally defined by the algorithm, which gives us the solution.

P.S. (Nov 1st):  In hindsight, I realize that there is something fallacious in this final argument. You can have a good semantic model and good notations. Here, I should perhaps be more careful and denote the probability that $A$ is true given that $B$ is true and given that $C$ will always be fulfilled owing to the data acquisition protocol, by $p(A \mid B;C)$. A correct form for Bayes theorem in the present case would then be:

$p(\theta \mid k; S) \propto p(\theta ; S) p(k \mid \theta; S)$

Since the data acquisition protocol does not influence the prior:

$p(\theta; S) = p(\theta)$

As for our likelihood, under this protocol, as suggested by the algorithmic argument above, it is equal to:

(3)    $p(k \mid \theta; S) = p(k \mid \theta, S) = p(k \mid \theta) / p(S \mid \theta) $

and therefore, equation 2 should be more properly written as:

(2b)    $p(\theta \mid k; S) \propto p(\theta) p(k \mid \theta, S)$.

Tuesday, October 28, 2014

Bayesian automatic control of model complexity

Still concerning Bayes factors and model selection, I wanted to show a little example, illustrating how one can rely on Bayesian model averaging and hierarchical priors to implement efficient and automatic control of model complexity -- model auto-tuning, as I like to call it.

First, I have simulated a single-gene amino-acid alignment of ~600 positions under a very simple model, using the LG amino-acid empirical matrix (Le and Gascuel, 2008) and assuming no variation in substitution rates among sites. For this simulation, I used as a template an alignment of elongation factor 2 in 30 species across eukaryotes. The simulation was done using the tree estimated on this dataset, thus under empirically reasonable settings.

Then, I have analyzed this simulated dataset directly using one of the most complex models I have implemented thus far: CAT-GTR + Gamma (Lartillot and Philippe, 2004). This model implements variation among sites in the rate of substitution (distributed across sites according to a discretized gamma distribution, with 4 categories), but also in the equilibrium frequency profile over the 20 amino-acids, using a Dirichlet process non-parametric prior. Finally, the 190 relative exchangeability parameters are also considered as unknown and are therefore estimated directly on the dataset.

According to accepted wisdom, one would think that this model is way too complex, way too rich, to be applied to such a small dataset. Even using a GTR model, instead of a LG model, in the present case would often be considered as excessive (190 free parameters! on a 600-site alignment!). Therefore, my estimation will certainly suffer from excessive variance.

Is this true?

Let us see, by just trying both models: the complex model, CAT-GTR + Gamma, and the true model, LG without Gamma, and see what comes up in terms of phylogenetic estimation (I let you guess which picture corresponds to which model):

Where is the excess variance here? There is virtually no detectable difference between the two estimates.

This represents a typical example showing that a complex model can automatically adapt to the specific case you are dealing with, making model selection totally dispensable. All this, however, is true only if your priors have been correctly designed, so as to get efficient shrinkage of your parameters, in particular under those circumstances where the data in fact contain none of the subtle variation that your model is meant to account for. So, perhaps I should spend some time explaining how my priors have been elicited in the present case. The main point is that these priors are hierarchical.

For the shape parameter $\alpha$ of the gamma distribution of rates across sites, I have used a log-uniform prior restricted between 0.1 and 100, thus allowing a broad range of variance among sites in relative rates. Large values of $\alpha$ effectively implement uniform rates across sites.

A Dirichlet process prior is used over the unknown distribution of amino-acid equilibrium frequency profiles across sites. This non-parametric prior effectively runs over infinite mixtures of amino-acid profiles. There are two main things that have to be controlled in the distribution of the components of the mixture: the relative weights of the components and the dispersion of the profiles attached to them.

The weights are regulated by a granularity parameter $\kappa$. For small values of $\kappa$, the mixture is such that one component of the mixture has an overwhelmingly large weight compared to the infinite series of all other components. In this regime, nearly all sites will be allocated to the large-weight component, and the model will implement strictly identical amino-acid profiles across sites. In contrast, for large values of $\kappa$, the Dirichlet process implements many components of very small weights, so that each site effectively has its own equilibrium frequency profile over amino-acids. Here, $\kappa$ is endowed with an exponential prior of mean 10, so that there is some room for implementing potentially rich mixtures by setting $\kappa$ to reasonably large values, but also for reducing the number of components to 1 or 2 (essentially, when $\kappa$ is less than 0.1).
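The effect of $\kappa$ on the weights can be seen by simulating the standard stick-breaking representation of the Dirichlet process. A sketch (assuming the usual Beta$(1, \kappa)$ stick-breaking construction; component profiles themselves are omitted, only the weights are simulated):

```python
import random

def stick_breaking_weights(kappa, n_comp=1000, seed=3):
    """First n_comp weights of a Dirichlet process, via stick breaking.

    Each step breaks off a Beta(1, kappa) fraction of the remaining stick.
    Small kappa: the first break takes nearly everything, leaving one
    dominant component; large kappa: many components with tiny weights.
    """
    rng = random.Random(seed)
    remaining, weights = 1.0, []
    for _ in range(n_comp):
        v = rng.betavariate(1.0, kappa)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

w_small = stick_breaking_weights(0.01)   # granularity parameter kappa = 0.01
w_large = stick_breaking_weights(50.0)   # kappa = 50
```

With $\kappa = 0.01$ one weight carries almost all the mass (the single-component regime), whereas with $\kappa = 50$ the mass is spread thin over many components.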

As for the distribution of profiles across components, it is regulated by a concentration parameter, $\delta$. For large values of $\delta$, amino-acid profiles implemented by the mixture will all be very similar, whereas for small value of $\delta$, they will tend to be very different. Here, $\delta$ is endowed with an exponential prior of mean 20. So, it does not especially favor very similar components, but it can, if needed (when its value is typically of the order of 50 or above).

Finally, the prior on the exchangeability parameters is centered around the LG exchangeabilities, with a concentration hyperparameter $\zeta$, itself endowed with a log-uniform distribution between 0.1 and 100. Large values of $\zeta$ effectively shrink the exchangeabilities very close to those of LG.

Therefore, the most important priors and distributions are tuned by four key parameters, such that, with large $\alpha$, large $\zeta$, small $\kappa$ and/or large $\delta$, the model reduces to the LG model with uniform rates and profiles across sites. Note that there are two redundant ways the model can come back to virtually identical profiles across sites: either by implementing one single component of overwhelming weight (small $\kappa$), or by just letting the amino-acid profiles of the mixture be all very close to each other (large $\delta$).

So, what happens exactly when I run this model on the dataset simulated under the simple LG-Uniform model?

First, the posterior distribution on $\alpha$, the shape parameter of the distribution of rates across sites (left), is shifted to very large values: $\alpha$ is inferred to be so large that, essentially, all relative rates of substitution across sites are all strongly shrunken around 1 (right) -- in other words, the model effectively implements uniform rates across sites, as should be done in the present case.

Then, what about variation in amino-acid profiles among sites? The figure below shows the posterior distribution on the number of occupied components:

as you can see, we get a large posterior probability for the configuration where all sites are allocated to one single component (pp = 0.6). Even when sites are distributed over more than one component (which still happens around 40% of the time), the components effectively implement nearly identical frequency profiles: $\delta$ is on average around 80 (not shown). Thus, efficient shrinkage of equilibrium frequency profiles across sites is achieved by a combination of discrete model selection (by playing on the number of occupied profiles) and continuous model averaging (by shrinking component-specific frequency profiles toward their mean).

We can have a more direct look at the average profiles estimated across sites, and check that they are indeed, for all practical purposes, constant across sites:

It is also interesting to get a look at the accuracy of our estimation of the estimated relative exchangeabilities, by comparing them to the true values:

Such a beautifully accurate estimation of the relative exchangeabilities may in fact suggest that using a model centered on LG is just too easy... So, what happens if I use instead a uniform prior over exchangeabilities? In that case, shrinkage is somewhat less efficient, both in terms of the accuracy of the estimated exchangeabilities:

and in terms of the number of occupied components in the mixture (the posterior probability of getting just one occupied component is now less than 0.2):

On the other hand, $\delta$ is still large (again of the order of 80), which probably contributes to inducing a series of very similar amino-acid profiles across sites:

Finally, the estimated consensus tree is exactly the same as with the model centered on LG:

Thus, globally, there is no real loss incurred by using the more complex model, when in fact a much simpler model would be applicable to our data.

In contrast, on the real sequence alignment, there is rate variation among sites, and there is also a great deal of variation in amino-acid preferences along the sequence. We can get a clear picture of that by just running the CAT-GTR + Gamma model directly on the original empirical dataset of elongation factor sequences. Here, the posterior distributions that we get are totally different, both for the distribution of rates across sites ($\alpha$ is now clearly less than 1, of the order of 0.55) and for the distribution of amino-acid profiles across sites:

Other analyses with CAT-GTR on a now relatively large set of cases suggest that this is a very general pattern: on empirical sequence data, one always sees substantial variation among sites of both rates and amino-acid propensities.

Overall, one should probably not over-interpret those results, which all entirely rely on the fact that the true model is included in the set of possible configurations explored by CAT-GTR. In the real world, things are much more complicated: you almost certainly have other types of model violations (compositional heterogeneity among taxa, epistatic interactions between positions, due to the tertiary structure of the protein), which have been totally ignored in our analysis. One big question is, for instance: how does a model such as CAT-GTR react to these violations, compared to LG or to other models? Not an easy question, for sure.

However, these are not the kind of issues that you can hope to address by just comparing the empirical fit of the models. Addressing such problems requires more substantive arguments, about exactly what you want to estimate and how violations are likely to influence your estimation. In this direction, we may not know the specific impact of other model violations on CAT-GTR, but we already know that, by using LG, we are for sure creating one additional model violation, since we are ignoring what appears to be substantial and ubiquitous variation among sites in amino-acid propensities -- and this does have an impact on the accuracy of phylogenetic estimation (Lartillot et al, 2007).

But in any case, these model violation problems distract us from the main point of this post, which is more theoretical, being concerned with how Bayesian inference deals with model complexity. And the main message is this: unlike what happens in the context of maximum likelihood estimation, where we have to compensate for the intrinsic advantage of more complex models by implementing explicit model penalization and explicit model selection, in a Bayesian inference context, and using correctly specified priors, model complexity is just under automatic control. As a result: (1) it is possible to use a complex model by default, even on small datasets; this model will automatically reduce to the simpler model if needed; (2) therefore, we do not need Bayes factors to be sure that we are using the right model, with the right complexity; (3) nor do we need Bayes factors to see that, on a real dataset, there is variation in rate and in equilibrium frequency profiles: we just see it in the posterior distribution. And if we are really anxious, we can always check, on simulations under simpler models, that our Bayesian analysis would not infer such a variation if there weren't any.


Le, S. Q., & Gascuel, O. (2008). An improved general amino acid replacement matrix. Molecular Biology and Evolution, 25(7), 1307–1320. doi:10.1093/molbev/msn067

Lartillot, N., & Philippe, H. (2004). A Bayesian mixture model for across-site heterogeneities in the amino-acid replacement process. Molecular Biology and Evolution, 21(6), 1095–1109. doi:10.1093/molbev/msh112

Lartillot, N., Brinkmann, H., & Philippe, H. (2007). Suppression of long-branch attraction artefacts in the animal phylogeny using a site-heterogeneous model. BMC Evolutionary Biology, 7 Suppl 1, S4. doi:10.1186/1471-2148-7-S1-S4