Stationary Distribution of Irreducible Continuous Time Markov Chain
Students of linear algebra may note that the equation \(\pi P = \pi\) looks very similar to the column vector equation \(M v = \lambda v\) for eigenvalues and eigenvectors, with \(\lambda = 1\). In fact, by transposing the matrices, \[\pi P = \pi \iff P^{T} \pi^{T} = \pi^{T}.\] In other words, the transposed transition matrix \(P^{T}\) has eigenvectors with eigenvalue \(1\) that are stationary distributions expressed as column vectors. Therefore, if the eigenvectors of \(P^{T}\) are known, then so are the stationary distributions of the Markov chain with transition matrix \(P\). In short, the stationary distribution is a left eigenvector (as opposed to the usual right eigenvectors) of the transition matrix.
When there are multiple eigenvectors associated with the eigenvalue \(1\), each such eigenvector gives rise to an associated stationary distribution. However, this can only occur when the Markov chain is reducible, i.e. has multiple communicating classes.
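To make the left-eigenvector characterization concrete, here is a minimal NumPy sketch; the \(2 \times 2\) matrix in it is an arbitrary illustration, not one taken from this article.

```python
import numpy as np

# An arbitrary 2-state transition matrix (rows sum to 1), used only for illustration.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Left eigenvectors of P are right eigenvectors of its transpose.
eigenvalues, eigenvectors = np.linalg.eig(P.T)

# Pick out the eigenvector whose eigenvalue is (numerically) 1.
idx = np.argmin(np.abs(eigenvalues - 1))
v = np.real(eigenvectors[:, idx])

# Rescale so the entries sum to 1, turning the eigenvector into a distribution.
pi = v / v.sum()

print(pi)      # stationary distribution, here [0.8, 0.2]
print(pi @ P)  # equal to pi, confirming pi P = pi
```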
In genetics, one method for identifying dominant traits is to pair a specimen with a known hybrid. Their offspring is once again paired with a known hybrid, and so on. In this way, the probability of a particular offspring being purely dominant, purely recessive, or hybrid for the trait is given by the table below.
| States | Child Dominant | Child Hybrid | Child Recessive |
| --- | --- | --- | --- |
| Parent Dominant | \(\frac{1}{2}\) | \(\frac{1}{2}\) | \(0\) |
| Parent Hybrid | \(\frac{1}{4}\) | \(\frac{1}{2}\) | \(\frac{1}{4}\) |
| Parent Recessive | \(0\) | \(\frac{1}{2}\) | \(\frac{1}{2}\) |

What is a stationary distribution for this Markov chain?
The transition matrix is \[P = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix}.\] The transpose of this matrix has eigenvalues satisfying the equation \[\det\left(P^{T} - \lambda I\right) = 0.\] It follows that \(\lambda\left(\lambda - \frac{1}{2}\right)\left(\lambda - 1\right) = 0\). So the eigenvalues are \(0\), \(\frac{1}{2}\), and \(1\). The eigenvalue \(0\) gives rise to the eigenvector \((1, -2, 1)^{T}\), the eigenvalue \(\frac{1}{2}\) gives rise to the eigenvector \((1, 0, -1)^{T}\), and the eigenvalue \(1\) gives rise to the eigenvector \((1, 2, 1)^{T}\). The only possible candidate for a stationary distribution is the final eigenvector, as all others include negative values.
Then, normalizing \((1, 2, 1)\) so that its entries sum to \(1\), the stationary distribution must be \(\left(\frac{1}{4}, \frac{1}{2}, \frac{1}{4}\right)\).
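As a quick numerical check of the worked example, here is a NumPy sketch using the matrix written out in the solution above:

```python
import numpy as np

# Transition matrix from the hybrid-crossing example,
# with states ordered (Dominant, Hybrid, Recessive).
P = np.array([[1/2, 1/2, 0  ],
              [1/4, 1/2, 1/4],
              [0,   1/2, 1/2]])

eigenvalues, eigenvectors = np.linalg.eig(P.T)
print(np.round(eigenvalues, 6))   # 0, 0.5, and 1, in some order

# Normalize the eigenvector for eigenvalue 1 so its entries sum to 1.
idx = np.argmin(np.abs(eigenvalues - 1))
pi = np.real(eigenvectors[:, idx])
print(pi / pi.sum())              # [0.25, 0.5, 0.25]
```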
Find a stationary distribution for the 2-state Markov chain with stationary transition probabilities given by the following graph:
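Since the graph itself is not reproduced above, the probabilities \(a\) and \(b\) in the sketch below are placeholders. For any two-state chain that leaves state \(1\) with probability \(a\) and state \(2\) with probability \(b\) (with \(a + b > 0\)), the stationary distribution has the closed form \(\left(\frac{b}{a+b}, \frac{a}{a+b}\right)\).

```python
import numpy as np

# Placeholder transition probabilities (the original graph is not shown here):
# a = P(state 1 -> state 2), b = P(state 2 -> state 1).
a, b = 0.3, 0.7

P = np.array([[1 - a, a],
              [b, 1 - b]])

# Closed-form stationary distribution of a generic 2-state chain.
pi = np.array([b, a]) / (a + b)

print(pi)      # [0.7, 0.3] for these placeholder values
print(pi @ P)  # equal to pi
```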
The limiting distribution of a Markov chain seeks to describe how the process behaves a long time after it begins. For it to exist, the following limit must exist for any states \(i\) and \(j\): \[\lim_{n \to \infty} P\left(X_n = j \mid X_0 = i\right).\] Furthermore, for any state \(i\), the following sum must be \(1\): \[\sum_{j} \lim_{n \to \infty} P\left(X_n = j \mid X_0 = i\right) = 1.\] This ensures that the numbers obtained do, in fact, constitute a probability distribution. Provided these two conditions are met, then the limiting distribution of a Markov chain with \(X_0 = i\) is the probability distribution given by \[\pi_j = \lim_{n \to \infty} P\left(X_n = j \mid X_0 = i\right).\]
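One way to probe this limit numerically is to compare successive powers of the transition matrix; the sketch below reuses the illustrative \(2 \times 2\) matrix from the earlier snippet (an assumed example, not part of the original text).

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Entry (i, j) of P^n is P(X_n = j | X_0 = i), so the limiting distribution
# appears as the common row that large powers of P settle into.
for n in (1, 5, 20, 100):
    print(n, np.linalg.matrix_power(P, n))
# The rows agree to many decimal places by n = 100, and each row sums to 1.
```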
For any time-homogeneous Markov chain that is aperiodic and irreducible, \(P^{n}\) converges to a matrix with all rows identical and equal to the stationary distribution \(\pi\). Not all stationary distributions arise this way, however. Some stationary distributions (for instance, those of certain periodic chains) only satisfy the weaker condition that the average number of times the process is in state \(j\) over the first \(N\) steps approaches the corresponding value of the stationary distribution. That is, if \(\pi\) is the stationary distribution, then \[\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} P\left(X_n = j \mid X_0 = i\right) = \pi_j.\]
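The time-average statement can be illustrated by simulating a periodic chain and tracking how often each state is visited; the 3-state chain below is an assumed example (not from the text) with period \(2\) and stationary distribution \(\left(\frac{1}{4}, \frac{1}{2}, \frac{1}{4}\right)\).

```python
import numpy as np

rng = np.random.default_rng(0)

# A periodic 3-state chain: states 0 and 2 always jump to 1, and state 1
# jumps to 0 or 2 with equal probability. Its stationary distribution is (1/4, 1/2, 1/4).
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

N = 100_000
state = 0
visits = np.zeros(3)
for _ in range(N):
    state = rng.choice(3, p=P[state])  # take one step of the chain
    visits[state] += 1

# Fraction of the first N steps spent in each state: approximately (0.25, 0.5, 0.25),
# even though P^n itself never converges for this periodic chain.
print(visits / N)
```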
Not all stationary distributions are limiting distributions.
Consider the two-state Markov chain with transition matrix \[P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.\] As \(n\) increases, there is no limiting behavior to \(P^{n}\). In fact, the expression simply alternates between evaluating to \(P\) and \(I\), the identity matrix. However, the system has stationary distribution \(\left(\frac{1}{2}, \frac{1}{2}\right)\), since \[\left(\tfrac{1}{2}, \tfrac{1}{2}\right) \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \left(\tfrac{1}{2}, \tfrac{1}{2}\right).\] So, not all stationary distributions are limiting distributions. Sometimes no limiting distribution exists!
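Both halves of this example can be checked numerically; the sketch below uses the flip-flop matrix from the example: powers of \(P\) keep alternating, yet the average of the first \(N\) powers converges and every row of that average tends to \(\left(\frac{1}{2}, \frac{1}{2}\right)\).

```python
import numpy as np

# The flip-flop chain from the example above.
P = np.array([[0, 1],
              [1, 0]])

# Powers of P alternate between P (odd n) and the identity matrix (even n).
print(np.linalg.matrix_power(P, 7))
print(np.linalg.matrix_power(P, 8))

# The running average of the first N powers still converges, and each of its
# rows approaches the stationary distribution (1/2, 1/2).
N = 1000
avg = sum(np.linalg.matrix_power(P, n) for n in range(1, N + 1)) / N
print(avg)
```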
For time-homogeneous Markov chains, any limiting distribution is a stationary distribution.
Let the Markov chain have transition matrix \(P\). Then, suppose \[\lim_{n \to \infty} P^{n} = \begin{pmatrix} \pi_1 & \pi_2 & \cdots & \pi_N \\ \pi_1 & \pi_2 & \cdots & \pi_N \\ \vdots & \vdots & \ddots & \vdots \\ \pi_1 & \pi_2 & \cdots & \pi_N \end{pmatrix} = \Pi.\] That is, the limit is an \(N \times N\) matrix with all rows equal to \(\pi = (\pi_1, \pi_2, \ldots, \pi_N)\). Then note that \[\Pi = \lim_{n \to \infty} P^{n+1} = \left(\lim_{n \to \infty} P^{n}\right) P = \Pi P.\] Inspecting one row of the left matrix being multiplied on the right-hand side, it becomes clear that \(\pi P = \pi\). Thus, the limiting distribution is also a stationary distribution.
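A numerical sanity check of this argument, again with the illustrative \(2 \times 2\) matrix used in the earlier sketches (an assumption, not from the text): take a large power of \(P\), read off one row, and verify that it is fixed by \(P\).

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Approximate the limiting matrix and take one of its (identical) rows.
pi = np.linalg.matrix_power(P, 200)[0]

# The limiting row is unchanged (up to rounding) by one more application of P,
# i.e. it satisfies pi P = pi and is therefore a stationary distribution.
print(np.allclose(pi @ P, pi))  # True
```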
- Matrices
- Eigenvalues and Eigenvectors
- Markov Chains
- Ergodic Markov Chains
Source: https://brilliant.org/wiki/stationary-distributions/