
Lab 07: Markov Chains and Stationary Distributions

This practical explores the fundamentals of Markov chains, emphasizing the Markov property and stationary distributions. These concepts are foundational in probability theory and are widely applied in fields such as machine learning, natural language processing, and queueing theory.

Introduction

A Markov chain is a stochastic process that undergoes transitions from one state to another based on a set of probabilities. It satisfies the Markov property, meaning the next state depends only on the current state, not the sequence of previous states.

Key Concepts:

  • Markov Property:

    P(X_{n+1} = x \mid X_n = x_n, X_{n-1} = x_{n-1}, \dots, X_0 = x_0) = P(X_{n+1} = x \mid X_n = x_n)
  • Stationary Distribution:
    A distribution \pi is stationary if

        \pi P = \pi

    where P is the transition matrix.

These properties are essential in understanding equilibrium behavior in random processes.
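Before turning to simulation, the defining equation \pi P = \pi can be solved directly: \pi is a left eigenvector of P with eigenvalue 1. The sketch below (using the same 3-state transition matrix introduced later in this lab) finds it with NumPy's eigendecomposition.

```python
import numpy as np

# A minimal sketch: solve pi P = pi by finding the left eigenvector of P
# associated with eigenvalue 1, then normalising it into a distribution.
# This uses the same 3-state example matrix as the simulation section.
P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.3, 0.3, 0.4],
])

eigenvalues, eigenvectors = np.linalg.eig(P.T)  # left eigenvectors of P
idx = np.argmin(np.abs(eigenvalues - 1))        # eigenvalue closest to 1
pi = np.real(eigenvectors[:, idx])
pi = pi / pi.sum()                              # normalise to sum to 1

print("Stationary distribution:", pi)
print("Check pi P == pi:", np.allclose(pi @ P, pi))
```

This analytic solution gives a ground truth against which simulation-based estimates can be compared.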


Markov Chain Simulation in Python

1. Define the Transition Matrix

```python
import numpy as np

# Example: 3-state Markov chain
P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.3, 0.3, 0.4],
])
```
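A transition matrix is only valid if every row is a probability distribution. A quick sanity check (a small sketch, not part of the original lab code) catches typos before simulating:

```python
import numpy as np

# Sanity check: each row of a transition matrix must be non-negative
# and sum to 1, since row i is the distribution of the next state
# given current state i.
P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
    [0.3, 0.3, 0.4],
])

assert np.all(P >= 0), "probabilities must be non-negative"
assert np.allclose(P.sum(axis=1), 1.0), "each row must sum to 1"
```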

2. Simulate the Markov Chain

```python
def simulate_markov_chain(P, initial_state, steps):
    states = [initial_state]
    for _ in range(steps):
        current_state = states[-1]
        next_state = np.random.choice(len(P), p=P[current_state])
        states.append(next_state)
    return states
```

3. Demonstrate the Markov Property

```python
# Check that the next state depends only on the current state
np.random.seed(0)
sequence = simulate_markov_chain(P, initial_state=0, steps=10)
print("Generated Sequence:", sequence)
```
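The Markov property can also be checked empirically: in a long simulated run, the distribution of the next state conditioned on (previous state, current state) should match row P[current] regardless of the previous state. The sketch below re-simulates the chain inline so it is self-contained; the conditioning pair (prev=0, current=1) and the run length are illustrative choices, not part of the original lab.

```python
import numpy as np

np.random.seed(1)
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])

# Simulate a long chain (inline version of simulate_markov_chain)
states = [0]
for _ in range(100_000):
    states.append(np.random.choice(3, p=P[states[-1]]))

# Condition on X_{n-1} = 0 and X_n = 1, then look at X_{n+1}
current, prev = 1, 0
nexts = [states[i + 1] for i in range(1, len(states) - 1)
         if states[i] == current and states[i - 1] == prev]
freq = np.bincount(nexts, minlength=3) / len(nexts)

print("Empirical P(next | current=1, prev=0):", freq)
print("Transition matrix row P[1]:          ", P[1])
```

Repeating this with prev=2 gives (up to sampling noise) the same answer, which is exactly the Markov property.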

4. Estimate Stationary Distribution

```python
def estimate_stationary_distribution(P, steps=100000, burn_in=1000):
    sequence = simulate_markov_chain(P, initial_state=0, steps=steps)
    counts = np.bincount(sequence[burn_in:], minlength=len(P))
    return counts / np.sum(counts)
```

Example Usage


```python
# Simulate and estimate stationary distribution
pi_est = estimate_stationary_distribution(P)
print("Estimated Stationary Distribution:", pi_est)
```
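An independent way to cross-check the simulation-based estimate (a sketch, not part of the original lab): for an ergodic chain, every row of P^n converges to the stationary distribution as n grows, so the rows of P^50 should all agree with each other and with the estimate above.

```python
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])

# For an ergodic chain, P^n converges to a matrix whose rows are all
# equal to the stationary distribution.
Pn = np.linalg.matrix_power(P, 50)
print("Rows of P^50 (each approximates the stationary distribution):")
print(Pn)
```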

Visualization


```python
import matplotlib.pyplot as plt

def plot_state_frequencies(sequence, num_states):
    counts = [sequence.count(i) for i in range(num_states)]
    plt.bar(range(num_states), counts, color='skyblue')
    plt.xlabel('State')
    plt.ylabel('Frequency')
    plt.title('State Visit Frequency')
    plt.xticks(range(num_states))
    plt.show()

plot_state_frequencies(sequence, num_states=3)
```

Time and Space Complexity

Operation                             Time Complexity
Markov Chain Simulation               O(n)
Estimating Stationary Distribution    O(n)
Visualization                         O(k)

  • n: number of steps in the chain
  • k: number of distinct states

Applications

  1. Web Page Ranking: Google’s PageRank is based on stationary distributions.
  2. Natural Language Processing: Hidden Markov Models for part-of-speech tagging.
  3. Physics: Modeling systems in thermodynamic equilibrium.
  4. Finance: Stock price modeling and risk assessment.
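Application 1 can be made concrete in a few lines. The sketch below uses a hypothetical 4-page link graph (not from the original lab) and the standard damping factor d = 0.85: PageRank is the stationary distribution of a "random surfer" Markov chain that usually follows a random outlink and occasionally jumps to a uniformly random page.

```python
import numpy as np

# Toy PageRank: links[i][j] = 1 if page i links to page j.
# The 4-page graph and damping factor are illustrative assumptions.
links = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

d = 0.85
n = len(links)
T = links / links.sum(axis=1, keepdims=True)  # follow a random outlink
G = d * T + (1 - d) / n                       # occasionally jump anywhere

rank = np.full(n, 1 / n)
for _ in range(100):                          # power iteration
    rank = rank @ G
print("PageRank scores:", rank)
```

The jump term makes the chain ergodic, so the power iteration converges to a unique stationary distribution: the PageRank vector.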

Conclusion

Through this practical, we demonstrated:

  • The Markov property: the next state depends only on the current state.
  • Stationary behavior: a Markov chain settles into an equilibrium distribution over time.

Understanding these concepts is vital in analyzing real-world random systems that evolve over time.



