Analyzing a DTMC with Octave

GNU Octave

A high-level interpreted language, primarily intended for numerical computations

Basic Usage

Library: the examples use the queueing package, loaded with pkg load queueing.

With this package, Octave can compute stationary or transient state occupancy probabilities for a discrete-time Markov chain.

  • p = dtmc (P) computes the stationary state occupancy probabilities.
  • p = dtmc (P, n, p0) computes the transient state occupancy probabilities after n time steps, starting from the initial state probability vector p0 (see the short sketch below).
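
As a minimal sketch of both calling conventions (the two-state chain here is an illustrative assumption, not one of the examples that follow):

pkg load queueing;            # load the queueing package first

P  = [0.9 0.1; 0.5 0.5];      # hypothetical two-state transition matrix (rows sum to 1)
p  = dtmc(P);                 # stationary occupancy probabilities
pt = dtmc(P, 3, [1 0]);       # occupancy probabilities after 3 steps, starting in state 1
disp(p)
disp(pt)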

Exercises

Example 1

P = zeros(9,9);   # transition matrix of a random walk on a 3x3 grid of states
P(1,[2 4] ) = 1/2;
P(2,[1 5 3] ) = 1/3;
P(3,[2 6] ) = 1/2;
P(4,[1 5 7] ) = 1/3;
P(5,[2 4 6 8]) = 1/4;
P(6,[3 5 9] ) = 1/3;
P(7,[4 8] ) = 1/2;
P(8,[7 5 9] ) = 1/3;
P(9,[6 8] ) = 1/2;
p = dtmc(P);
disp(p)

Output:

0.083333   0.125000   0.083333   0.125000   0.166667   0.125000   0.083333   0.125000   0.083333

The output is the vector of stationary state occupancy probabilities.
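
As a quick sanity check (not part of the original listing), the stationary vector returned by dtmc should satisfy p*P = p and sum to one:

disp(norm(p*P - p))   # residual of the stationarity condition, numerically zero
disp(sum(p))          # probabilities sum to 1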

Example 2

P = zeros(9,9);   # same 3x3-grid random walk as in Example 1
P(1,[2 4] ) = 1/2;
P(2,[1 5 3] ) = 1/3;
P(3,[2 6] ) = 1/2;
P(4,[1 5 7] ) = 1/3;
P(5,[2 4 6 8]) = 1/4;
P(6,[3 5 9] ) = 1/3;
P(7,[4 8] ) = 1/2;
P(8,[7 5 9] ) = 1/3;
P(9,[6 8] ) = 1/2;
p0 = [1, 0, 0, 0, 0, 0, 0, 0, 0];
p = dtmc(P,8,p0);
disp(p)

Output:

0.1728        0   0.1667        0   0.3333        0   0.1667        0   0.1605

The output is the state occupancy probability vector after 8 transitions, starting from state 1.
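
Equivalently, the transient distribution after n steps is p0*P^n, so the following line (an added check, not part of the original example) reproduces the result above:

disp(p0 * P^8)   # same as dtmc(P, 8, p0)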

Example 3

a = 0.2;
b = 0.15;
P = [ 1-a a; b 1-b];
T = 0:14;
pp = zeros(2,length(T));
for i=1:length(T)
  pp(:,i) = dtmc(P, T(i), [1 0]);
endfor
ss = dtmc(P); # compute steady state probabilities
plot( T, pp(1,:), "b+;p_0(t);", "linewidth", 2, ...
      T, ss(1)*ones(size(T)), "b;Steady State;", ...
      T, pp(2,:), "r+;p_1(t);", "linewidth", 2, ...
      T, ss(2)*ones(size(T)), "r;Steady State;" );
xlabel("Time Step");

This script produces a plot showing the state occupancy probabilities from time step 0 to 14, together with the stationary probabilities.

The plot shows that the state occupancy probabilities converge to the stationary distribution as the number of time steps increases.
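
For this two-state chain the stationary distribution also has the well-known closed form [b/(a+b), a/(a+b)], so the numerical steady state can be checked directly (an added verification, not part of the original script):

ss_exact = [b/(a+b), a/(a+b)];   # closed-form stationary distribution, about [0.4286 0.5714]
disp(ss_exact)
disp(ss)                         # numerical result from dtmc(P) should match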

Some Literature

EU H2020 Project

High-Performance Real-Time Architectures for Low-Power Embedded Systems

Dynamically update DTMC model parameters

Goal: to build a DTMC model and update its parameters at run-time

The paper Model evolution by run-time parameter adaptation

Learning a Markov Chain from Source Code

Goal: derive a Markov chain directly from a program.

The paper Statistical Learning of Markov Chains of Programs

Probabilistic Guarantees on Architectural Components

Goal: evaluate structural and behavioral design alternatives by means of DTMC models, which provide probabilistic guarantees.

The paper HaiQ: Synthesis of Software Design Spaces with Structural and Probabilistic Guarantees
