Generate a Markov chain in Python using an object's attribute as state

Question:

Suppose I have several different states in the form of an Enum:

class State(Enum):
    State1 = 1
    State2 = 2
    State3 = 3

I define transition probabilities between the states:

transition_probabilities = [
    [0.8, 0.1, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4]
]

I now have a selection of objects, in the form of dataclasses, each of which has a particular state as an attribute:

@dataclass
class Thing:
    name: str
    state: State

things = [
    Thing('a', State.State1),
    Thing('b', State.State1),
    Thing('c', State.State2),
    Thing('d', State.State2),
    Thing('e', State.State2),
    Thing('f', State.State3),
]

What I’d like to do is generate a sequence of objects using something like random.choices, where the probability of sampling the next object depends on the state of the current object. For example, if my first object is Thing('a', State.State1), then I have an 80% chance of sampling from [Thing('a', State.State1), Thing('b', State.State1)], a 10% chance of sampling from the State2 things and a 10% chance of sampling from the State3 things, and similarly for the other transition probabilities.

Edit: To clarify, if I start with an object in state 1, I want to transition to a new state with the defined probabilities and then uniformly sample a single object from the objects that have the state I transitioned to. I want to repeat this until I’ve generated a sequence of a set length (e.g. 20).

I can’t think of a good way to do this with the current set-up. Is it possible, and if not, how should I set it up so that it is?

Thank you!

Asked By: Dan Jackson


Answers:

Here is my solution that generates a random path (you can think of it as a sort of random walk):

import numpy as np

from enum import Enum
from dataclasses import dataclass

# NOTE: values adjusted to use States as indices
class State(Enum):
    State1 = 0
    State2 = 1
    State3 = 2

@dataclass
class Thing:
    name: str
    state: State

transition_probabilities = [
    [0.8, 0.1, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4]
]

things = [
    Thing('a', State.State1),
    Thing('b', State.State1),
    Thing('c', State.State2),
    Thing('d', State.State2),
    Thing('e', State.State2),
    Thing('f', State.State3),
]

# group objects by same state
groups = dict()  # dict: state -> list of Things

for obj in things:
    if obj.state not in groups:
        groups[obj.state] = []

    groups[obj.state].append(obj)

indices = np.arange(len(things))
# keep the state order aligned with the columns of the transition matrix
# (assumes every state has at least one object in `things`)
all_states = list(State)

# generate a random path
def generate_path(max_length=20):
    # start from a random object (and so state); add it to the path
    current_obj = things[np.random.choice(indices)]
    path = [current_obj]

    # you can add a stopping criterion (e.g. terminal states)
    for _ in range(max_length):
        state = current_obj.state
        probs = transition_probabilities[state.value]

        # sample next state according to transition probabilities
        new_state, = np.random.choice(all_states, size=1, p=probs)

        # randomly sample an object in that state
        new_object, = np.random.choice(groups[new_state], size=1)
        current_obj = new_object

        # update the path
        path.append(current_obj)

    return path

An example usage:

random_path = generate_path()

for obj in random_path:
    print(obj.name, obj.state)

That may print:

f State.State3
c State.State2
...
e State.State2
c State.State2

The function generate_path() returns a path of max_length + 1 objects: the random starting object plus max_length sampled transitions. You can adapt this code to your own application. Hope it helps.
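
Since the question mentions random.choices, here is a rough standard-library-only sketch of the same idea. The name generate_path_stdlib is just illustrative, it reuses the State, Thing, things and transition_probabilities defined above, and it assumes every state has at least one object:

import random

def generate_path_stdlib(things, transition_probabilities, length=20):
    # group objects by state (assumes every state has at least one object)
    groups = {}
    for obj in things:
        groups.setdefault(obj.state, []).append(obj)

    # column order of the transition matrix corresponds to State values 0, 1, 2
    states = list(State)

    current = random.choice(things)  # random starting object
    path = [current]
    for _ in range(length):
        # weights for the next state come from the row of the current state
        weights = transition_probabilities[current.state.value]
        next_state, = random.choices(states, weights=weights, k=1)
        # uniformly pick one of the objects that has the new state
        current = random.choice(groups[next_state])
        path.append(current)
    return path

Calling generate_path_stdlib(things, transition_probabilities) would give a list of 21 objects: the starting object plus 20 transitions.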

Answered By: Luca Anzalone