Take the grid world from https://colab.research.google.com/drive/1RuiBeeaq4jAZkx7Rh4VDlWRfBMK9Mfkg?usp=sharing
(or another domain if you have something you really prefer).
Add some amount of probabilistic behavior and reward to this environment and model it as a Markov Decision Process (MDP). See https://colab.research.google.com/drive/157MhpeFiKZPFU5ao4bh5oBQUhCF2dTtu?usp=sharing
for an example of a betting game modeled as an MDP.
For example: maybe the environment is slippery, and actions sometimes don’t have the desired effect. Maybe some squares give negative reward some percentage of the time (traps?). Maybe all squares give negative reward some percentage of the time (meteorite?). Maybe some walls are electrified? Etc.
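For instance, a "slippery" environment can be captured by letting the chosen action succeed most of the time and slip to a perpendicular direction otherwise. The sketch below is one possible way to sample that behavior; the 0.8/0.1/0.1 split and the action names are illustrative assumptions, not something taken from the linked notebook.

```python
import random

# Assumed slip model: the intended action happens 80% of the time,
# and the agent slips to each perpendicular direction 10% of the time.
PERPENDICULAR = {
    "up": ["left", "right"],
    "down": ["left", "right"],
    "left": ["up", "down"],
    "right": ["up", "down"],
}

def slippery_action(intended):
    """Return the action that actually happens after possible slipping."""
    roll = random.random()
    if roll < 0.8:
        return intended
    side = PERPENDICULAR[intended]
    return side[0] if roll < 0.9 else side[1]
```

Calling `slippery_action("up")` returns `"up"` about 80% of the time and `"left"` or `"right"` about 10% of the time each.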
Write down how this would be modeled as an MDP (a sketch of one possible formulation follows the list):
States
Actions in each state
Transition function, i.e. probability that an action in a state will produce a given successor state
Reward function, i.e., which transitions produce a reward, and how much?
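To make these four ingredients concrete, here is a minimal Python sketch of one possible formulation for a slippery grid world with a trap square. The grid size, slip probability, trap and goal locations, and reward values are all illustrative assumptions; substitute whatever stochastic behavior you actually add to your environment.

```python
# Assumed environment parameters, chosen only for illustration.
ROWS, COLS = 4, 4
SLIP = 0.2                      # probability the action slips sideways
TRAP = (2, 2)                   # square that gives negative reward half the time
GOAL = (3, 3)

STATES = [(r, c) for r in range(ROWS) for c in range(COLS)]
ACTIONS = ["up", "down", "left", "right"]   # same action set in every state

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
SIDE = {"up": ["left", "right"], "down": ["left", "right"],
        "left": ["up", "down"], "right": ["up", "down"]}

def step(state, direction):
    """Deterministic move; bumping into the boundary leaves the state unchanged."""
    r, c = state
    dr, dc = MOVES[direction]
    nr, nc = r + dr, c + dc
    return (nr, nc) if 0 <= nr < ROWS and 0 <= nc < COLS else state

def transition(state, action):
    """T(s, a): dict mapping each successor state to its probability."""
    probs = {}
    outcomes = [(action, 1 - SLIP)] + [(d, SLIP / 2) for d in SIDE[action]]
    for direction, p in outcomes:
        s_next = step(state, direction)
        probs[s_next] = probs.get(s_next, 0.0) + p
    return probs

def reward(state, action, next_state):
    """R(s, a, s'): expected reward for the transition into next_state."""
    if next_state == GOAL:
        return 10.0
    if next_state == TRAP:
        return 0.5 * (-5.0)     # trap fires half the time, so expected reward is -2.5
    return -1.0                 # small step cost everywhere else
```

For example, `transition((0, 0), "right")` returns `{(0, 1): 0.8, (0, 0): 0.1, (1, 0): 0.1}`, which is exactly the tabular form of T(s, a, s') the write-up should describe, and `reward` spells out which transitions produce reward and how much.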