Decision Processes in Dynamic Probabilistic Systems

by A.V. Gheorghe
EAN 9780792305446 | ISBN 0-7923-0544-2 | ISBN 978-0-7923-0544-6

Table of Contents

  • 1 Semi-Markov and Markov Chains.
  • 1.1 Definitions and basic properties.
  • 1.2 Algebraic and analytical methods in the study of Markovian systems.
  • 1.3 Transient and recurrent processes.
  • 1.4 Markovian populations.
  • 1.5 Partially observable Markov chains.
  • 1.6 Rewards and discounting.
  • 1.7 Models and applications.
  • 1.8 Dynamic-decision models for clinical diagnosis.
  • 2 Dynamic and Linear Programming.
  • 2.1 Discrete dynamic programming.
  • 2.2 A linear programming formulation and an algorithm for computation.
  • 3 Utility Functions and Decisions under Risk.
  • 3.1 Informational lotteries and axioms for utility functions.
  • 3.2 Exponential utility functions.
  • 3.3 Decisions under risk and uncertainty; event trees.
  • 3.4 Probability encoding.
  • 4 Markovian Decision Processes (Semi-Markov and Markov) with Complete Information (Completely Observable).
  • 4.1 Value iteration algorithm (the finite horizon case).
  • 4.2 Policy iteration algorithm (the finite horizon optimization).
  • 4.3 Policy iteration with discounting.
  • 4.4 Optimization algorithm using linear programming.
  • 4.5 Risk-sensitive decision processes.
  • 4.6 On eliminating sub-optimal decision alternatives in Markov and semi-Markov decision processes.
  • 5 Partially Observable Markovian Decision Processes.
  • 5.1 Finite horizon partially observable Markov decision processes.
  • 5.2 The infinite horizon with discounting for partially observable Markov decision processes.
  • 5.3 A useful policy iteration algorithm for discounted (β < 1) partially observable Markov decision processes.
  • 5.4 The infinite horizon without discounting for partially observable Markov processes.
  • 5.5 Partially observable semi-Markov decision processes.
  • 5.6 Risk-sensitive partially observable Markov decision processes.
  • 6 Policy Constraints in Markov Decision Processes.
  • 6.1 Methods of investigating policy constraints in Markov decision processes.
  • 6.2 Markov decision processes with policy constraints.
  • 6.3 Risk-sensitive Markov decision process with policy constraints.
  • 7 Applications.
  • 7.1 The emergency repair control for electrical power systems.
  • 7.2 Stochastic models for evaluation of inspection and repair schedules [2].
  • 7.3 A Markovian decision model for clinical diagnosis and treatment applied to the respiratory system.