
Praise for the First Edition
"Finally, a book devoted to dynamic programming and written using the language of operations research (OR)! This beautiful book fills a gap in the libraries of OR specialists and practitioners."
--Computing Reviews
This new edition showcases a focus on modeling and computation for complex classes of approximate dynamic programming problems
Understanding approximate dynamic programming (ADP) is vital in order to develop practical and high-quality solutions to complex industrial problems, particularly when those problems involve making decisions in the presence of uncertainty. Approximate Dynamic Programming, Second Edition uniquely integrates four distinct disciplines--Markov decision processes, mathematical programming, simulation, and statistics--to demonstrate how to successfully approach, model, and solve a wide range of real-life problems using ADP.
The book continues to bridge the gap between computer science, simulation, and operations research and now adopts the notation and vocabulary of reinforcement learning as well as stochastic search and simulation optimization. The author outlines the essential algorithms that serve as a starting point in the design of practical solutions for real problems. The three curses of dimensionality that impact complex problems are introduced, and detailed coverage of implementation challenges is provided. The Second Edition also features:
* A new chapter describing four fundamental classes of policies for working with diverse stochastic optimization problems: myopic policies, look-ahead policies, policy function approximations, and policies based on value function approximations
* A new chapter on policy search that brings together stochastic search and simulation optimization concepts and introduces a new class of optimal learning strategies
* Updated coverage of the exploration-exploitation problem in ADP, now including a recently developed method for active learning in the presence of a physical state, using the concept of the knowledge gradient
* A new sequence of chapters describing statistical methods for approximating value functions, estimating the value of a fixed policy, and approximating value functions while searching for optimal policies (see the brief sketch following this list)
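To make the value-function-approximation theme concrete, the following is a minimal Python sketch, not code from the book, of forward approximate value iteration on a hypothetical toy inventory problem: decisions are made greedily against a linear value function approximation, whose parameters are then adjusted with a TD(0)-style stochastic gradient step. The problem data (prices, demand distribution, features) are invented purely for illustration.

    import random

    GAMMA = 0.9      # discount factor
    ALPHA = 0.05     # stepsize for the stochastic gradient update
    MAX_INV = 10     # inventory capacity (hypothetical)
    PRICE, COST, HOLD = 4.0, 2.0, 0.1

    def features(s):
        # Normalized polynomial features of the scalar inventory state.
        x = s / MAX_INV
        return [1.0, x, x * x]

    def vhat(theta, s):
        # Linear value function approximation: V(s) ~ theta . phi(s).
        return sum(th * f for th, f in zip(theta, features(s)))

    def step(s, order):
        # One period: add stock, observe random demand, collect reward.
        demand = random.randint(0, 5)
        stocked = min(s + order, MAX_INV)
        sales = min(stocked, demand)
        reward = PRICE * sales - COST * order - HOLD * stocked
        return stocked - sales, reward

    theta = [0.0, 0.0, 0.0]
    for episode in range(2000):
        s = random.randint(0, MAX_INV)
        for _ in range(30):
            # Greedy decision against the current approximation; each
            # candidate order is scored with a single sampled transition.
            best_order, best_val = 0, float("-inf")
            for order in range(MAX_INV - s + 1):
                s_next, r = step(s, order)
                val = r + GAMMA * vhat(theta, s_next)
                if val > best_val:
                    best_order, best_val = order, val
            s_next, r = step(s, best_order)   # actual transition
            # TD(0)-style update of the linear parameters.
            err = r + GAMMA * vhat(theta, s_next) - vhat(theta, s)
            theta = [th + ALPHA * err * f
                     for th, f in zip(theta, features(s))]
            s = s_next

    print("fitted parameters:", [round(th, 2) for th in theta])

A serious implementation would also address the issues the book treats in depth: exploration versus exploitation, stepsize rules, and the post-decision-state formulation.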
The book's coverage of ADP emphasizes models and algorithms, focusing on related applications and computation while also discussing the theoretical side of the topic, which explores proofs of convergence and rates of convergence. A related website features an ongoing discussion of the evolving fields of approximate dynamic programming and reinforcement learning, along with additional readings, software, and datasets.
Requiring only a basic understanding of statistics and probability, Approximate Dynamic Programming, Second Edition is an excellent book for industrial engineering and operations research courses at the upper-undergraduate and graduate levels. It also serves as a valuable reference for researchers and professionals who utilize dynamic programming, stochastic programming, and control theory to solve problems in their everyday work.