Markov decision processes under model uncertainty

Ariel Neufeld*, Julian Sester, Mario Šikić

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

We introduce a general framework for Markov decision problems under model uncertainty in a discrete-time infinite horizon setting. By providing a dynamic programming principle, we obtain a local-to-global paradigm: solving a local, that is, one-time-step, robust optimization problem leads to an optimizer of the global (i.e., infinite-time-step) robust stochastic optimal control problem, as well as to a corresponding worst-case measure. Moreover, we apply this framework to portfolio optimization involving data of the S&P 500. We present two different types of ambiguity sets: the first is fully data-driven, given by a Wasserstein ball around the empirical measure; the second is described by a parametric set of multivariate normal distributions, where the corresponding uncertainty sets of the parameters are estimated from the data. It turns out that in scenarios where the market is volatile or bearish, the optimal portfolio strategies from the corresponding robust optimization problem outperform those without model uncertainty, showcasing the importance of taking model uncertainty into account.
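To make the local-to-global paradigm concrete, the following is a minimal LaTeX sketch of the kind of one-step (Bellman) fixed-point equation such a dynamic programming principle delivers, together with a Wasserstein-ball ambiguity set as mentioned in the abstract. The notation (state x, action set A, reward r, discount factor α, Wasserstein order q, radius ε, empirical measure P̂_N) is assumed for illustration and need not match the paper's exact formulation.

```latex
% Hypothetical notation; a sketch of a robust Bellman fixed-point equation,
% not the paper's exact statement.
\[
  V(x) \;=\; \sup_{a \in A} \; \inf_{P \in \mathcal{P}(x,a)}
    \int_{\mathcal{X}} \bigl( r(x,a,x') + \alpha\, V(x') \bigr)\, P(\mathrm{d}x'),
\]
% with a fully data-driven ambiguity set given by a Wasserstein ball of radius
% \varepsilon around the empirical measure \widehat{P}_N(x,a):
\[
  \mathcal{P}(x,a) \;=\; \bigl\{\, P \;:\; W_q\bigl(P,\widehat{P}_N(x,a)\bigr) \le \varepsilon \,\bigr\}.
\]
```

The local problem is the sup-inf on the right-hand side for a fixed state x; the abstract's result is that an optimizer of this one-step problem yields an optimal policy and a worst-case measure for the infinite-horizon problem.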

Original language: English
Pages (from-to): 618-665
Number of pages: 48
Journal: Mathematical Finance
Volume: 33
Issue number: 3
Publication status: Published - Jul 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2023 Wiley Periodicals LLC.

ASJC Scopus Subject Areas

  • Accounting
  • Finance
  • Social Sciences (miscellaneous)
  • Economics and Econometrics
  • Applied Mathematics

Keywords

  • ambiguity
  • dynamic programming principle
  • Markov decision problem
  • portfolio optimization
