Low-Rank Plus Sparse Decomposition of Covariance Matrices Using Neural Network Parametrization

Michel Baes, Calypso Herrera, Ariel Neufeld*, Pierre Ruyssen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

This article revisits the problem of decomposing a positive semidefinite matrix as the sum of a matrix with a given rank and a sparse matrix. An immediate application arises in portfolio optimization, when the matrix to be decomposed is the covariance matrix of the assets in the portfolio. Our approach consists of representing the low-rank part of the solution as the product MMᵀ, where M is a rectangular matrix of appropriate size, parametrized by the coefficients of a deep neural network. We then use a gradient descent algorithm to minimize an appropriate loss function over the parameters of the network. We deduce its convergence rate to a local optimum from the Lipschitz smoothness of our loss function. We show that the rate of convergence grows polynomially in the input and output dimensions and in the size of each hidden layer.
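The idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' method: it optimizes the factor M directly by (sub)gradient descent on an elementwise L1 loss, whereas the paper parametrizes M through a deep neural network. The test covariance (dimensions, rank, sparsity level) is entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test covariance: a rank-r part plus a sparse symmetric part.
n, r = 30, 3
L_true = rng.standard_normal((n, r))
low_rank = L_true @ L_true.T
mask = rng.random((n, n)) < 0.05
sparse = np.where(mask | mask.T, 0.5, 0.0)
np.fill_diagonal(sparse, 0.0)
sigma = low_rank + sparse

# Represent the low-rank part as M @ M.T and minimize the elementwise
# L1 loss ||sigma - M M^T||_1 by subgradient descent; the L1 norm
# encourages a sparse residual, which becomes the sparse part.
M = 0.01 * rng.standard_normal((n, r))
loss0 = np.abs(sigma).sum()        # loss at the near-zero initialization
lr = 1e-3
for step in range(5000):
    R = sigma - M @ M.T            # residual; its support should end up sparse
    grad = -2.0 * np.sign(R) @ M   # subgradient of the L1 loss w.r.t. M
    M -= lr * grad

S_hat = sigma - M @ M.T            # recovered sparse part
loss = np.abs(S_hat).sum()
```

By construction the low-rank estimate M @ M.T has rank at most r, and since both sigma and M @ M.T are symmetric, the recovered sparse part S_hat is symmetric as well.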

Original language: English
Pages (from-to): 171-185
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 1
DOIs
Publication status: Published - Jan 1 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2012 IEEE.

ASJC Scopus Subject Areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Keywords

  • Correlation matrices
  • low-rank + sparse decomposition
  • neural network parametrization
  • portfolio optimization
