Date of Award

Spring 2022

Degree Type

Restricted to Claremont Colleges Dissertation

Degree Name

Mathematics, PhD


Institute of Mathematical Sciences

Advisor/Supervisor/Committee Chair

Ali Nadim

Dissertation or Thesis Committee Member

Lorne Olfman

Dissertation or Thesis Committee Member

Marina Chugunova

Dissertation or Thesis Committee Member

Samir Chatterjee

Terms of Use & License Information

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.

Rights Information

© 2022 Ionela S Munayco


The Singular Value Decomposition (SVD) of matrices is widely used in least-squares regression, image and data processing, principal component analysis, and many other applications. Often only the larger singular values are needed to reconstruct the matrix, and those below a given threshold can be neglected, truncating the series to obtain an efficient approximation of the original matrix. For tensors, which are the higher-dimensional counterparts of matrices, such a decomposition is not as straightforward to define or obtain. In particular, an approximate decomposition that can be improved iteratively would be ideal, allowing one to stop the iterations once a desired level of accuracy is reached, without having to compute all components first and then truncate the series.

In this work, we develop such a method. Beginning with matrices, we present an iterative approach for finding the singular values and their corresponding left and right singular vectors, starting with the largest singular value in the SVD. We discuss the accuracy and convergence rate of the method, as well as its run time and its performance for ill-conditioned matrices and for matrices with repeated or nearby singular values. Our method generalizes readily to tensors, although in that case it is a Canonical Polyadic Decomposition (CPD) of the tensor that is obtained. The CPD partitions a tensor into a sum of rank-one tensors, analogous to the SVD for matrices but without the orthogonality of the resulting vector components. We coin the acronym NOTSVD, for Non-Orthogonal Tensor Singular Value Decomposition, to refer to our novel method.

We show examples of applying the method to sample three-dimensional tensors. One such example is the compression of RGB images, whose three color components are each a matrix, together forming a three-dimensional tensor.
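As an illustration of this kind of iteration, the dominant singular triplet of a matrix can be found by classical power iteration, and subsequent triplets by deflating (subtracting) each captured rank-one term. The NumPy sketch below shows that standard idea; it is not necessarily the exact algorithm developed in the dissertation, and the function name `top_singular_triplets` is hypothetical.

```python
import numpy as np

def top_singular_triplets(A, k, iters=500, tol=1e-12):
    """Greedily estimate the k largest singular triplets of A:
    power iteration for the dominant triplet, then deflation.
    Illustrative sketch; assumes the iterates stay nonzero."""
    A = A.astype(float).copy()
    rng = np.random.default_rng(0)
    triplets = []
    for _ in range(k):
        v = rng.standard_normal(A.shape[1])
        v /= np.linalg.norm(v)
        sigma = 0.0
        for _ in range(iters):
            u = A @ v                      # left-vector step: u ~ A v
            u /= np.linalg.norm(u)
            w = A.T @ u                    # right-vector step: v ~ A^T u
            sigma_new = np.linalg.norm(w)
            v = w / sigma_new
            if abs(sigma_new - sigma) < tol:
                sigma = sigma_new
                break
            sigma = sigma_new
        triplets.append((sigma, u, v))
        A -= sigma * np.outer(u, v)        # deflate the captured rank-one term
    return triplets

# Usage: the two dominant singular values of a small test matrix.
A = np.array([[3.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
(s1, _, _), (s2, _, _) = top_singular_triplets(A, 2)
```

Stopping after the first few triplets gives exactly the kind of truncated approximation described above, without computing the full SVD first.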
We show that the tensor decomposition achieves better compression, at the same level of accuracy, than applying the SVD to each of the three matrices separately and recombining the results to approximate the original image. We also apply the method to artificially generated three-dimensional volumetric images, such as an ellipsoidal object within a cube, and show a highly efficient reconstruction of the volumetric image using a relatively small number of terms obtained with the NOTSVD algorithm. Finally, we apply the method to a real-world CT scan and develop a graphical user interface in MATLAB for applying our method to various tensorial images and objects.
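The greedy rank-one idea behind such tensor approximations can be sketched for a third-order tensor: fit the best rank-one term by alternating updates of the three factor vectors, subtract it, and repeat. The sketch below (with a hypothetical helper `greedy_rank1_cpd`) illustrates this general scheme, not the NOTSVD algorithm itself.

```python
import numpy as np

def greedy_rank1_cpd(T, r, sweeps=200):
    """Greedy non-orthogonal rank-one decomposition of a 3-D tensor:
    repeatedly fit the best rank-one term lam * (a o b o c) by
    alternating updates, then subtract it. Sketch only; assumes the
    contractions below stay nonzero."""
    T = T.astype(float).copy()
    rng = np.random.default_rng(1)
    terms = []
    for _ in range(r):
        a = rng.standard_normal(T.shape[0]); a /= np.linalg.norm(a)
        b = rng.standard_normal(T.shape[1]); b /= np.linalg.norm(b)
        c = rng.standard_normal(T.shape[2]); c /= np.linalg.norm(c)
        lam = 0.0
        for _ in range(sweeps):
            a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
            b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
            c = np.einsum('ijk,i,j->k', T, a, b)
            lam = np.linalg.norm(c); c /= lam
        terms.append((lam, a, b, c))
        T -= lam * np.einsum('i,j,k->ijk', a, b, c)  # deflate the rank-1 term
    return terms

# Usage: recover the weight of a pure rank-one test tensor.
T0 = 4.0 * np.einsum('i,j,k->ijk',
                     np.array([0.6, 0.8]),
                     np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 1.0]))
[(lam, a, b, c)] = greedy_rank1_cpd(T0, 1)
```

Note that, unlike the matrix case, subtracting the best rank-one approximation of a tensor does not in general reduce its rank, which is part of what makes the tensor setting genuinely harder than the SVD.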