Specific tensor decomposition

Question:

I want to decompose a 3-dimensional tensor using SVD.

I am not quite sure if, and how, the following decomposition can be achieved.

Desired decomposition

I already know how to split the tensor horizontally, from this tutorial: tensors.org, Figure 2.2b:

import numpy as np
from numpy import linalg as LA

d = 10
A = np.random.rand(d, d, d)            # random 3rd-order tensor
Am = A.reshape(d**2, d)                # flatten the first two modes into rows
Um, Sm, Vh = LA.svd(Am, full_matrices=False)
U = Um.reshape(d, d, d)                # fold the left factor back into a 3rd-order tensor
S = np.diag(Sm)
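As a sanity check, the factors from this split can be recombined to recover A exactly (a self-contained NumPy check, repeating the setup above):

```python
import numpy as np
from numpy import linalg as LA

d = 10
A = np.random.rand(d, d, d)
Am = A.reshape(d**2, d)                # flatten the first two modes
Um, Sm, Vh = LA.svd(Am, full_matrices=False)
U = Um.reshape(d, d, d)                # fold the left factor back up
S = np.diag(Sm)

# Recombining Um, S and Vh gives back A up to round-off.
A_rec = (Um @ S @ Vh).reshape(d, d, d)
print(np.allclose(A, A_rec))  # True
```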
Asked By: gistBatch


Answers:

Matrix methods can be naturally extended to higher orders. SVD, for instance, can be generalized to tensors, e.g. with the Tucker decomposition, sometimes called a higher-order SVD.

We maintain a Python library for tensor methods, TensorLy, which lets you do this easily. In this case you want a partial Tucker as you want to leave one of the modes uncompressed.

Let’s import the necessary parts:

import tensorly as tl
from tensorly import random
from tensorly.decomposition import partial_tucker

For testing, let’s create a 3rd order tensor of size (10, 10, 10):

size = 10
order = 3
shape = (size, )*order
tensor = random.random_tensor(shape)

You can now decompose the tensor using a partial Tucker decomposition. In your case, you want to leave one of the dimensions untouched, so you’ll only have two factors (your U and V) and a core tensor (your S):

core, factors = partial_tucker(tensor, rank=size, modes=[0, 2])

You can reconstruct the original tensor from your approximation using a series of n-mode products to contract the core with the factors:

from tensorly import tenalg
rec = tenalg.multi_mode_dot(core, factors, modes=[0, 2])
rec_error = tl.norm(rec - tensor)/tl.norm(tensor)
print(f'Relative reconstruction error: {rec_error}')

In my case, I get

Relative reconstruction error: 9.66027176805661e-16
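If you want to see what a partial Tucker computes without installing TensorLy, the same idea can be sketched in plain NumPy with one SVD per compressed mode (a minimal HOSVD-style illustration, not TensorLy's implementation; the helper names mode_dot and partial_tucker_np are made up for this example):

```python
import numpy as np

def mode_dot(tensor, matrix, mode):
    """n-mode product: contract `matrix`'s columns with `tensor`'s `mode` axis."""
    moved = np.moveaxis(tensor, mode, 0)
    out = np.tensordot(matrix, moved, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def partial_tucker_np(tensor, modes):
    """HOSVD-style partial Tucker: SVD each unfolding in `modes`,
    then contract the core with the factors' transposes."""
    factors = []
    core = tensor
    for mode in modes:
        # unfold along `mode`: move it to the front, flatten the rest
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U)
        core = mode_dot(core, U.T, mode)   # compress this mode
    return core, factors

T = np.random.rand(10, 10, 10)
core, (U0, U2) = partial_tucker_np(T, modes=[0, 2])

# Reconstruct by contracting the factors back in along the same modes;
# with full-rank orthogonal factors this is exact up to round-off.
rec = mode_dot(mode_dot(core, U0, 0), U2, 2)
print(np.linalg.norm(rec - T) / np.linalg.norm(T))
```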
Answered By: Jean

You can also use the "tensorlearn" package in Python, for example using the tensor-train (TT) SVD algorithm:
https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition

import numpy as np
import tensorlearn as tl

# let's generate an arbitrary array
tensor = np.arange(0, 1000)

# reshape it into a higher (3-dimensional) tensor
tensor = np.reshape(tensor, (10, 20, 5))

# decompose the tensor into its factors; epsilon is the error bound
epsilon = 0.05
tt_factors = tl.auto_rank_tt(tensor, epsilon)

# tt_factors is a list of three arrays, the TT-cores

# rebuild (estimate) the tensor from the factors as tensor_hat
tensor_hat = tl.tt_to_tensor(tt_factors)

# let's see the relative error
error_tensor = tensor - tensor_hat
error = tl.tensor_frobenius_norm(error_tensor) / tl.tensor_frobenius_norm(tensor)
print('error (%) = ', error * 100)  # which is less than epsilon

# one use of tensor decomposition is data compression,
# so let's calculate the compression ratio
data_compression_ratio = tl.tt_compression_ratio(tt_factors)

# and the corresponding data saving
data_saving = 1 - (1 / data_compression_ratio)
print('data_saving (%): ', data_saving * 100)
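For reference, the TT-SVD idea behind auto_rank_tt can be sketched in plain NumPy as a sweep of truncated SVDs (my own illustration of the standard TT-SVD algorithm, not tensorlearn's code; the names tt_svd and tt_reconstruct are made up, and the per-step threshold follows the usual eps/sqrt(d-1) error split):

```python
import numpy as np

def tt_svd(tensor, eps=0.05):
    """Sketch of TT-SVD: sequential truncated SVDs over the modes.
    The truncation threshold is split across sweeps so the total
    relative error stays below `eps`."""
    d = tensor.ndim
    shape = tensor.shape
    delta = eps / np.sqrt(d - 1) * np.linalg.norm(tensor)
    cores = []
    rank = 1
    C = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, s, Vh = np.linalg.svd(C, full_matrices=False)
        # keep the smallest rank whose discarded tail is below delta
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
        r = max(1, int(np.sum(tail > delta)))
        cores.append(U[:, :r].reshape(rank, shape[k], r))
        rank = r
        C = (np.diag(s[:r]) @ Vh[:r]).reshape(rank * shape[k + 1], -1)
    cores.append(C.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT-cores (r_{k-1}, n_k, r_k) back into a full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=(-1, 0))
    return out.reshape([G.shape[1] for G in cores])

T = np.arange(0, 1000, dtype=float).reshape(10, 20, 5)
cores = tt_svd(T, eps=0.05)
T_hat = tt_reconstruct(cores)
err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
print(err)  # below the 0.05 error bound by construction
```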
Answered By: axar