Tensors
Copyright 2022 National Technology & Engineering Solutions of Sandia,
LLC (NTESS). Under the terms of Contract DE-NA0003525 with NTESS, the
U.S. Government retains certain rights in this software.
Tensors are extensions of multidimensional arrays with additional operations defined on them. Here we explain the basics of creating and working with tensors.
import pyttb as ttb
import numpy as np
import sys
Creating a tensor from an array
M = np.ones((2, 4, 3)) # A 2x4x3 array.
X = ttb.tensor(M) # Convert to a tensor object
X
tensor of shape (2, 4, 3) with order F
data[:, :, 0] =
[[1. 1. 1. 1.]
[1. 1. 1. 1.]]
data[:, :, 1] =
[[1. 1. 1. 1.]
[1. 1. 1. 1.]]
data[:, :, 2] =
[[1. 1. 1. 1.]
[1. 1. 1. 1.]]
Optionally, you can specify a different shape for the tensor, so long as the input array has the right number of elements.
X = X.reshape((4, 2, 3))
X
tensor of shape (4, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]
[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[1. 1.]
[1. 1.]
[1. 1.]
[1. 1.]]
data[:, :, 2] =
[[1. 1.]
[1. 1.]
[1. 1.]
[1. 1.]]
Creating a one-dimensional tensor
np.random.rand(m, n) creates a two-dimensional array with m rows and n columns, so wrapping it in ttb.tensor produces a 2-way tensor.
np.random.seed(0)
X = ttb.tensor(np.random.rand(5, 1)) # Creates a 2-way tensor.
X
tensor of shape (5, 1) with order F
data[:, :] =
[[0.5488135 ]
[0.71518937]
[0.60276338]
[0.54488318]
[0.4236548 ]]
To specify a 1-way tensor, use the (m,) syntax, signifying a vector with m elements.
np.random.seed(0)
X = ttb.tensor(np.random.rand(5), shape=(5,)) # Creates a 1-way tensor.
X
tensor of shape (5,) with order F
data[:] =
[0.5488135 0.71518937 0.60276338 0.54488318 0.4236548 ]
Specifying trailing singleton dimensions in a tensor
Likewise, trailing singleton dimensions must be explicitly specified.
np.random.seed(0)
Y = ttb.tensor(np.random.rand(4, 3)) # Creates a 2-way tensor.
Y
tensor of shape (4, 3) with order F
data[:, :] =
[[0.5488135 0.71518937 0.60276338]
[0.54488318 0.4236548 0.64589411]
[0.43758721 0.891773 0.96366276]
[0.38344152 0.79172504 0.52889492]]
np.random.seed(0)
Y = ttb.tensor(np.random.rand(3, 4, 1), (3, 4, 1)) # Creates a 3-way tensor.
Y
tensor of shape (3, 4, 1) with order F
data[:, :, 0] =
[[0.5488135 0.71518937 0.60276338 0.54488318]
[0.4236548 0.64589411 0.43758721 0.891773 ]
[0.96366276 0.38344152 0.79172504 0.52889492]]
The constituent parts of a tensor
np.random.seed(0)
X = ttb.tenrand((2, 4, 3)) # Create data.
X.data # The array.
WARNING:root:Selected no copy, but input data isn't F ordered so must copy.
array([[[0.5488135 , 0.71518937, 0.60276338],
[0.54488318, 0.4236548 , 0.64589411],
[0.43758721, 0.891773 , 0.96366276],
[0.38344152, 0.79172504, 0.52889492]],
[[0.56804456, 0.92559664, 0.07103606],
[0.0871293 , 0.0202184 , 0.83261985],
[0.77815675, 0.87001215, 0.97861834],
[0.79915856, 0.46147936, 0.78052918]]])
X.shape # The shape.
(2, 4, 3)
Creating a tensor from its constituent parts
np.random.seed(0)
X = ttb.tenrand((2, 4, 3)) # Create data.
Y = X.copy() # Copies X.
Y
WARNING:root:Selected no copy, but input data isn't F ordered so must copy.
tensor of shape (2, 4, 3) with order F
data[:, :, 0] =
[[0.5488135 0.54488318 0.43758721 0.38344152]
[0.56804456 0.0871293 0.77815675 0.79915856]]
data[:, :, 1] =
[[0.71518937 0.4236548 0.891773 0.79172504]
[0.92559664 0.0202184 0.87001215 0.46147936]]
data[:, :, 2] =
[[0.60276338 0.64589411 0.96366276 0.52889492]
[0.07103606 0.83261985 0.97861834 0.78052918]]
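The data array and shape shown above are enough to rebuild an equivalent tensor with the constructor demonstrated earlier; a minimal sketch (the variable name Z is ours):
Z = ttb.tensor(X.data, X.shape) # Rebuild a tensor from its data array and shape.
Z.isequal(X) # Should evaluate to True.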
Creating an empty tensor
An empty constructor exists.
X = ttb.tensor() # Creates an empty tensor
X
empty tensor of shape ()
data = []
Use tenones to create a tensor of all ones
X = ttb.tenones((2, 3, 4)) # Creates a 2x3x4 tensor of ones.
X
tensor of shape (2, 3, 4) with order F
data[:, :, 0] =
[[1. 1. 1.]
[1. 1. 1.]]
data[:, :, 1] =
[[1. 1. 1.]
[1. 1. 1.]]
data[:, :, 2] =
[[1. 1. 1.]
[1. 1. 1.]]
data[:, :, 3] =
[[1. 1. 1.]
[1. 1. 1.]]
Use tenzeros to create a tensor of all zeros
X = ttb.tenzeros((2, 1, 4)) # Creates a 2x1x4 tensor of zeroes.
X
tensor of shape (2, 1, 4) with order F
data[:, :, 0] =
[[0.]
[0.]]
data[:, :, 1] =
[[0.]
[0.]]
data[:, :, 2] =
[[0.]
[0.]]
data[:, :, 3] =
[[0.]
[0.]]
Use tenrand to create a random tensor
np.random.seed(0)
X = ttb.tenrand((2, 5, 4))
X
WARNING:root:Selected no copy, but input data isn't F ordered so must copy.
tensor of shape (2, 5, 4) with order F
data[:, :, 0] =
[[0.5488135 0.4236548 0.96366276 0.56804456 0.0202184 ]
[0.97861834 0.11827443 0.52184832 0.45615033 0.61209572]]
data[:, :, 1] =
[[0.71518937 0.64589411 0.38344152 0.92559664 0.83261985]
[0.79915856 0.63992102 0.41466194 0.56843395 0.616934 ]]
data[:, :, 2] =
[[0.60276338 0.43758721 0.79172504 0.07103606 0.77815675]
[0.46147936 0.14335329 0.26455561 0.0187898 0.94374808]]
data[:, :, 3] =
[[0.54488318 0.891773 0.52889492 0.0871293 0.87001215]
[0.78052918 0.94466892 0.77423369 0.6176355 0.6818203 ]]
Use squeeze to remove singleton dimensions from a tensor
np.random.seed(0)
X = ttb.tenrand((2, 5, 4)) # Create the data.
Y = X.copy()
# Add singleton dimension.
Y[0, 0, 0, 0] = Y[0, 0, 0]
# Remove singleton dimension.
Y.squeeze().isequal(X)
WARNING:root:Selected no copy, but input data isn't F ordered so must copy.
True
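Comparing the shapes before and after makes the effect concrete; a brief check (the shapes noted in the comments are what the assignment above should produce):
Y.shape # (2, 5, 4, 1): the assignment above added a trailing singleton dimension.
Y.squeeze().shape # (2, 5, 4): squeeze drops it, matching X again.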
Use double to convert a tensor to a (multidimensional) array
np.random.seed(0)
X = ttb.tenrand((2, 5, 4)) # Create the data.
X.double() # Converts X to an array of doubles.
WARNING:root:Selected no copy, but input data isn't F ordered so must copy.
array([[[0.5488135 , 0.71518937, 0.60276338, 0.54488318],
[0.4236548 , 0.64589411, 0.43758721, 0.891773 ],
[0.96366276, 0.38344152, 0.79172504, 0.52889492],
[0.56804456, 0.92559664, 0.07103606, 0.0871293 ],
[0.0202184 , 0.83261985, 0.77815675, 0.87001215]],
[[0.97861834, 0.79915856, 0.46147936, 0.78052918],
[0.11827443, 0.63992102, 0.14335329, 0.94466892],
[0.52184832, 0.41466194, 0.26455561, 0.77423369],
[0.45615033, 0.56843395, 0.0187898 , 0.6176355 ],
[0.61209572, 0.616934 , 0.94374808, 0.6818203 ]]])
X.data # Same thing.
array([[[0.5488135 , 0.71518937, 0.60276338, 0.54488318],
[0.4236548 , 0.64589411, 0.43758721, 0.891773 ],
[0.96366276, 0.38344152, 0.79172504, 0.52889492],
[0.56804456, 0.92559664, 0.07103606, 0.0871293 ],
[0.0202184 , 0.83261985, 0.77815675, 0.87001215]],
[[0.97861834, 0.79915856, 0.46147936, 0.78052918],
[0.11827443, 0.63992102, 0.14335329, 0.94466892],
[0.52184832, 0.41466194, 0.26455561, 0.77423369],
[0.45615033, 0.56843395, 0.0187898 , 0.6176355 ],
[0.61209572, 0.616934 , 0.94374808, 0.6818203 ]]])
Use ndims and shape to get the shape of a tensor
X.ndims # Number of dimensions (or ways).
3
X.shape # Tuple with the sizes of all dimensions.
(2, 5, 4)
X.shape[2] # Size of a single dimension.
4
Subscripted reference for a tensor
np.random.seed(0)
X = ttb.tenrand((2, 3, 4, 1)) # Create a 2x3x4x1 random tensor.
X[0, 0, 0, 0] # Extract a single element.
WARNING:root:Selected no copy, but input data isn't F ordered so must copy.
0.5488135039273248
It is possible to extract a subtensor that contains a single element. Observe that singleton dimensions are not dropped unless they are explicitly indexed, e.g., as above.
X[0, 0, 0, :] # Produces a 1-way tensor of shape (1,).
tensor of shape (1,) with order F
data[:] =
[0.5488135]
X[0, :, 0, :] # Produces a tensor of shape 3x1.
tensor of shape (3, 1) with order F
data[:, :] =
[[0.5488135 ]
[0.4236548 ]
[0.96366276]]
Moreover, the subtensor is automatically renumbered/resized in the same way that NumPy handles arrays, except that singleton dimensions are handled explicitly.
X[0:2, 0, [1, 3], :] # Produces a tensor of shape 2x2x1.
tensor of shape (2, 2, 1) with order F
data[:, :, 0] =
[[0.71518937 0.54488318]
[0.92559664 0.0871293 ]]
It’s also possible to extract a list of elements by passing in an array of subscripts or an array of linear indices.
subs = np.array([[0, 0, 0, 0], [1, 2, 3, 0]])
X[subs] # Extract 2 values by subscript.
array([0.5488135 , 0.78052918])
inds = np.array([0, 23])
X[inds] # Same thing with linear indices.
array([0.5488135 , 0.78052918])
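The linear indices follow the same Fortran (column-major) ordering as the underlying data, so they can be derived from subscripts with NumPy; for instance (an illustrative check, not part of the original example):
np.ravel_multi_index((1, 2, 3, 0), X.shape, order="F") # Gives 23, the linear index of subscript [1, 2, 3, 0].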
np.random.seed(0)
X = ttb.tenrand((10,)) # Create a random tensor.
X[0:5] # Extract a subtensor.
array([0.5488135 , 0.71518937, 0.60276338, 0.54488318, 0.4236548 ])
Subscripted assignment for a tensor
We can assign a single element, an entire subtensor, or a list of values for a tensor.
np.random.seed(0)
X = ttb.tenrand((2, 3, 4)) # Create some data.
X[0, 0, 0] = 0 # Replaces the [0,0,0] element.
X
WARNING:root:Selected no copy, but input data isn't F ordered so must copy.
tensor of shape (2, 3, 4) with order F
data[:, :, 0] =
[[0. 0.4236548 0.96366276]
[0.56804456 0.0202184 0.97861834]]
data[:, :, 1] =
[[0.71518937 0.64589411 0.38344152]
[0.92559664 0.83261985 0.79915856]]
data[:, :, 2] =
[[0.60276338 0.43758721 0.79172504]
[0.07103606 0.77815675 0.46147936]]
data[:, :, 3] =
[[0.54488318 0.891773 0.52889492]
[0.0871293 0.87001215 0.78052918]]
X[0, 0:2, 0:2] = np.ones((2, 2)) # Replaces a subtensor.
X
tensor of shape (2, 3, 4) with order F
data[:, :, 0] =
[[1. 1. 0.96366276]
[0.56804456 0.0202184 0.97861834]]
data[:, :, 1] =
[[1. 1. 0.38344152]
[0.92559664 0.83261985 0.79915856]]
data[:, :, 2] =
[[0.60276338 0.43758721 0.79172504]
[0.07103606 0.77815675 0.46147936]]
data[:, :, 3] =
[[0.54488318 0.891773 0.52889492]
[0.0871293 0.87001215 0.78052918]]
X[(0, 0, 0)], X[1, 0, 0] = [5, 7] # Replaces the (0,0,0) and (1,0,0) elements.
X[[0, 1]] = [5, 7] # Same as above using linear indices.
X
tensor of shape (2, 3, 4) with order F
data[:, :, 0] =
[[5. 1. 0.96366276]
[7. 0.0202184 0.97861834]]
data[:, :, 1] =
[[1. 1. 0.38344152]
[0.92559664 0.83261985 0.79915856]]
data[:, :, 2] =
[[0.60276338 0.43758721 0.79172504]
[0.07103606 0.77815675 0.46147936]]
data[:, :, 3] =
[[0.54488318 0.891773 0.52889492]
[0.0871293 0.87001215 0.78052918]]
It is possible to grow the tensor automatically by assigning elements outside its original range.
X[2, 1, 1] = 1 # Grows the shape of the tensor.
X
tensor of shape (3, 3, 4) with order F
data[:, :, 0] =
[[5. 1. 0.96366276]
[7. 0.0202184 0.97861834]
[0. 0. 0. ]]
data[:, :, 1] =
[[1. 1. 0.38344152]
[0.92559664 0.83261985 0.79915856]
[0. 1. 0. ]]
data[:, :, 2] =
[[0.60276338 0.43758721 0.79172504]
[0.07103606 0.77815675 0.46147936]
[0. 0. 0. ]]
data[:, :, 3] =
[[0.54488318 0.891773 0.52889492]
[0.0871293 0.87001215 0.78052918]
[0. 0. 0. ]]
Using negative indexing for the last array index
np.random.seed(0)
X = ttb.tenrand((2, 3, 4)) # Create some data.
np.prod(X.shape) - 1 # The index of the last element of the flattened tensor.
WARNING:root:Selected no copy, but input data isn't F ordered so must copy.
np.int64(23)
X[2, 2, 3] = 99 # Insert 99 into the last element (note that this grows the tensor).
X[-1] # Same as X[2, 2, 3].
np.float64(99.0)
X[0:-1]
array([0.5488135 , 0.56804456, 0. , 0.4236548 , 0.0202184 ,
0. , 0.96366276, 0.97861834, 0. , 0.71518937,
0.92559664, 0. , 0.64589411, 0.83261985, 0. ,
0.38344152, 0.79915856, 0. , 0.60276338, 0.07103606,
0. , 0.43758721, 0.77815675, 0. , 0.79172504,
0.46147936, 0. , 0.54488318, 0.0871293 , 0. ,
0.891773 , 0.87001215, 0. , 0.52889492, 0.78052918])
Use find for subscripts of nonzero elements of a tensor
np.random.seed(0)
X = ttb.tensor(3 * np.random.rand(2, 2, 2)) # Generate some data.
X
tensor of shape (2, 2, 2) with order F
data[:, :, 0] =
[[1.64644051 1.80829013]
[1.2709644 1.31276163]]
data[:, :, 1] =
[[2.1455681 1.63464955]
[1.93768234 2.675319 ]]
S, V = X.find() # Find all the nonzero subscripts and values.
S # Nonzero subscripts
array([[0, 0, 0],
[1, 0, 0],
[0, 1, 0],
[1, 1, 0],
[0, 0, 1],
[1, 0, 1],
[0, 1, 1],
[1, 1, 1]])
V # Values
array([[1.64644051],
[1.2709644 ],
[1.80829013],
[1.31276163],
[2.1455681 ],
[1.93768234],
[1.63464955],
[2.675319 ]])
larger_entries = X >= 2
larger_subs, larger_vals = larger_entries.find() # Find subscripts of values >= 2.
larger_subs, larger_vals
(array([[0, 0, 1],
[1, 1, 1]]),
array([[ True],
[ True]]))
V = X[larger_subs]
V
array([2.1455681, 2.675319 ])
Computing the Frobenius norm of a tensor
norm computes the Frobenius norm of a tensor, which corresponds to the Euclidean norm of the vectorized tensor.
np.random.seed(0)
X = ttb.tensor(np.ones((3, 2, 3)))
X.norm()
4.242640687119285
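Equivalently, the same value can be computed from the underlying array; this is shown only as a cross-check:
np.linalg.norm(X.data.ravel()) # Euclidean norm of the vectorized tensor; matches X.norm().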
Using reshape to rearrange elements in a tensor
reshape reshapes a tensor into the given shape. The total number of elements in the tensor cannot change.
np.random.seed(0)
X = ttb.tensor(np.random.rand(3, 2, 3, 10))
X.reshape((6, 30))
tensor of shape (6, 30) with order F
data[:, :] =
[[0.5488135 0.79172504 0.97861834 0.71518937 0.52889492 0.79915856
0.60276338 0.56804456 0.46147936 0.54488318 0.92559664 0.78052918
0.4236548 0.07103606 0.11827443 0.64589411 0.0871293 0.63992102
0.43758721 0.0202184 0.14335329 0.891773 0.83261985 0.94466892
0.96366276 0.77815675 0.52184832 0.38344152 0.87001215 0.41466194]
[0.15896958 0.97645947 0.31798318 0.11037514 0.4686512 0.41426299
0.65632959 0.97676109 0.0641475 0.13818295 0.60484552 0.69247212
0.19658236 0.73926358 0.56660145 0.36872517 0.03918779 0.26538949
0.82099323 0.28280696 0.52324805 0.09710128 0.12019656 0.09394051
0.83794491 0.2961402 0.5759465 0.09609841 0.11872772 0.9292962 ]
[0.72525428 0.61801543 0.8965466 0.50132438 0.4287687 0.36756187
0.95608363 0.13547406 0.43586493 0.6439902 0.29828233 0.89192336
0.42385505 0.56996491 0.80619399 0.60639321 0.59087276 0.70388858
0.0191932 0.57432525 0.10022689 0.30157482 0.65320082 0.91948261
0.66017354 0.65210327 0.7142413 0.29007761 0.43141844 0.99884701]
[0.26455561 0.3595079 0.57019677 0.77423369 0.43703195 0.43860151
0.45615033 0.6976312 0.98837384 0.56843395 0.06022547 0.10204481
0.0187898 0.66676672 0.20887676 0.6176355 0.67063787 0.16130952
0.61209572 0.21038256 0.65310833 0.616934 0.1289263 0.2532916
0.94374808 0.31542835 0.46631077 0.6818203 0.36371077 0.24442559]
[0.31856895 0.67781654 0.44712538 0.66741038 0.27000797 0.84640867
0.13179786 0.73519402 0.69947928 0.7163272 0.96218855 0.29743695
0.28940609 0.24875314 0.81379782 0.18319136 0.57615733 0.39650574
0.58651293 0.59204193 0.8811032 0.02010755 0.57225191 0.58127287
0.82894003 0.22308163 0.88173536 0.00469548 0.95274901 0.69253159]
[0.1494483 0.69742877 0.52103661 0.86812606 0.45354268 0.05433799
0.16249293 0.7220556 0.19999652 0.61555956 0.86638233 0.01852179
0.12381998 0.97552151 0.7936977 0.84800823 0.85580334 0.22392469
0.80731896 0.01171408 0.34535168 0.56910074 0.35997806 0.92808129
0.4071833 0.72999056 0.7044144 0.069167 0.17162968 0.03183893]]
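Since the element count must be preserved, a quick sanity check (not part of the original example; Y is our name for the reshaped tensor) is to compare the products of the shapes:
Y = X.reshape((6, 30))
np.prod(X.shape) == np.prod(Y.shape) # Both tensors contain 180 elements.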
Basic operations (plus, minus, and, or, etc.) on a tensor
tensors support plus, minus, times, divide, power, equals, and not-equals operators. A tensor can use these operators with another tensor or with a scalar (with the exception of the equality operators, which only accept tensors). All mathematical operators are elementwise operations.
np.random.seed(0)
A = ttb.tensor(np.floor(3 * np.random.rand(2, 2, 3))) # Generate some data.
B = ttb.tensor(np.floor(3 * np.random.rand(2, 2, 3)))
A.logical_and(B) # Calls and.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 0.]
[1. 1.]]
data[:, :, 1] =
[[1. 0.]
[1. 1.]]
data[:, :, 2] =
[[0. 1.]
[1. 1.]]
A.logical_or(B)
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[1. 1.]
[1. 1.]]
data[:, :, 2] =
[[1. 1.]
[1. 1.]]
A.logical_xor(B)
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[0. 1.]
[0. 0.]]
data[:, :, 1] =
[[0. 1.]
[0. 0.]]
data[:, :, 2] =
[[1. 0.]
[0. 0.]]
A == B # Calls eq.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[ True False]
[False False]]
data[:, :, 1] =
[[ True False]
[ True False]]
data[:, :, 2] =
[[False False]
[ True False]]
A != B # Calls neq.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[False True]
[ True True]]
data[:, :, 1] =
[[False True]
[False True]]
data[:, :, 2] =
[[ True True]
[False True]]
A > B # Calls gt.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[False True]
[False False]]
data[:, :, 1] =
[[False True]
[False True]]
data[:, :, 2] =
[[ True False]
[False False]]
A >= B # Calls ge.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[ True True]
[False False]]
data[:, :, 1] =
[[ True True]
[ True True]]
data[:, :, 2] =
[[ True False]
[ True False]]
A < B # Calls lt.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[False False]
[ True True]]
data[:, :, 1] =
[[False False]
[False False]]
data[:, :, 2] =
[[False True]
[False True]]
A <= B # Calls le.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[ True False]
[ True True]]
data[:, :, 1] =
[[ True False]
[ True False]]
data[:, :, 2] =
[[False True]
[ True True]]
A.logical_not() # Calls not.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[0. 0.]
[0. 0.]]
data[:, :, 1] =
[[0. 0.]
[0. 0.]]
data[:, :, 2] =
[[0. 0.]
[0. 0.]]
+A # Calls uplus.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[2. 1.]
[2. 2.]]
data[:, :, 2] =
[[1. 1.]
[2. 1.]]
-A # Calls uminus.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[-1. -1.]
[-1. -1.]]
data[:, :, 1] =
[[-2. -1.]
[-2. -2.]]
data[:, :, 2] =
[[-1. -1.]
[-2. -1.]]
A + B # Calls plus.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[2. 1.]
[3. 3.]]
data[:, :, 1] =
[[4. 1.]
[4. 3.]]
data[:, :, 2] =
[[1. 3.]
[4. 3.]]
A - B # Calls minus.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[ 0. 1.]
[-1. -1.]]
data[:, :, 1] =
[[0. 1.]
[0. 1.]]
data[:, :, 2] =
[[ 1. -1.]
[ 0. -1.]]
A * B # Calls times.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 0.]
[2. 2.]]
data[:, :, 1] =
[[4. 0.]
[4. 2.]]
data[:, :, 2] =
[[0. 2.]
[4. 2.]]
5 * A # Calls mtimes.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[5. 5.]
[5. 5.]]
data[:, :, 1] =
[[10. 5.]
[10. 10.]]
data[:, :, 2] =
[[ 5. 5.]
[10. 5.]]
A**B # Calls power.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[4. 1.]
[4. 2.]]
data[:, :, 2] =
[[1. 1.]
[4. 1.]]
A**2 # Calls power.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[4. 1.]
[4. 4.]]
data[:, :, 2] =
[[1. 1.]
[4. 1.]]
A / B # Calls ldivide.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. inf]
[0.5 0.5]]
data[:, :, 1] =
[[ 1. inf]
[ 1. 2.]]
data[:, :, 2] =
[[inf 0.5]
[1. 0.5]]
2 / A # Calls rdivide.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[2. 2.]
[2. 2.]]
data[:, :, 1] =
[[1. 2.]
[1. 1.]]
data[:, :, 2] =
[[2. 2.]
[1. 2.]]
Using tenfun for elementwise operations on one or more tensors
The method tenfun applies a specified function to a number of tensors. This can be used for any function that is not predefined for tensors.
np.random.seed(0)
A = ttb.tensor(np.floor(3 * np.random.rand(2, 2, 3), order="F")) # Generate some data.
A.tenfun(lambda x: x + 1) # Increment every element of A by one.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[2. 2.]
[2. 2.]]
data[:, :, 1] =
[[3. 2.]
[3. 3.]]
data[:, :, 2] =
[[2. 2.]
[3. 2.]]
# Wrap np.maximum in a function with a function signature that Python's inspect.signature can handle.
def max_elements(a, b):
    return np.maximum(a, b)
A.tenfun(max_elements, B) # Max of A and B, elementwise.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[2. 2.]]
data[:, :, 1] =
[[2. 1.]
[2. 2.]]
data[:, :, 2] =
[[1. 2.]
[2. 2.]]
np.random.seed(0)
C = ttb.tensor(
    np.floor(5 * np.random.rand(2, 2, 3), order="F")
) # Create another tensor.
def elementwise_mean(X):
    # Find the mean along the columns (axis 0).
    return np.floor(np.mean(X, axis=0), order="F")
A.tenfun(elementwise_mean, B, C) # Elementwise means for A, B, and C.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[2. 1.]
[2. 2.]]
data[:, :, 2] =
[[1. 2.]
[2. 1.]]
Use permute to reorder the modes of a tensor
X = ttb.tensor(np.arange(1, 25), shape=(2, 3, 4))
print(f"X is a {X}")
X is a tensor of shape (2, 3, 4) with order F
data[:, :, 0] =
[[1 3 5]
[2 4 6]]
data[:, :, 1] =
[[ 7 9 11]
[ 8 10 12]]
data[:, :, 2] =
[[13 15 17]
[14 16 18]]
data[:, :, 3] =
[[19 21 23]
[20 22 24]]
X.permute(np.array((2, 1, 0))) # Reverse the modes.
tensor of shape (4, 3, 2) with order F
data[:, :, 0] =
[[ 1 3 5]
[ 7 9 11]
[13 15 17]
[19 21 23]]
data[:, :, 1] =
[[ 2 4 6]
[ 8 10 12]
[14 16 18]
[20 22 24]]
Permuting a 1-dimensional tensor works correctly.
X = ttb.tensor(np.arange(1, 5), (4,))
X.permute(np.array(1))
tensor of shape (4,) with order F
data[:] =
[1 2 3 4]
Symmetrizing and checking for symmetry in a tensor
A tensor can be symmetrized in a collection of modes with the command symmetrize. The new, symmetric tensor is formed by averaging over all elements in the tensor that are required to be equal.
np.random.seed(0)
X = ttb.tensor(np.arange(1, 5), (4,)) # Create some data.
W = ttb.tensor(np.random.rand(4, 4, 4))
Y = W.symmetrize() # Symmetrize W in all modes.
An optional argument grps can also be passed to symmetrize to specify an array of modes with respect to which the tensor should be symmetrized.
np.random.seed(0)
X = ttb.tensor(np.random.rand(3, 3, 2))
Z = X.symmetrize(np.array((0, 1)))
Additionally, one can check for symmetry in tensors with the issymmetric function. As with symmetrize, a collection of modes can be passed as an argument.
Y.issymmetric()
True
Z.issymmetric(np.array((1, 2)))
False
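Since Z was symmetrized in modes 0 and 1, querying those modes should report symmetry; a complementary check to the one above:
Z.issymmetric(np.array((0, 1))) # Expected to be True.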
Displaying a tensor
print(X)
tensor of shape (3, 3, 2) with order F
data[:, :, 0] =
[[0.5488135 0.60276338 0.4236548 ]
[0.43758721 0.96366276 0.79172504]
[0.56804456 0.07103606 0.0202184 ]]
data[:, :, 1] =
[[0.71518937 0.54488318 0.64589411]
[0.891773 0.38344152 0.52889492]
[0.92559664 0.0871293 0.83261985]]
X # In the Python interface.
tensor of shape (3, 3, 2) with order F
data[:, :, 0] =
[[0.5488135 0.60276338 0.4236548 ]
[0.43758721 0.96366276 0.79172504]
[0.56804456 0.07103606 0.0202184 ]]
data[:, :, 1] =
[[0.71518937 0.54488318 0.64589411]
[0.891773 0.38344152 0.52889492]
[0.92559664 0.0871293 0.83261985]]