TensorFlow Basics - 1

My initial learnings on tensors
Tensor Basics
Author

Arun Koundinya Parasa

Published

March 20, 2024

These are my initial learnings on tensors; I hope they are helpful for everyone.

Open In Colab

import tensorflow as tf

What is TensorFlow?

TensorFlow is a deep learning framework developed by Google, started internally in 2011 and made publicly available in 2015.

It is a flexible, scalable solution that enables us to build models on top of the existing framework.

It is like the scikit-learn library but more advanced and flexible, as we can custom-build our own neural networks.

print(tf.__version__)
2.15.0

Basic Unit in the TensorFlow Framework - the Tensor

Tensors are multi-dimensional arrays designed for numerical data representation; although they share some similarities with NumPy arrays, they possess certain unique features that give them an advantage in deep learning tasks. One of these key advantages is their ability to utilize hardware acceleration from GPUs and TPUs to significantly speed up computational operations, which is especially useful when working with input data such as images, text, and videos.

In simple words, ML needs numbers; for higher-dimensional data we need multi-dimensional arrays. These arrays are called tensors, and they are specially designed to use hardware capabilities to accelerate learning.
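As a quick aside on that hardware point, we can ask TensorFlow which devices it can see; this is a small sketch (the GPU list will simply be empty on a CPU-only machine such as a default Colab runtime):

```python
import tensorflow as tf

# List the devices TensorFlow can use; GPUs/TPUs appear here when available.
devices = tf.config.list_physical_devices()
print(devices)                                  # always includes the CPU
print(tf.config.list_physical_devices('GPU'))   # empty list when no GPU is attached
```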

Creating Tensors

tf.constant(1, shape=(2,2))
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[1, 1],
       [1, 1]], dtype=int32)>

Here we have created a basic tensor filled with the constant 1, with a shape of (2, 2), i.e., two rows and two columns.

Its datatype is a 32-bit integer.

Its values are backed by a NumPy array.
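To make that NumPy connection concrete, here is a small sketch showing how to move between the two representations with `.numpy()` and `tf.convert_to_tensor`:

```python
import tensorflow as tf

t = tf.constant(1, shape=(2, 2))
arr = t.numpy()                   # copy the tensor's values into a NumPy array
back = tf.convert_to_tensor(arr)  # wrap a NumPy array as a tensor again
print(type(arr), arr.dtype)
print(back)
```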

## Manually providing the values
y = tf.constant([[1, 2, 3], [4, 5, 6]])
print(y)
tf.Tensor(
[[1 2 3]
 [4 5 6]], shape=(2, 3), dtype=int32)

Instead of giving a shape, here we have manually given the values; TensorFlow infers the shape (2, 3) from them.

tf.rank(y)
<tf.Tensor: shape=(), dtype=int32, numpy=2>

Here we check the rank of the tensor, i.e., its number of dimensions.

print("The rank of scalar is " , tf.rank(tf.constant(1)))
print("The rank of vector is " , tf.rank(tf.constant(1,shape=(5))))
print("The rank of matrix is " , tf.rank(tf.constant(1,shape=(5,4))))
print("The rank of rank3tensor is " , tf.rank(tf.constant(1,shape=(4,2,3))))
The rank of scalar is  tf.Tensor(0, shape=(), dtype=int32)
The rank of vector is  tf.Tensor(1, shape=(), dtype=int32)
The rank of matrix is  tf.Tensor(2, shape=(), dtype=int32)
The rank of rank3tensor is  tf.Tensor(3, shape=(), dtype=int32)

Can there be more than three dimensions? Of course, but we cannot easily represent them pictorially.

print("The rank of rank5tensor is " , tf.rank(tf.constant(1,shape=(4,2,3,3,3))))
print("The rank of rank9tensor is " , tf.rank(tf.constant(1,shape=(4,2,3,3,3,1,1,3,3))))
The rank of rank5tensor is  tf.Tensor(5, shape=(), dtype=int32)
The rank of rank9tensor is  tf.Tensor(9, shape=(), dtype=int32)
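High-rank tensors come up constantly in practice even if we cannot draw them; as an illustrative sketch, a batch of RGB images is a rank-4 tensor with shape (batch, height, width, channels):

```python
import tensorflow as tf

# A common rank-4 tensor in deep learning: a batch of 32 RGB images,
# each 224x224 pixels with 3 color channels.
images = tf.zeros([32, 224, 224, 3])
print(tf.rank(images).numpy())  # 4
print(images.shape)             # (32, 224, 224, 3)
```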

Basic Tensor Operations

tf.constant(1.1, shape=(2,2))
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1.1, 1.1],
       [1.1, 1.1]], dtype=float32)>

TypeCasting

The moment we used 1.1, the datatype changed to float. Let's check how to typecast integer to float and vice versa.

x_int = tf.constant(1, shape=(2,2))
print(x_int)
x_float = tf.cast(x_int, dtype = tf.float32)
print(x_float)
x_float_int = tf.cast(x_float, tf.int32)
print(x_float_int)
tf.Tensor(
[[1 1]
 [1 1]], shape=(2, 2), dtype=int32)
tf.Tensor(
[[1. 1.]
 [1. 1.]], shape=(2, 2), dtype=float32)
tf.Tensor(
[[1 1]
 [1 1]], shape=(2, 2), dtype=int32)
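Why does casting matter? Unlike NumPy, TensorFlow does not implicitly upcast, so mixing dtypes in arithmetic fails. A small sketch:

```python
import tensorflow as tf

x_int = tf.constant(1, shape=(2, 2))
x_float = tf.constant(1.5, shape=(2, 2))

# Adding int32 and float32 tensors directly raises an error;
# there is no implicit upcasting as in NumPy.
try:
    x_int + x_float
except (tf.errors.InvalidArgumentError, TypeError):
    print("cannot add int32 and float32 directly")

# Cast first, then operate.
result = tf.cast(x_int, tf.float32) + x_float
print(result)  # all elements 2.5, dtype=float32
```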

Indexing

Similar to NumPy arrays, we can index tensors.

y =  tf.constant([[1,2,3],[4,5,6]])
y
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>
y[0]
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 2, 3], dtype=int32)>
y[0][0]
<tf.Tensor: shape=(), dtype=int32, numpy=1>
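Beyond chained indexing like `y[0][0]`, tensors also support NumPy-style multi-axis indexing and slicing in a single bracket; a small sketch:

```python
import tensorflow as tf

y = tf.constant([[1, 2, 3], [4, 5, 6]])
print(y[0, 0])   # same element as y[0][0], in one index operation
print(y[:, 1])   # second column: [2 5]
print(y[0, 1:])  # slice of the first row: [2 3]
```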

Expanding a matrix

y =  tf.constant([[1,2,3],[4,5,6]])
y
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>
print(tf.expand_dims(y,axis=0)) ## Expanding at the beginning of the tensor
print(tf.expand_dims(y,axis=1)) ## Expanding in the middle of the tensor (for this example)
print(tf.expand_dims(y,axis=-1)) ## Expanding at the End of the tensor
tf.Tensor(
[[[1 2 3]
  [4 5 6]]], shape=(1, 2, 3), dtype=int32)
tf.Tensor(
[[[1 2 3]]

 [[4 5 6]]], shape=(2, 1, 3), dtype=int32)
tf.Tensor(
[[[1]
  [2]
  [3]]

 [[4]
  [5]
  [6]]], shape=(2, 3, 1), dtype=int32)
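The inverse of `expand_dims` is `tf.squeeze`, which removes size-1 axes; a quick sketch:

```python
import tensorflow as tf

y = tf.constant([[1, 2, 3], [4, 5, 6]])
expanded = tf.expand_dims(y, axis=0)  # shape (1, 2, 3)
squeezed = tf.squeeze(expanded)       # removes all size-1 axes -> back to (2, 3)
print(expanded.shape, squeezed.shape)
```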

Tensor Aggregation

y
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>
print("Smallest of the number is ",tf.reduce_min(y).numpy())
print("Largest of the number is ",tf.reduce_max(y).numpy())
Smallest of the number is  1
Largest of the number is  6
print("Sum of the numbers are ",tf.reduce_sum(y).numpy())
print("Average of the numbers are ",tf.reduce_mean(y).numpy())
Sum of the numbers are  21
Average of the numbers are  3
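Two points worth sketching here: reductions can also run per-axis rather than over the whole tensor, and note that the average above printed 3 (not 3.5) because `reduce_mean` on an int32 tensor truncates; cast to float for the exact mean.

```python
import tensorflow as tf

y = tf.constant([[1, 2, 3], [4, 5, 6]])

# Per-axis reductions instead of reducing the whole tensor.
print(tf.reduce_sum(y, axis=0).numpy())  # column sums: [5 7 9]
print(tf.reduce_sum(y, axis=1).numpy())  # row sums: [ 6 15]

# reduce_mean on int32 truncates (21 // 6 = 3); cast for the exact mean.
print(tf.reduce_mean(tf.cast(y, tf.float32)).numpy())  # 3.5
```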

Matrices with all ones, zeros, and the identity

z =  tf.ones([2,3])
print(z)
print(" ")

x  =  tf.constant(1,shape=(2,3),dtype=tf.float32)
print(x)
print(" ")

z =  tf.zeros([2,3])
print(z)
print(" ")

z = tf.eye(3)
print(z)
tf.Tensor(
[[1. 1. 1.]
 [1. 1. 1.]], shape=(2, 3), dtype=float32)
 
tf.Tensor(
[[1. 1. 1.]
 [1. 1. 1.]], shape=(2, 3), dtype=float32)
 
tf.Tensor(
[[0. 0. 0.]
 [0. 0. 0.]], shape=(2, 3), dtype=float32)
 
tf.Tensor(
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]], shape=(3, 3), dtype=float32)
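Closely related helpers, shown here as a small sketch, create such matrices matching the shape and dtype of an existing tensor, or fill a shape with any constant value:

```python
import tensorflow as tf

y = tf.constant([[1, 2, 3], [4, 5, 6]])
ones = tf.ones_like(y)       # ones with the same shape and dtype as y
zeros = tf.zeros_like(y)     # zeros with the same shape and dtype as y
sevens = tf.fill([2, 3], 7)  # any constant value, similar to tf.constant with a shape
print(ones, zeros, sevens, sep="\n")
```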

Reshaping and Transposing Tensors

x_initial = tf.constant(1, shape=(4,3))
print(x_initial)
tf.Tensor(
[[1 1 1]
 [1 1 1]
 [1 1 1]
 [1 1 1]], shape=(4, 3), dtype=int32)
tf.reshape(x_initial,shape=(2,2,3))
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[1, 1, 1],
        [1, 1, 1]],

       [[1, 1, 1],
        [1, 1, 1]]], dtype=int32)>
tf.reshape(x_initial,shape=(2,6))
<tf.Tensor: shape=(2, 6), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1]], dtype=int32)>
tf.reshape(x_initial,shape=(12,1))
<tf.Tensor: shape=(12, 1), dtype=int32, numpy=
array([[1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1],
       [1]], dtype=int32)>

Here we can reshape to any other shape; however, the product of the dimensions (the total number of elements, 12 in this case) must remain the same.

tf.reshape(x_initial,shape=-1) #Flatten the array
<tf.Tensor: shape=(12,), dtype=int32, numpy=array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)>
tf.transpose(x_initial)
<tf.Tensor: shape=(3, 4), dtype=int32, numpy=
array([[1, 1, 1, 1],
       [1, 1, 1, 1],
       [1, 1, 1, 1]], dtype=int32)>

The initial shape of (4, 3) changed to (3, 4).
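Since the all-ones example hides the difference, here is a sketch with distinct values showing that reshape and transpose are not the same operation: reshape keeps the row-major element order, while transpose swaps the axes.

```python
import tensorflow as tf

m = tf.constant([[1, 2, 3], [4, 5, 6]])

# Both results have shape (3, 2), but the elements land in different places.
print(tf.reshape(m, (3, 2)).numpy())  # [[1 2] [3 4] [5 6]]
print(tf.transpose(m).numpy())        # [[1 4] [2 5] [3 6]]
```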

Distributions

x1  = tf.random.normal((3,3))
print(x1)
print(" ")

x1  = tf.random.normal((3,3),mean = 0, stddev =1 )
print(x1)
print(" ")

x2 =  tf.random.uniform((3,3))
print(x2)
print(" ")
tf.Tensor(
[[ 0.591363    0.12791212  0.38762185]
 [ 0.26025018  1.7209182  -0.7802837 ]
 [ 0.89150804 -0.9648455   0.64507854]], shape=(3, 3), dtype=float32)
 
tf.Tensor(
[[-1.0034778   0.05435322 -1.3141975 ]
 [-0.17819698 -1.9136705  -0.9396771 ]
 [ 2.2143493   0.33600262  0.8174351 ]], shape=(3, 3), dtype=float32)
 
tf.Tensor(
[[0.04812372 0.79855204 0.8067709 ]
 [0.67069924 0.06617999 0.14941025]
 [0.31500185 0.17441607 0.7476181 ]], shape=(3, 3), dtype=float32)
 
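Your outputs for these random tensors will differ from the ones shown above on every run; as a small sketch, setting the global seed makes them reproducible:

```python
import tensorflow as tf

# Resetting the global seed replays the same sequence of random tensors.
tf.random.set_seed(42)
a = tf.random.normal((2, 2))
tf.random.set_seed(42)
b = tf.random.normal((2, 2))
same = bool(tf.reduce_all(a == b))
print(same)  # True
```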

Mathematical Operations

  • Addition
  • Subtraction
  • Multiplication
  • Division
x4 = tf.random.normal((3,3))
print(x4)
print(" ")
y4 = tf.random.normal((3,3))
print(y4)
print(" ")
tf.Tensor(
[[ 0.5908952   3.5452905  -0.34438497]
 [-0.5237503   1.2899861  -0.50684774]
 [ 1.2187229   0.50014    -0.6212071 ]], shape=(3, 3), dtype=float32)
 
tf.Tensor(
[[-0.7346279   2.9705956  -0.45318994]
 [-0.947753    0.6556651   3.018978  ]
 [-1.5799906  -1.2278746   0.57451475]], shape=(3, 3), dtype=float32)
 
print(x4+y4)
print(" ")
tf.add(x4,y4)
tf.Tensor(
[[-0.14373273  6.5158863  -0.7975749 ]
 [-1.4715033   1.9456513   2.5121303 ]
 [-0.3612677  -0.7277346  -0.04669237]], shape=(3, 3), dtype=float32)
 
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[-0.14373273,  6.5158863 , -0.7975749 ],
       [-1.4715033 ,  1.9456513 ,  2.5121303 ],
       [-0.3612677 , -0.7277346 , -0.04669237]], dtype=float32)>
print(x4-y4)
print(" ")
tf.subtract(x4,y4)
tf.Tensor(
[[ 1.3255231   0.5746949   0.10880497]
 [ 0.4240027   0.63432103 -3.525826  ]
 [ 2.7987137   1.7280147  -1.1957219 ]], shape=(3, 3), dtype=float32)
 
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[ 1.3255231 ,  0.5746949 ,  0.10880497],
       [ 0.4240027 ,  0.63432103, -3.525826  ],
       [ 2.7987137 ,  1.7280147 , -1.1957219 ]], dtype=float32)>
print(x4*y4)
print(" ")
tf.multiply(x4,y4)
tf.Tensor(
[[-0.43408808 10.531624    0.1560718 ]
 [ 0.49638593  0.8457989  -1.5301622 ]
 [-1.9255708  -0.6141092  -0.35689265]], shape=(3, 3), dtype=float32)
 
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[-0.43408808, 10.531624  ,  0.1560718 ],
       [ 0.49638593,  0.8457989 , -1.5301622 ],
       [-1.9255708 , -0.6141092 , -0.35689265]], dtype=float32)>
print(x4/y4)
print(" ")
tf.divide(x4,y4)
tf.Tensor(
[[-0.8043462   1.1934612   0.7599131 ]
 [ 0.5526232   1.9674467  -0.16788718]
 [-0.77134824 -0.40732172 -1.0812727 ]], shape=(3, 3), dtype=float32)
 
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[-0.8043462 ,  1.1934612 ,  0.7599131 ],
       [ 0.5526232 ,  1.9674467 , -0.16788718],
       [-0.77134824, -0.40732172, -1.0812727 ]], dtype=float32)>
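All four element-wise operations above also support NumPy-style broadcasting, so the operands need not have identical shapes; a small sketch:

```python
import tensorflow as tf

m = tf.constant([[1., 2., 3.], [4., 5., 6.]])  # shape (2, 3)
row = tf.constant([10., 20., 30.])             # shape (3,)

# The row is broadcast across both rows of m; scalars broadcast too.
print((m + row).numpy())  # [[11. 22. 33.] [14. 25. 36.]]
print((m * 2).numpy())
```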

Matrix Multiplications

x4_new = tf.random.normal((3,2))
y4_new = tf.random.normal((2,3))

print(tf.matmul(x4_new,y4_new))
tf.Tensor(
[[ 0.5675167   0.29923582 -2.018334  ]
 [-0.5145724   1.9392778  -1.5560541 ]
 [ 1.0566927  -2.3777525   0.73752195]], shape=(3, 3), dtype=float32)
x4_new @ y4_new
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[ 0.5675167 ,  0.29923582, -2.018334  ],
       [-0.5145724 ,  1.9392778 , -1.5560541 ],
       [ 1.0566927 , -2.3777525 ,  0.73752195]], dtype=float32)>

These are two ways of doing matrix multiplication with tensors. There is also a way using the dot product, which we will discuss later :)
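As a quick preview of that dot-product route, `tf.tensordot` with `axes=1` contracts the inner dimension and matches `tf.matmul` for 2-D tensors; a small sketch:

```python
import tensorflow as tf

x = tf.random.normal((3, 2))
y = tf.random.normal((2, 3))

# For 2-D tensors, tensordot with axes=1 is the same contraction as matmul.
dot = tf.tensordot(x, y, axes=1)
mat = tf.matmul(x, y)
close = bool(tf.reduce_all(tf.abs(dot - mat) < 1e-5))
print(close)  # True
```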