Introduction to TensorFlow Fundamentals

This is the first lab in the Machine Learning with TensorFlow series. The primary focus of this lab is to help you understand the fundamental components of TensorFlow.

Learning Objectives:

  • Creating scalars, vectors, matrices, and tensors.
  • Inspecting different attributes of a tensor.
  • Performing basic operations on tensors.
  • Indexing and slicing tensors.
  • Reshaping and manipulating tensors.

To get the most out of this series:

  • you should know how to write code, ideally with some experience programming in Python.
  • you should have a decent understanding of variables, linear equations, matrices, calculus, and statistics.
  • don't just stick to the examples and code I've provided; play around and break things - that's the best way to learn.

Code/Environment Setup

To try out the code snippets or complete the to-do tasks in each section, you can use a Google Colaboratory notebook. Colab is essentially Google's hosted version of Jupyter Notebooks, where you can run your code in the browser.

You can learn more about working with Colabs by following this notebook.
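
To follow along, the only import you need is TensorFlow itself. Here is a minimal setup cell, assuming you are in Colab (or any environment) with TensorFlow 2.x preinstalled:

import tensorflow as tf

# Confirm the installed version (this series assumes TensorFlow 2.x)
print(tf.__version__)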

Inspecting the attributes of a given tensor

If you need to inspect a tensor, you can use the following properties and methods:

  1. Shape of the tensor
  2. Size (total number of elements) of the tensor
  3. Number of axes/dimensions of the tensor
  4. Number of elements along a particular axis
  5. Data type of the elements

Let's first create a rank-4 tensor using the tf.zeros() method, which creates a tensor with all elements set to 0. We only need to pass the required shape of the tensor.

# A rank-4 tensor of zeros with shape (3, 4, 5, 6) and 32-bit integer elements
r4_tensor = tf.zeros([3, 4, 5, 6], dtype='int32')

The TensorFlow documentation has an interesting visual that shows what a rank-4 tensor looks like.

Also, it is very important to understand what each axis means in a rank-4 tensor. For image data, the typical axis order is (batch, height, width, features): axis 0 indexes the examples in the batch, the middle axes are the spatial dimensions, and the last axis holds the features (channels).
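
For example, a batch of RGB images is commonly stored this way. A small illustrative snippet (the shape values below are my own, not from the lab):

# Illustrative only: a batch of 32 RGB images, each 224x224 pixels
images = tf.zeros([32, 224, 224, 3])
print(images.shape)  # (32, 224, 224, 3) -> (batch, height, width, channels)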

Now, to print all the information about this tensor, you can use the following attributes and methods:

# Let's look at all the attributes of this tensor:
print(f"Shape of the Tensor: {r4_tensor.shape}")
print(f"Number of axes in the Tensor: {r4_tensor.ndim}")
print(f"Size/Total number of elements in the tensor (3*4*5*6): {tf.size(r4_tensor).numpy()}")
print(f"Type of each element: {r4_tensor.dtype}")
print(f"Elements along the first axis: {r4_tensor.shape[0]}")
print(f"Elements along the last axis: {r4_tensor.shape[-1]}")

Go over each line of code and try to play around with different values and tensors.
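
For the [3, 4, 5, 6] shape used above, the printed values are easy to verify. Here is a quick sanity check you can add (my own addition, not part of the original lab code):

# Quick sanity check of the attributes printed above
assert r4_tensor.shape == (3, 4, 5, 6)
assert r4_tensor.ndim == 4
assert int(tf.size(r4_tensor)) == 3 * 4 * 5 * 6   # 360 elements in total
assert r4_tensor.dtype == tf.int32                 # set via dtype='int32'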

Some other methods of creating tensors that you should try out:

Creating a tensor using tf.Variable(): a variable maintains a shared, persistent state that a program can modify, which is not possible with tf.constant():

v = tf.Variable([3,4,5,6], dtype='int32')
print(v)
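
To see what "persistent state" means in practice, a variable can be updated in place. A small illustration (my own addition):

# Variables can be modified in place; constants cannot
v.assign([10, 20, 30, 40])   # replace the contents
v.assign_add([1, 1, 1, 1])   # element-wise in-place addition
print(v)                     # values are now [11, 21, 31, 41]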

Creating tensors with uniformly distributed data:

# Values drawn uniformly from [0, 1) by default
u = tf.random.uniform([2, 3])
print(u)

Note: the base tf.Tensor class requires tensors to be "rectangular", that is, along each axis every element is the same size.

The output is a 2x3 tensor of random floats in [0, 1); your exact values will differ from run to run.

Apart from these methods, there is a lot more you can learn about setting seeds for reproducibility and about other random distributions here.
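
As a small example of reproducibility (a sketch of my own; see the TensorFlow random-number guide for the full details), setting a global seed makes the random ops below repeat the same sequence on every fresh run:

# Set a global seed so re-running the program reproduces the same values
tf.random.set_seed(42)
a = tf.random.uniform([2, 3])
b = tf.random.normal([2, 3], mean=0.0, stddev=1.0)
print(a)
print(b)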

Next Steps!

Whoa! Tensors pack a lot underneath. There is a lot you can do with them, and images, text, and structured data all have to be turned into tensors before you feed them to a neural network. Isn't that amazing?

Whenever we talk about numbers, the first thing that comes to mind is mathematical operations, and yes, we can perform mathematical operations on tensors.

Let's see how!
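
As a tiny preview before the next lab (a sketch of my own; it is covered properly there), element-wise math on tensors looks like this:

x = tf.constant([[1, 2], [3, 4]])
y = tf.constant([[5, 6], [7, 8]])

print(tf.add(x, y))       # element-wise addition, same as x + y
print(tf.multiply(x, y))  # element-wise multiplication, same as x * y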
