• The TensorFlow tutorials are written as Jupyter notebooks and run directly in Google Colab—a hosted notebook environment that requires no setup. Click the Run in Google Colab button.


  • Colab link: Open in Colab


  • On this page, we again use a neural network to classify the MNIST dataset, this time building the model by subclassing `tf.keras.Model`.


  • Keras is a high-level API that runs on top of TensorFlow. `tensorflow.keras.layers` provides neural-network layers such as `Dense`, `Flatten`, and `Conv2D` (for 2D convolutions).


  •  
    import tensorflow as tf
    
    from tensorflow.keras.layers import Dense, Flatten, Conv2D
    from tensorflow.keras import Model
    
      
    
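  • As a quick aside (a minimal sketch, not part of the original tutorial), each of these layers is a callable object that creates its weights on first use and maps tensors to tensors:


  •  
    layer = Dense(4, activation='relu')
    out = layer(tf.ones((2, 8)))   # weights of shape (8, 4) are created on this first call
    print(out.shape)               # (2, 4)
      
    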
  • Load and prepare the MNIST dataset. MNIST images have dimensions 28 × 28 × 1: height, width, and depth (also called channels). For color images the channels are red, green, and blue; the MNIST images are grayscale, so there is a single channel. `Conv2D` layers expect a 4D tensor of shape (batch, height, width, channels), so we need to add a new axis to the images.


  •  
    mnist = tf.keras.datasets.mnist
    
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    
    # Add a channels dimension
    x_train = x_train[..., tf.newaxis].astype("float32")
    x_test = x_test[..., tf.newaxis].astype("float32")
      
    
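  • A quick sanity check (not in the original tutorial) confirms the expected 4D shapes:


  •  
    print(x_train.shape)  # (60000, 28, 28, 1)
    print(x_test.shape)   # (10000, 28, 28, 1)
      
    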
  • Use `tf.data` to batch and shuffle the dataset:


  •  
    train_ds = tf.data.Dataset.from_tensor_slices(
        (x_train, y_train)).shuffle(10000).batch(32)
    
    test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
      
    
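  • To see what the pipeline yields, you can pull a single batch (a quick check, not part of the tutorial):


  •  
    images, labels = next(iter(train_ds))
    print(images.shape, labels.shape)  # (32, 28, 28, 1) (32,)
      
    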
  • Build the `tf.keras` model using the Keras model subclassing API:


  •  
    class MyModel(Model):
      def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')  # 32 filters, 3x3 kernel
        self.flatten = Flatten()                       # (batch, 26, 26, 32) -> (batch, 21632)
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10)                            # raw logits, one per digit class
    
      def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)
    
    # Create an instance of the model
    model = MyModel()
      
    
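  • Because the model is defined by subclassing, its weights are only created on the first call. Calling it once on a dummy batch (a minimal check, not in the original tutorial) confirms the output shape:


  •  
    logits = model(tf.zeros((1, 28, 28, 1)))
    print(logits.shape)  # (1, 10) -- one raw logit per digit class
      
    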
  • Choose an optimizer and loss function for training:


  •  
    loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    
    optimizer = tf.keras.optimizers.Adam()
      
    
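  • The model's last layer returns raw logits (no softmax), which is why the loss is built with `from_logits=True`. A tiny illustration with made-up values (not from the tutorial):


  •  
    # For true label 2, the loss is small when logit 2 dominates.
    demo = loss_object([2], [[0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
    print(float(demo))  # ~0.06
      
    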
  • Select metrics to measure the loss and the accuracy of the model. These metrics accumulate the values over epochs and then print the overall result.


  •  
    train_loss = tf.keras.metrics.Mean(name='train_loss')
    train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
    
    test_loss = tf.keras.metrics.Mean(name='test_loss')
    test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
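      
    
  • These metric objects keep a running aggregate across update calls until they are reset. A tiny illustration of how `Mean` behaves (a minimal sketch, not part of the tutorial):


  •  
    m = tf.keras.metrics.Mean()
    m(2.0)                     # running mean: 2.0
    m(4.0)                     # running mean: 3.0
    print(float(m.result()))   # 3.0
    m.reset_states()           # clear the accumulator, e.g. at an epoch boundary
      
    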
  • Use `tf.GradientTape` to train the model:


  •  
    @tf.function
    def train_step(images, labels):
      with tf.GradientTape() as tape:
        # training=True is only needed if there are layers with different
        # behavior during training versus inference (e.g. Dropout).
        predictions = model(images, training=True)
        loss = loss_object(labels, predictions)
      gradients = tape.gradient(loss, model.trainable_variables)
      optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    
      train_loss(loss)
      train_accuracy(labels, predictions)
      
    
  • Test the model:


  •  
    @tf.function
    def test_step(images, labels):
      # training=False is only needed if there are layers with different
      # behavior during training versus inference (e.g. Dropout).
      predictions = model(images, training=False)
      t_loss = loss_object(labels, predictions)
    
      test_loss(t_loss)
      test_accuracy(labels, predictions)
    
    EPOCHS = 5
    
    for epoch in range(EPOCHS):
      # Reset the metrics at the start of the next epoch
      train_loss.reset_states()
      train_accuracy.reset_states()
      test_loss.reset_states()
      test_accuracy.reset_states()
    
      for images, labels in train_ds:
        train_step(images, labels)
    
      for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)
    
      print(
        f'Epoch {epoch + 1}, '
        f'Loss: {train_loss.result()}, '
        f'Accuracy: {train_accuracy.result() * 100}, '
        f'Test Loss: {test_loss.result()}, '
        f'Test Accuracy: {test_accuracy.result() * 100}'
      )
      
    
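  • Once training finishes, the model can classify new images; the logits just need a softmax to become probabilities (a minimal sketch, not part of the tutorial):


  •  
    probs = tf.nn.softmax(model(x_test[:1], training=False))
    print(int(tf.argmax(probs, axis=1)[0]))  # predicted digit for the first test image
      
    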
  • The image classifier is now trained to ~98% accuracy on this dataset.