• The TensorFlow tutorials are written as Jupyter notebooks and run directly in Google Colab—a hosted notebook environment that requires no setup. Click the Run in Google Colab button.


  • Colab link - Open Colab
  • We will import TensorFlow and use it to build a simple neural network.


  •   
    # TensorFlow comes pre-installed in Colab; otherwise install it with: pip install tensorflow
    import tensorflow as tf
    
      
    
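  • As a quick sanity check (an optional addition, not part of the original steps), you can print the imported TensorFlow version:


  •
    print("TensorFlow version:", tf.__version__)

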
  • We then load the MNIST dataset, which contains 28×28 grayscale (single-channel) images of the handwritten digits 0-9. For image data it is standard practice to scale the pixel values from the integer range 0-255 down to 0-1 by dividing each pixel by 255.


  •  
    mnist = tf.keras.datasets.mnist

    # Download (on first use) and split into training and test sets
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    # Scale the uint8 pixel values [0, 255] to floats in [0, 1]
    x_train, x_test = x_train / 255.0, x_test / 255.0
      
    
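  • Before building the model, it can help to confirm the shapes and dtypes of the arrays we just loaded (an optional inspection step added here, not part of the original walkthrough):


  •
    # Expect x_train: (60000, 28, 28) floats in [0, 1]; y_train: (60000,) integer labels 0-9
    print(x_train.shape, x_train.dtype, y_train.shape, y_train.dtype)
    print(x_test.shape, y_test.shape)

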
  • For the neural network, we create a Sequential model and add layers to it. The Flatten layer converts each 28×28 image into a flat vector of 784 values, which can then be fed into Dense layers. The final Dense layer has 10 units, one per digit class; note that it has no softmax activation, so the model outputs raw logits (the softmax step is handled separately below).


  •  
    model = tf.keras.models.Sequential([
      tf.keras.layers.Flatten(input_shape=(28, 28)),   # 28x28 image -> 784-vector
      tf.keras.layers.Dense(128, activation='relu'),   # fully connected hidden layer
      tf.keras.layers.Dropout(0.2),                    # randomly drop 20% of units during training
      tf.keras.layers.Dense(10)                        # 10 output logits, one per class
    ])
      
    
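  • To double-check the architecture (an optional step added here for clarity), `model.summary()` prints each layer's output shape and parameter count:


  •
    # Expect ~101,770 trainable parameters (784*128 + 128 for the hidden layer,
    # plus 128*10 + 10 for the output layer)
    model.summary()

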
  • For each example, the model returns a vector of logits (log-odds scores), one for each class.


  •  
    predictions = model(x_train[:1]).numpy()
    predictions
      
    
  • The tf.nn.softmax function converts these logits to "probabilities" for each class:


  •  
    tf.nn.softmax(predictions).numpy()
      
    
  • Note: It is possible to use `tf.nn.softmax` as the activation function for the last layer of the network. While this makes the model output more directly interpretable, the approach is discouraged because it is impossible to provide an exact and numerically stable loss calculation for all models when using a softmax output.

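  • A small illustration of the numerical-stability point (the logits and label here are made up for this example and are not part of the tutorial): with an extreme logit, the softmax probability of the true class underflows or gets clipped, so a loss computed from probabilities is badly underestimated, while the logit-based computation stays accurate:


  •
    # Hypothetical example: very confident logits, but the "true" class is the unlikely one
    extreme_logits = tf.constant([[50.0, -50.0]])
    true_label = tf.constant([1])

    from_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    from_probs = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

    print(from_logits(true_label, extreme_logits).numpy())                # ~100, the correct value
    print(from_probs(true_label, tf.nn.softmax(extreme_logits)).numpy())  # much smaller, due to clipping/underflow
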

  • The `losses.SparseCategoricalCrossentropy` loss takes a vector of logits and the index of the true class, and returns a scalar loss for each example.


  •  
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
      
    
  • This loss is equal to the negative log probability of the true class: it is zero if the model is sure of the correct class.


  • Since the untrained model gives roughly uniform probabilities (about 1/10 for each class), the initial loss should be close to -log(1/10) ≈ 2.3.


  •  
    # Loss for the untrained model on the first example (expect roughly 2.3)
    loss_fn(y_train[:1], predictions).numpy()



    # Configure the model for training: Adam optimizer, the loss defined above,
    # and track classification accuracy as a metric.
    model.compile(optimizer='adam',
                  loss=loss_fn,
                  metrics=['accuracy'])
      
    
  • The `Model.fit` method adjusts the model parameters to minimize the loss:


  •  
    model.fit(x_train, y_train, epochs=5)
      
    
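  • `Model.fit` also returns a Keras `History` object whose `history` dictionary records the loss and accuracy for each epoch; if you want to inspect the training curve, a minimal sketch (note that this re-runs training) looks like this:


  •
    # Capture per-epoch metrics (this runs training again)
    history = model.fit(x_train, y_train, epochs=5)
    print(history.history['loss'])      # training loss per epoch
    print(history.history['accuracy'])  # training accuracy per epoch

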
  • The `Model.evaluate` method checks the model's performance, usually on a validation set or test set.


  •  
    model.evaluate(x_test,  y_test, verbose=2)
      
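  • With `metrics=['accuracy']`, `Model.evaluate` returns the test loss and test accuracy, so you can capture them directly (a small usage sketch added here):


  •
    test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
    print("Test accuracy:", test_acc)
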
    
  • The image classifier is now trained to ~98% accuracy on this dataset.


  • If you want your model to return a probability, you can wrap the trained model and attach the softmax layer to it:


  •  
    probability_model = tf.keras.Sequential([
      model,
      tf.keras.layers.Softmax()
    ])
    
    
    probability_model(x_test[:5])
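

  • To turn these probabilities into predicted digit labels (a small follow-up sketch, not part of the original steps), take the argmax over the class axis and compare with the true labels:


  •
    predicted_labels = tf.argmax(probability_model(x_test[:5]), axis=1)
    print(predicted_labels.numpy())  # predicted digits for the first five test images
    print(y_test[:5])                # the corresponding true labels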