• By knowing what human-level performance is, it is possible to tell whether the model is performing well on the training set or not.


  • Example: Cat vs Non-Cat


  [Figures: surpassing human-level performance; avoidable bias]


  • In this case, the human-level error can be used as a proxy for the Bayes error, since humans are good at identifying images.


  • You want to improve performance on the training set, but you can't do better than the Bayes error; a training error below the Bayes error means the model is overfitting the training set.


  • By knowing the Bayes error, it is easier to decide whether to focus on bias reduction or variance reduction tactics to improve the performance of the model, as sketched below.
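
As a minimal sketch of this decision rule (the function name and inputs are illustrative, not from the course):

```python
def diagnose(human_error, train_error, dev_error):
    """Use human-level error as a proxy for Bayes error and report
    whether bias or variance reduction is the more promising focus."""
    avoidable_bias = train_error - human_error  # gap to the Bayes-error proxy
    variance = dev_error - train_error          # gap between train and dev sets
    focus = "bias" if avoidable_bias > variance else "variance"
    return avoidable_bias, variance, focus
```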


  • Scenario A: there is a 7% gap between the training error and the human-level error. It means that the algorithm isn't fitting the training set well, since the target is around 1%. To resolve the issue, use bias reduction techniques such as training a bigger neural network or training for longer (see the sketch below).
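
A minimal Keras sketch of these bias reduction tactics; the layer sizes and epoch count are illustrative assumptions, not values from the course:

```python
import tensorflow as tf

# Bias reduction: give the model more capacity and train it for longer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),   # wider hidden layer
    tf.keras.layers.Dense(256, activation="relu"),   # extra depth
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cat vs non-cat output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=100)  # run more epochs than before
```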


  • Scenario B: the training error is good, since there is only a 0.5% gap with the human-level error. This gap between the training error and the human-level error is called the avoidable bias. The focus here is on reducing the variance, since the gap between the training error and the development error is 2%.


  • To resolve the issue, use variance reduction techniques such as regularization or gathering a bigger training set (both scenarios are worked through below).
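
Running both scenarios through the diagnose sketch above, with illustrative error values consistent with the gaps described (the exact numbers are assumptions):

```python
# Scenario A: human 1%, train 8%, dev 10% -> large avoidable bias
bias_a, var_a, focus_a = diagnose(0.01, 0.08, 0.10)
print(f"A: avoidable bias {bias_a:.1%}, variance {var_a:.1%} -> focus on {focus_a}")
# A: avoidable bias 7.0%, variance 2.0% -> focus on bias

# Scenario B: human 7.5%, train 8%, dev 10% -> variance dominates
bias_b, var_b, focus_b = diagnose(0.075, 0.08, 0.10)
print(f"B: avoidable bias {bias_b:.1%}, variance {var_b:.1%} -> focus on {focus_b}")
# B: avoidable bias 0.5%, variance 2.0% -> focus on variance
```

And a minimal sketch of variance reduction via L2 regularization (the penalty strength 1e-3 is an assumption; gathering more training data is the other main option):

```python
import tensorflow as tf

# Variance reduction: penalize large weights so the model generalizes better.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-3)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```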