What happens when our learning algorithm does not predict well? What can we do? The list of possible adjustments is as large as our creativity but, according to Andrew Ng, we usually end up taking one or more of these actions:

- Get more training data
- Get more features
- Remove some features
- Fine-tune the regularization

But which one is the best option for… Read more →
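A common way to choose among those actions is to compare the training error with the cross-validation error. As a minimal sketch (assuming NumPy, a closed-form ridge fit, and entirely hypothetical error thresholds), the decision rule might look like this:

```python
import numpy as np

def fit_ridge(X, y, lam):
    # Closed-form ridge regression: theta = (X'X + lam*I)^-1 X'y
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

def mse(theta, X, y):
    # Mean squared error of a linear model
    return np.mean((X @ theta - y) ** 2)

def diagnose(train_err, cv_err, gap_ratio=2.0, bias_threshold=1.0):
    # Crude heuristic (thresholds are illustrative, not canonical):
    # high training error suggests high bias; a large gap between
    # training and cross-validation error suggests high variance.
    if train_err > bias_threshold:
        return "high bias: get more features or lower the regularization"
    if cv_err > gap_ratio * train_err:
        return "high variance: get more data, remove features, or raise the regularization"
    return "looks ok"
```

With such a rule, "get more training data" is only worth the effort in the high-variance case; it does not help a high-bias model.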

# Tag Archive for validation

# Cross-Validation Strategies

When you are building a prediction model, let’s say a linear regression to keep it simple, you need to know how well that model predicts. A common evaluation technique, with its origin in the statistical world, is the analysis of residuals. Residuals are defined as the difference between the observed and predicted values (remember that we use labeled… Read more →
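For the linear regression case, residuals are straightforward to compute. A minimal sketch, assuming NumPy and an ordinary least-squares fit via `numpy.linalg.lstsq`:

```python
import numpy as np

def fit_linear(X, y):
    # Ordinary least squares; prepend a column of ones for the intercept
    Xb = np.c_[np.ones(len(X)), X]
    theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return theta

def residuals(theta, X, y):
    # Residual = observed value minus predicted value
    Xb = np.c_[np.ones(len(X)), X]
    return y - Xb @ theta
```

On data that is exactly linear the residuals are essentially zero; on real data, their size and pattern tell you how much the model misses and whether it misses systematically.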

# Predicting with Labeled Data

Imagine that you have to implement a model that predicts handwritten digits and you choose to do it with a Neural Network. You could just trust your instincts and invent both the number of units per layer and the set of Θ values. Applying the Forward Propagation algorithm would then suffice to come up with a prediction. Unfortunately, that model would almost certainly predict with unknown accuracy (just as… Read more →
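Forward propagation itself can be sketched in a few lines. In this illustrative version (NumPy assumed, sigmoid activations, and each Θ matrix including a column for the bias unit, as in the course notation), the prediction is simply the output unit with the highest activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagate(x, thetas):
    # Push one input vector through every layer; thetas[i] maps
    # layer i (plus its bias unit) to layer i+1.
    a = x
    for theta in thetas:
        a = sigmoid(theta @ np.r_[1.0, a])  # prepend the bias unit
    return a

def predict_digit(x, thetas):
    # The predicted digit is the index of the most activated output unit
    return int(np.argmax(forward_propagate(x, thetas)))
```

With invented Θ values this runs fine and produces *a* digit, which is exactly the point of the post: without evaluating the model against labeled data, you have no idea whether that digit is right.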