What happens when our learning algorithm does not predict well? What can we do? The list of possible adjustments is limited only by our creativity but, according to Andrew Ng, we usually end up taking one or more of these actions: getting more training data, adding more features, removing some features, or fine-tuning the regularization. But which one is the best option for… Read more →
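The usual way to pick among those actions is to compare the training error against the validation error, as in Andrew Ng's bias/variance diagnostics. A minimal sketch, assuming made-up error values and purely illustrative thresholds:

```python
# Hypothetical train/validation errors for an already-fitted model; in
# practice these come from evaluating the model on each data split.
train_error = 0.02
val_error = 0.35

# Rule of thumb: a large gap between training and validation error suggests
# high variance (overfitting); two similarly high errors suggest high bias.
gap = val_error - train_error
if train_error > 0.2:    # threshold is illustrative only
    print("High bias: try adding features or reducing regularization")
elif gap > 0.1:          # threshold is illustrative only
    print("High variance: try more data, fewer features, or more regularization")
else:
    print("Model looks reasonable")
```

With these numbers the gap is 0.33, so the sketch points at the high-variance remedies.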


# Bias and Variance

One of the key aspects of understanding prediction models is understanding the prediction error. It measures how well the model predicts, and a simple way to compute it is to compare the predicted values against their real observed counterparts (assuming a supervised learning scenario). But the job does not end with calculating the error, because this might be large and hence it would… Read more →
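That comparison of predicted versus observed values can be boiled down to a single number. A minimal sketch using the mean squared error, with made-up values for illustration:

```python
# Labeled data: observed targets and the model's predictions for them.
observed  = [3.0, 5.0, 7.5, 9.0]
predicted = [2.8, 5.4, 7.0, 9.3]

# Mean squared error: average of the squared differences.
mse = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
print(mse)  # 0.135
```

Other error measures (mean absolute error, classification accuracy, …) follow the same pattern: aggregate the per-example differences into one score.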

# Cross-Validation Strategies

When you are building a prediction model, let’s say a linear regression to keep it simple, you need to be aware of how well that model predicts. A common evaluation technique, with its origin in the statistical world, is the evaluation of residuals. Residuals are defined as the difference between the predicted and observed values (remember that we use labeled… Read more →
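Sticking to the linear-regression example, residuals as defined above (predicted minus observed) can be sketched like this, assuming hypothetical fitted coefficients for a model y = a·x + b:

```python
a, b = 2.0, 1.0                    # hypothetical fitted coefficients
xs = [1.0, 2.0, 3.0]               # inputs
ys = [3.1, 4.9, 7.2]               # observed (labeled) values

predicted = [a * x + b for x in xs]
residuals = [p - y for p, y in zip(predicted, ys)]
print([round(r, 2) for r in residuals])  # [-0.1, 0.1, -0.2]
```

Small residuals on the training data alone can be misleadingly optimistic, which is what motivates the cross-validation strategies the post goes on to describe.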

# Predicting with Labeled data

Imagine that you have to implement a model that predicts handwritten numbers and you choose to do it with a Neural Network. You could just trust your instincts and invent both the number of units per layer and the set of Θ values. Applying the Forward Propagation algorithm would suffice to come up with a prediction. Unfortunately, that model would predict with uncertain accuracy (just as… Read more →

# Understanding Neural Networks (part 2): Vectorized Forward Propagation

This is the second post in a series where I explain my understanding of how Neural Networks work. I am not an expert on the topic, yet :), but I have been exploring Machine Learning over the last few months (check my study list and my exercises from the Coursera courses here and here). I think it is content worth sharing as I… Read more →
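The vectorized version of forward propagation processes all training examples at once with matrix products. A minimal sketch for a single hidden layer, assuming sigmoid activations and randomly generated weight matrices Θ1 and Θ2 (all sizes are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(A):
    # Prepend a column of ones (the bias unit) to every example.
    return np.hstack([np.ones((A.shape[0], 1)), A])

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))        # 5 examples, 3 input features
Theta1 = rng.standard_normal((4, 4))   # 3 inputs (+bias) -> 4 hidden units
Theta2 = rng.standard_normal((2, 5))   # 4 hidden units (+bias) -> 2 outputs

A1 = add_bias(X)                       # input layer with bias, shape (5, 4)
A2 = add_bias(sigmoid(A1 @ Theta1.T))  # hidden activations + bias, (5, 5)
H  = sigmoid(A2 @ Theta2.T)            # network outputs, shape (5, 2)
print(H.shape)  # (5, 2)
```

Each row of H is the network's output for one example, so no explicit loop over examples is needed.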

# Understanding Neural Networks (part 1): Foundations

This is the first post in a series where I explain my understanding of how Neural Networks work. I am not an expert on the topic, yet :), but I have been exploring Machine Learning over the last few months (check my study list and my exercises from the Coursera courses here and here). I think it is content worth sharing as… Read more →

# Learning Machine Learning

Introductory MOOCs

- Machine Learning Foundations — Carlos Guestrin and Emily Fox (University of Washington), Coursera. My thoughts on the course and my exercises on Github
- Machine Learning — Andrew Ng (Stanford), Coursera. My exercises on Github. Extended lectures on youtube

Neural Networks (and Deep Learning)

- Neural Networks and Deep Learning (ebook)
- A step by step propagation example (post)
- Getting started with Deep Learning… Read more →