Understanding Neural Networks (part 2): Vectorized Forward Propagation

This is the second post in a series where I explain my understanding of how Neural Networks work. I am not an expert on the topic, yet :), but I have been exploring Machine Learning over the last few months (check my study list and my exercises from the Coursera courses here and here). I think it is content worth sharing, as I believe it can help readers in a similar situation. These are the posts that compose the series:

  • Part 1: Foundations
  • Part 2: Forward Propagation Vectorization (this post)
  • Part 3: Gradient computation and Back-propagation (coming soon)

I don’t want this series to be a ‘yasonn’ (yet another series on neural networks), so I will try to convey my own view and understanding of how they work.

 

Foundations of NN

This post dives into the first of the two most important techniques in NNs: forward propagation. I assume that you already know the basics of NNs and that the following topics look familiar to you:

  • How a single unit works
  • How the different units are interconnected
  • The effects of this interconnection in terms of weights and activation functions
  • Why and where in the overall optimization process the cost computation is performed
  • The Forward Propagation algorithm as the main tool for the cost computation

Otherwise, read the foundations post first. Even if you already master the NN technique, I still recommend reading the foundations post, because you will understand my view of NNs and it will be easier to follow this second one.

You can see this text as an extension of the ‘Forward Propagation and Cost Computation’ section in the foundations post. There I gave an overview of the optimization process, focused on (1) applying the forward propagation algorithm to each of the training samples to compute h, and (2) computing the cost J using the long formula that takes the obtained h values (and obviously some others).
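For reference, this is the kind of formula I am referring to: the regularized cross-entropy cost used in the Coursera course. Take it as a sketch (the exact version is in the foundations post); note also that the regularization term usually skips the bias weights:

    J(\Theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}
        \left[ y_k^{(i)} \log h_k(x^{(i)}) + \left(1 - y_k^{(i)}\right) \log\left(1 - h_k(x^{(i)})\right) \right]
        + \frac{\lambda}{2m} \sum_{n} \sum_{a} \sum_{b} \left(\Theta_{ab}^{(n)}\right)^2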

Forward Propagation and Cost Computation

If the overall process is clear, let’s now focus on how to get the values of h. That is, the forward propagation algorithm.

 

Forward Propagation

Training a NN usually requires non-negligible computing power. Recall that the cost J is computed in every optimization step, and for each cost calculation every training sample needs to be forward propagated to compute h. Since training sets are usually large and optimization algorithms might need lots of iterations, it makes sense to find an efficient way to perform a single forward propagation.

And this is where vectorization has something to say. Note that we are not deriving an alternative algorithm to compute h. We will just organize the data in a vectorized form so we can use advanced libraries that optimize vector computations. As simple as that.

Or not. While the idea of vectorizing is certainly simple when you think of a single matrix, it gets more complex when you are trying to convert a rather complex structure like a NN into something completely vectorized. Again, it is not a conceptual problem, but at the beginning it is easy to get confused by the large number of matrices in play, and especially by their different shapes.
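To make the point concrete, here is a minimal sketch in Python with NumPy (the library I use in my exercises; the sizes and names are made up for illustration) of the same weighted sum computed with explicit loops and with a single matrix-vector product:

    import numpy as np

    L, M = 25, 400                       # hypothetical layer sizes
    Theta1 = np.random.randn(L, M + 1)   # weights connecting layer 1 to layer 2
    a1 = np.random.randn(M + 1)          # activations of layer 1 (bias included)

    # Explicit loops: one weighted sum per unit in layer 2
    z2_loop = np.zeros(L)
    for i in range(L):
        for j in range(M + 1):
            z2_loop[i] += Theta1[i, j] * a1[j]

    # Vectorized: a single matrix-vector product does the same job
    z2_vec = Theta1 @ a1

    assert np.allclose(z2_loop, z2_vec)

Both versions compute the same values; the vectorized one simply delegates the loops to optimized library code.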

 

Notation

Vectorization requires formality, so let’s start by describing the notation. Note that there is nothing new here; we are only giving names to elements of the NN already introduced in the foundations post.

  • a_i^(n): Value of unit i in layer n
  • z_i^(n): Internal value of each unit that sums all its (weighted) inputs
  • g(z): Activation function (usually the sigmoid function)
  • Θ_ab^(n): Weight of the link that connects unit b in layer n to unit a in layer n+1 (note that a and b are in the opposite order)
  • X_m: Value m of the input sample X
  • h_k: Value of output prediction k

Notation for Vectorization of the Forward Propagation Algorithm (single unit)

But remember that we are going for a vectorized implementation, so we will really be using a^(n), z^(n), Θ^(n), X and h.
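As a quick illustration of this notation, a single unit could be computed like this in Python with NumPy (the sigmoid choice for g and all the values are just assumptions for the sketch):

    import numpy as np

    def sigmoid(z):
        """Activation function g(z) = 1 / (1 + e^(-z)), applied element-wise."""
        return 1.0 / (1.0 + np.exp(-z))

    # One unit i in layer n+1: weighted sum of the previous layer, then activation
    a_prev  = np.array([1.0, 0.5, -1.2, 0.3])   # a^(n), with the bias unit '1' in front
    theta_i = np.array([0.1, -0.4, 0.25, 0.7])  # row i of Theta^(n)

    z_i = theta_i @ a_prev   # z_i^(n+1): sum of the weighted inputs
    a_i = sigmoid(z_i)       # a_i^(n+1): value of the unit after the activation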

 

Formal Forward Propagation

Finally, this is where everything gets interesting. The diagram below shows a NN with:

  • An input layer (n=1) with M units (each one fed by the corresponding M input values X)
  • Two hidden layers (n=2 and n=3) with L and J units respectively
  • An output layer (n=4) with K units (representing the values of the K predictions h)

The diagram also shows the weights Θ that connect each pair of layers: Θ^(1) for the weights connecting layer 1 to layer 2, Θ^(2) connecting layer 2 to layer 3, and Θ^(3) connecting layer 3 to layer 4. Note that the index of Θ indicates the origin layer.

 

Forward Propagation with formal notation

 

Note that I have drawn the elements in different colors in order to differentiate the layers. That is, everything painted blue represents information related to layer 1, everything red to layer 2, and so on. Also see that the Θ weights use another set of three colors because they really link two layers (e.g. pink for Θ^(1), whose weights connect layers 1 and 2). Furthermore, observe that the colors are also used to indicate what the Θ indexes represent. For instance, Θ_L1^(1) represents the weight of the link that connects unit 1 in layer 1 to unit L in the upcoming layer (layer 2). This will be very useful later on when understanding the matrices involved in the vectorization.

You might have noticed a set of additional units on top of each layer (the dashed circles). These are the bias units, and they are used to introduce a scalar contribution to the model. They can be seen as a kind of intercept, similar to the role of b in the linear equation y = ax + b. If we look at the bias unit from the single-unit perspective, we can interpret the bias as a shift of the activation function to the left or right (as this stackoverflow answer clearly explains). Note that the value of a bias unit is always 1 (a_0^(n) = 1) and the real modeling comes from the weights that link that particular unit to those in the next layer (Θ_i0^(n)).
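In code, handling a bias unit usually amounts to prepending a constant 1 to the activation vector before it is weighted by Θ. A minimal NumPy sketch (the values are illustrative):

    import numpy as np

    a2 = np.array([0.7, 0.2, 0.9])   # activations g(z^(2)) of a hidden layer
    a2 = np.insert(a2, 0, 1.0)       # prepend the bias unit a_0^(2) = 1
    # a2 is now [1.0, 0.7, 0.2, 0.9], ready to be weighted by Theta^(2)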

Now that everything has been introduced, we can start using the diagram to talk about Forward Propagation. Let’s start by analyzing the initial propagation step between the first two layers:

  1. First of all, it is easy to see that the values that compose one X sample feed the NN and set the values in layer 1 (a_1^(1) to a_M^(1)).
  2. These values, together with the bias unit a_0^(1), (1) get weighted by Θ^(1), (2) propagate to layer 2, and (3) their summation is set as the value of z_i^(2) at each unit i in layer 2.
  3. The activation function is applied to each z_i^(2) and we then obtain a_i^(2).

Note that the same process is repeated at each propagation step. That is, each a_i^(2) is weighted by the corresponding value in Θ^(2) in order to come up with z_i^(3). Then, the activation function is applied in order to obtain a_i^(3). The same process yields the final a_i^(4) values, which in turn correspond to the h values.

As you can see, the only complexity so far is getting used to the new notation with letters, subscripts and superscripts.
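To make this scalar view explicit, here is a sketch of one propagation step written with plain loops, unit by unit (a hypothetical helper for illustration, not the vectorized version we are after):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def propagate_step_scalar(a_prev, Theta):
        """One propagation step computed unit by unit, following the scalar notation.

        a_prev: activations of layer n, bias unit included (length S_n + 1)
        Theta:  weight matrix Theta^(n), shape (S_{n+1}, S_n + 1)
        Returns the activations of layer n+1 (without its bias unit).
        """
        num_units = Theta.shape[0]
        a_next = np.zeros(num_units)
        for i in range(num_units):            # for each unit i in layer n+1...
            z_i = 0.0
            for j in range(len(a_prev)):      # ...sum all its weighted inputs
                z_i += Theta[i, j] * a_prev[j]
            a_next[i] = sigmoid(z_i)          # ...and apply the activation function
        return a_next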

 

Vectorization details

I previously tried to convince you that vectorization is needed to speed up the calculations. Let’s take another step and convert the entire procedure into a set of matrix-based operations. To assist the explanation, follow the picture below, which shows all the vectorization details.

 

Forward Propagation Vectorization Matrices

 

As we did before, let’s start by analyzing the initial propagation step between the first two layers:

  1. First of all, note how the vector a^(1) is constructed: the ‘1’ value from the bias unit a_0^(1) and the values {X_1, …, X_M} from the input sample that feeds the NN. It is important to understand the dimensions of the matrices: a^(1) has length M+1 because of the bias unit plus the M input values from X.
  2. Computing z^(2) is just a matter of computing the matrix multiplication Θ^(1)a^(1). Note that this multiplication actually performs both the corresponding weighting of a^(1) by Θ^(1) and the summation of all the terms. Also observe how the first element of a^(1), the ‘1’ from the bias unit, multiplies the first column of Θ^(1), the bias weights Θ_i0^(1). Again, note that the vector a^(1) has length M+1, the matrix Θ^(1) has dimensions L x (M+1), and hence the vector z^(2) has length L.
  3. Finally, compute a^(2) by just applying the activation function g(z) to z^(2) and prepending the corresponding bias unit. Note that the vector a^(2) has length L+1 because of the L-sized z^(2) plus the bias unit a_0^(2).

And that’s it for the first propagation step. Exactly the same matrix operations are applied in the upcoming propagation steps in order to compute the J+1 values of the units of the other hidden layer, a^(3), and the K values of the output units, a^(4), which correspond to the K predicted values h. Note that a_0^(4) is not present in a^(4) because this is the output layer and there is no ‘next’ layer to propagate to.
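Putting the three steps together, a vectorized forward propagation over the whole 4-layer network of the diagram could look like the following NumPy sketch (the M-L-J-K shapes are the ones described above; all names and the example sizes are illustrative, not the exact code of my exercise):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward_propagation(x, Thetas):
        """Vectorized forward propagation for a single input sample x.

        x:      input sample of length M
        Thetas: list [Theta1, Theta2, Theta3] with shapes
                (L, M+1), (J, L+1) and (K, J+1) respectively
        Returns h, the K output predictions a^(4).
        """
        a = np.insert(x, 0, 1.0)            # a^(1): prepend the bias unit
        for n, Theta in enumerate(Thetas):
            z = Theta @ a                   # z^(n+1) = Theta^(n) a^(n)
            a = sigmoid(z)                  # a^(n+1) = g(z^(n+1))
            if n < len(Thetas) - 1:         # no bias unit in the output layer
                a = np.insert(a, 0, 1.0)
        return a                            # h = a^(4)

    # Usage with random weights for an example 4-3-3-2 network
    M, L, J, K = 4, 3, 3, 2
    Thetas = [np.random.randn(L, M + 1),
              np.random.randn(J, L + 1),
              np.random.randn(K, J + 1)]
    h = forward_propagation(np.random.randn(M), Thetas)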

I don’t think it is a complex procedure, because it is based on just weighting and propagating values from layer to layer. As I said before, the only complexity comes from the fact that when you try to vectorize a complex structure it is very easy to get overwhelmed by indices and matrix dimensions. I encourage you to follow all the matrix details in order to get used to them and get ready to implement it all.

Finally, want to see the vectorization in action? Check the forward_propagate(X, theta1, theta2) function in the code from my NN exercise.

 

Conclusion

First of all, congratulations if you reached this point. That was a pretty detailed description :)

In this post I tried to provide a clear explanation of the Forward Propagation procedure and a detailed description of a vectorized version.

In the following post I will review in more detail the other big topic in the NN technique:

  • The gradient computation (with Backpropagation)

Hope to see you there!
