Deep Neural Networks
Deep Neural Networks (DNNs) are crucial in understanding and applying deep learning, especially in the field of computer vision.
Deep L-layer Neural Networks
For some general rules & notation:
- Layer Counting: In a neural network, we count only the hidden and output layers, not the input layer, because the input layer has no parameters (weights and biases). For example, Logistic Regression, which has no parameters in its input layer, is considered a 1-layer neural network.
- Network Types:
- Shallow Neural Networks have 1 or 2 layers.
- Deep Neural Networks have 3 or more layers.
- Key Terms:
- $L$: Total number of layers.
- $m$: Total number of training examples.
- $n^{[l]}$: Neurons in layer $l$.
- $n^{[0]}$: Neurons in the input layer, equal to the size of the input features, $n_x$.
- $n^{[L]}$: Neurons in the output layer, typically 1 in binary classification, representing $n_y$, the size of the output vector.
- $g^{[l]}$: Activation function in layer $l$.
- Activation: $a^{[l]} = g^{[l]}(z^{[l]})$, indicating activations in layer $l$.
- Weights: $W^{[l]}$ and bias $b^{[l]}$ for layer $l$, used for $z^{[l]} = W^{[l]} a^{[l-1]} + b^{[l]}$.
- Data Representation:
- Input Data $X$ and Output Data $Y$ are represented as matrices of input and output vectors respectively.
- $x^{(1)}$ and $y^{(1)}$: The first input and output vectors in the dataset.
- Vector and Matrix Dimensions:
- Understanding and maintaining correct dimensions for vectors like $z^{[l]}$, $a^{[l]}$, weights $W^{[l]}$, and biases $b^{[l]}$ is essential.
- $z^{[l]}$ is of shape $(n^{[l]}, 1)$
- $a^{[l]}$ is of shape $(n^{[l]}, 1)$
- $W^{[l]}$: a list of matrices of different shapes based on the number of neurons in the previous and the current layer. Shape is $(n^{[l]}, n^{[l-1]})$
- $b^{[l]}$: a list of vectors of different shapes based on the number of neurons in the current layer. Shape is $(n^{[l]}, 1)$
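As a quick illustration (my own sketch, not from the notes), initializing a parameter list with exactly these shapes could look like this, where `layer_dims` is a hypothetical list `[n_x, n_1, ..., n_L]`:

```python
import numpy as np

def initialize_parameters(layer_dims, seed=1):
    """Illustrative initialization: W[l] of shape (n[l], n[l-1]), b[l] of shape (n[l], 1)."""
    rng = np.random.default_rng(seed)
    parameters = {}
    L = len(layer_dims) - 1                     # number of layers (input layer not counted)
    for l in range(1, L + 1):
        parameters["W" + str(l)] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

params = initialize_parameters([5, 4, 3, 1])    # n_x = 5, two hidden layers, 1 output unit
print(params["W1"].shape, params["b1"].shape)   # (4, 5) (4, 1)
```

Small random weights and zero biases are one common choice here; the initialization scheme itself is a separate topic.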
Forward Propagation in Deep Networks
The general rule in forward propagation for one training input is:

$$z^{[l]} = W^{[l]} a^{[l-1]} + b^{[l]}, \qquad a^{[l]} = g^{[l]}(z^{[l]})$$

For $m$ training inputs, the equations are vectorized to cover all training examples at once:

$$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad A^{[l]} = g^{[l]}(Z^{[l]})$$

It's important to note that we can't compute forward propagation across all layers without a for loop over the layers, so that loop is unavoidable (a sketch follows the dimension list below). However, the dimensions of the matrices are really important to figure out:
In terms of dimensions, we have:
- $Z^{[l]}$ has a shape of $(n^{[l]}, m)$
- $A^{[l]}$ has a shape of $(n^{[l]}, m)$
- $W^{[l]}$ has a shape of $(n^{[l]}, n^{[l-1]})$
- $b^{[l]}$ has a shape of $(n^{[l]}, 1)$, broadcast across the $m$ columns
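A rough sketch of that unavoidable loop over layers, using the vectorized equations and shapes above (`parameters` and `activations` are hypothetical containers, not notation from the notes):

```python
import numpy as np

def model_forward(X, parameters, activations):
    """Illustrative forward pass over all layers; activations is a list of g[l] functions."""
    A = X                                  # A[0] = X, shape (n_x, m)
    L = len(activations)                   # number of layers
    for l in range(1, L + 1):
        W = parameters["W" + str(l)]       # shape (n[l], n[l-1])
        b = parameters["b" + str(l)]       # shape (n[l], 1)
        Z = np.dot(W, A) + b               # shape (n[l], m); b broadcasts over the m columns
        A = activations[l - 1](Z)          # shape (n[l], m)
    return A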
Getting Matrix Dimensions Right
Take the shallow NN from Shallow Neural Networks:
Using a pencil and paper approach can be the best way to ensure the dimensions of matrices like $W^{[l]}$, $b^{[l]}$, and their derivatives align correctly. For example, represent the first layer as $z^{[1]} = W^{[1]} x + b^{[1]}$:
- The dimension of $W^{[1]}$ is $(n^{[1]}, n^{[0]})$, i.e. you have a weight for every neuron for each input feature / neuron in the previous layer.
- The dimension of $b^{[1]}$ is $(n^{[1]}, 1)$, i.e. each neuron in that layer has its own bias.
- Make sure derivatives are of the same dimension as well.
- $dW^{[l]}$ should have the same shape as $W^{[l]}$, i.e. $(n^{[l]}, n^{[l-1]})$, while $db^{[l]}$ is the same shape as $b^{[l]}$, i.e. $(n^{[l]}, 1)$.
The dimensions of $z^{[l]}$, $a^{[l]}$ and their derivatives are $(n^{[l]}, 1)$ for a single example, or $(n^{[l]}, m)$ when vectorized over $m$ examples.
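A small sanity check (illustrative only, with an assumed architecture of $n^{[0]}=3$, $n^{[1]}=4$, $n^{[2]}=1$ and $m=10$) that these shapes line up:

```python
import numpy as np

n, m = [3, 4, 1], 10
X = np.random.randn(n[0], m)

W1, b1 = np.random.randn(n[1], n[0]), np.zeros((n[1], 1))
W2, b2 = np.random.randn(n[2], n[1]), np.zeros((n[2], 1))

Z1 = np.dot(W1, X) + b1
assert Z1.shape == (n[1], m)          # (4, 10)
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
assert Z2.shape == (n[2], m)          # (1, 10)

# Gradients mirror their forward counterparts:
dZ2 = np.random.randn(*Z2.shape)      # stand-in gradient, just to check shapes
dW2 = np.dot(dZ2, A1.T) / m
db2 = np.sum(dZ2, axis=1, keepdims=True) / m
assert dW2.shape == W2.shape and db2.shape == b2.shape
```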
Why Deep Representations?
A deep NN builds relations from input data to outputs that range from simple to complex. Each layer of a DNN forms a connection to the previous layer, with deeper layers computing increasingly complex features of the input:
- Face recognition application:
- Image → Edges → Face parts → Faces → Desired face
- Audio recognition application:
- Audio → Low level sound features like (sss,bb) → Phonemes → Words → Sentences
This progression is similar to the way human brains process information, moving from simple to complex interpretations.
Circuit Theory and Deep Learning
Deep learning and circuit theory share an interesting connection. Informally, there are functions that can be computed with a small L-layer deep neural network but that would require exponentially more hidden units if attempted with shallower networks.
Deep vs. Shallow Networks
The connection between deep learning and circuit theory provides fascinating insights. Certain functions, like XOR operations on a set of input features, can be computed more efficiently with a deep neural network as compared to a shallower one. A deep network can accomplish this with a depth that increases only logarithmically with the number of inputs and a modest number of units, making it significantly more efficient than shallow networks. Shallow networks, in contrast, require an exponentially large number of hidden units to compute the same function.
Consider a function $y$ that is the result of XOR operations on a set of input features $x_1, x_2, \ldots, x_n$:

$$y = x_1 \oplus x_2 \oplus \cdots \oplus x_n$$

A deep neural network with multiple layers can compute such a function efficiently by pairing inputs in a tree of XOR gates, using on the order of $n$ units and a depth that grows only logarithmically with $n$, i.e. $O(\log n)$ layers.
In contrast, a shallow network with only one or two layers would require an exponentially large number of hidden units to compute the same function, on the order of $2^{n-1}$.
In a shallow network, each XOR computation would need to be connected to every possible combination of inputs. For 8 inputs, this results in on the order of $2^{8-1} = 128$ unique XOR gates if we were to follow the structure where each XOR gate takes a unique combination of inputs.
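To make the gate counts concrete, here is a small illustrative comparison (my own sketch, assuming a balanced tree of pairwise XOR gates for the deep case and the $2^{n-1}$ enumeration for the shallow case):

```python
import math

def deep_tree_counts(n):
    """Balanced tree of pairwise XORs: n - 1 gates, depth ceil(log2(n))."""
    return n - 1, math.ceil(math.log2(n))

def shallow_units(n):
    """Single-hidden-layer construction enumerating input patterns: ~2**(n-1) units."""
    return 2 ** (n - 1)

for n in [4, 8, 16]:
    units, depth = deep_tree_counts(n)
    print(f"n={n:2d}: deep tree -> {units} gates, depth {depth}; "
          f"shallow -> ~{shallow_units(n)} hidden units")
```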
The implication is that deeper architectures can represent complex functions more compactly than shallow ones. For certain types of computations, particularly those that can be decomposed into hierarchical patterns or features, deep neural networks have a significant advantage. This difference is a critical reason why deep learning excels in tasks that involve complex, hierarchical data structures, such as image and speech recognition.
Building Blocks of Deep Neural Networks
Here is a schematic of what happens during forward and backward propagation. Forward propagation is used to calculate the cost function; backward propagation is used to calculate the gradients of the cost function with respect to the parameters.
With the derivatives calculated, we can update $W$ and $b$:

$$W^{[l]} := W^{[l]} - \alpha \, dW^{[l]}, \qquad b^{[l]} := b^{[l]} - \alpha \, db^{[l]}$$
Pseudo code for forward propagation for layer $l$:
```python
import numpy as np

def forward_propagation(A_prev, W, b, g):
    """
    Perform forward propagation for one layer of a neural network.

    Parameters:
        A_prev (numpy.ndarray): Activations from the previous layer
        W (numpy.ndarray): Weight matrix for the current layer
        b (numpy.ndarray): Bias vector for the current layer
        g (function): Activation function for the current layer

    Returns:
        A (numpy.ndarray): Output activations of the current layer
        cache (tuple): Tuple containing (Z, A_prev, W, b) for use in backpropagation
    """
    Z = np.dot(W, A_prev) + b    # linear step: Z[l] = W[l] A[l-1] + b[l]
    A = g(Z)                     # non-linear step: A[l] = g[l](Z[l])
    cache = (Z, A_prev, W, b)    # cached values needed by the backward pass
    return A, cache
```
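An example call (assumed shapes and a sigmoid activation, purely illustrative) might look like:

```python
import numpy as np

sigmoid = lambda Z: 1 / (1 + np.exp(-Z))

A_prev = np.random.randn(3, 5)      # (n[l-1], m) = 3 features, 5 examples
W = np.random.randn(4, 3) * 0.01    # (n[l], n[l-1]) = 4 units in this layer
b = np.zeros((4, 1))                # (n[l], 1)

A, cache = forward_propagation(A_prev, W, b, sigmoid)
print(A.shape)                      # (4, 5)
```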
Pseudo code for backward propagation for layer $l$ follows. Each activation function has a different derivative; therefore, during backpropagation you need to know which activation was used in the forward propagation to be able to compute the correct derivative.
```python
def backward_propagation(dA, cache, g_prime, m):
    """
    Perform backward propagation for one layer of a neural network.

    Parameters:
        dA (numpy.ndarray): Gradient of the cost with respect to this layer's activations
        cache (tuple): Cached data from forward propagation (Z, A_prev, W, b)
        g_prime (function): Derivative of the activation function for the current layer
        m (int): Number of training examples

    Returns:
        dA_prev (numpy.ndarray): Gradient of the cost with respect to the previous layer's activations
        dW (numpy.ndarray): Gradient of the weight matrix for the current layer
        db (numpy.ndarray): Gradient of the bias vector for the current layer
    """
    Z, A_prev, W, _ = cache
    dZ = dA * g_prime(Z)                          # dZ[l] = dA[l] * g'[l](Z[l])
    dW = np.dot(dZ, A_prev.T) / m                 # dW[l] = (1/m) dZ[l] A[l-1]^T
    db = np.sum(dZ, axis=1, keepdims=True) / m    # db[l] = (1/m) sum of dZ[l] over examples
    dA_prev = np.dot(W.T, dZ)                     # dA[l-1] = W[l]^T dZ[l]
    return dA_prev, dW, db
```
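For the `g` and `g_prime` arguments, standard activation/derivative pairs (ReLU and sigmoid shown here; the function names are my own) could be defined as:

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def relu_prime(Z):
    return (Z > 0).astype(float)      # derivative of ReLU: 1 where Z > 0, else 0

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def sigmoid_prime(Z):
    s = sigmoid(Z)
    return s * (1 - s)                # derivative of sigmoid: s(Z)(1 - s(Z))
```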
If we have used the cross-entropy loss $\mathcal{L}(a, y) = -\big(y \log a + (1 - y)\log(1 - a)\big)$, then the gradient of the loss with respect to the output activation is:
```python
def compute_gradient_loss(y, a):
    """
    Compute the gradient of the cross-entropy loss with respect to the predictions.

    Parameters:
        y (numpy.ndarray): True labels
        a (numpy.ndarray): Predicted output from the last layer of the network

    Returns:
        dA (numpy.ndarray): Gradient of the loss function
    """
    dA = -(np.divide(y, a) - np.divide(1 - y, 1 - a))    # dL/da = -(y/a - (1-y)/(1-a))
    return dA
```
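Putting the pieces together, here is a rough end-to-end sketch (my own assembly of the functions above, with an assumed 2-layer architecture: ReLU hidden layer, sigmoid output) of a training loop with gradient descent:

```python
import numpy as np

def train(X, Y, n_h, alpha=0.01, iterations=1000):
    """Illustrative training loop; X has shape (n_x, m), Y has shape (1, m)."""
    n_x, m = X.shape
    W1, b1 = np.random.randn(n_h, n_x) * 0.01, np.zeros((n_h, 1))
    W2, b2 = np.random.randn(1, n_h) * 0.01, np.zeros((1, 1))
    for _ in range(iterations):
        # forward propagation
        A1, cache1 = forward_propagation(X, W1, b1, relu)
        A2, cache2 = forward_propagation(A1, W2, b2, sigmoid)
        # backward propagation, starting from the loss gradient
        dA2 = compute_gradient_loss(Y, A2)
        dA1, dW2, db2 = backward_propagation(dA2, cache2, sigmoid_prime, m)
        _, dW1, db1 = backward_propagation(dA1, cache1, relu_prime, m)
        # gradient descent update: W := W - alpha dW, b := b - alpha db
        W1, b1 = W1 - alpha * dW1, b1 - alpha * db1
        W2, b2 = W2 - alpha * dW2, b2 - alpha * db2
    return W1, b1, W2, b2
```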
Parameters vs Hyperparameters
Being able to organize your hyperparameters well will help you be more efficient in developing your networks. The main parameters of a neural network are the weights $W^{[l]}$ and biases $b^{[l]}$. Hyperparameters (parameters that control the algorithm) are:
- Learning rate ($\alpha$)
- Number of iterations
- Number of hidden layers
- Number of hidden units
- Choice of activation functions.
- $\lambda$ is a hyperparameter that you can tune using a dev set if you have a regularization term in your cost function.
- Other hyperparameters explored later include the momentum term, mini-batch size, various forms of regularization parameters, etc.
The workflow usually follows an empirical approach, experimenting with hyperparameters and evaluating on a dev set; however, there are more systematic approaches available.
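As a toy illustration of that empirical approach (synthetic data, my own stand-in `evaluate` helper, and the `train` sketch from above), scanning a few learning rates and keeping the best one on the dev set might look like:

```python
import numpy as np

def evaluate(params, X, Y):
    """Illustrative accuracy check using the 2-layer forward pass."""
    W1, b1, W2, b2 = params
    A1 = relu(np.dot(W1, X) + b1)
    A2 = sigmoid(np.dot(W2, A1) + b2)
    return np.mean((A2 > 0.5) == Y)

rng = np.random.default_rng(0)
X_train, Y_train = rng.standard_normal((3, 200)), rng.integers(0, 2, (1, 200))
X_dev, Y_dev = rng.standard_normal((3, 50)), rng.integers(0, 2, (1, 50))

best_alpha, best_acc = None, 0.0
for alpha in [0.003, 0.01, 0.03, 0.1]:
    params = train(X_train, Y_train, n_h=8, alpha=alpha, iterations=500)
    acc = evaluate(params, X_dev, Y_dev)
    if acc > best_acc:
        best_alpha, best_acc = alpha, acc
print("best learning rate on the dev set:", best_alpha)
```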