A note :
This article presumes that you are unreasonably fascinated by the mathematical world of deep learning. You want to dive deep into the math of deep learning to know what’s actually going on under the hood.
Some Information about this article:
In this article we’ll discuss and implement RNNs from scratch. We’ll then use them to generate text (like poems or C++ code). I was inspired to write this article after reading Andrej Karpathy’s blog post “The Unreasonable Effectiveness of Recurrent Neural Networks”. The text generated by this code is not perfect, but it gives an intuition about how text generation actually works. Our input will be a plain text file containing some text (such as a Shakespeare poem), and our program will generate output similar to the input, which may or may not make sense.
Let’s dive into the mathematical world of RNNs.
So what is the basic structure of an RNN?
Don’t worry about any of the terms. We’ll discuss each of them. They are pretty easy to understand.
In Fig 1:
h(t): the hidden state of the RNN at time t
fw: the non-linearity function (usually tanh)
Whh: a randomly initialized weight matrix, used when we move from one hidden state to the next (h to h).
Wxh: a randomly initialized weight matrix, used when we move from the input to the hidden state (x to h).
Why: a randomly initialized weight matrix, used when we move from the hidden state to the output (h to y).
bh (not in the figure): a randomly initialized bias column vector, added when computing h(t).
by (not in the figure): a randomly initialized bias column vector, added when computing y(t).
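Putting these pieces together, one step of the recurrence in Fig 1 can be sketched in a few lines of NumPy. The helper name `rnn_step` and the toy dimensions in the comments are illustrative, not part of the article’s code:

```python
import numpy as np

def rnn_step(x, h_prev, Whh, Wxh, Why, bh, by):
    # h(t) = fw(Whh . h(t-1) + Wxh . x(t) + bh), with fw = tanh
    h = np.tanh(Whh @ h_prev + Wxh @ x + bh)
    # y(t) = Why . h(t) + by  (unnormalized scores over the vocabulary)
    y = Why @ h + by
    return h, y
```

Because tanh squashes its input into (-1, 1), every entry of the new hidden state stays bounded no matter how long the sequence is.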
We start by importing data:
Download the data from here.
char_to_ix: a dictionary that assigns a unique number to each unique character.
ix_to_char: a dictionary that assigns a unique character to each number.
We work with the assigned number of each character, predict the number of the next character, and then use this predicted number to look up the next character.
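The two dictionaries can be built in a couple of lines. In the full program `data` is read from the downloaded text file; here a short inline sample string keeps the sketch self-contained:

```python
# In the full program `data` is read from the input text file;
# an inline sample string keeps this sketch self-contained.
data = 'hello world'
chars = sorted(set(data))               # unique characters in the corpus
vocab_size = len(chars)

char_to_ix = {ch: i for i, ch in enumerate(chars)}  # character -> index
ix_to_char = {i: ch for i, ch in enumerate(chars)}  # index -> character
```

The two dictionaries are exact inverses of each other, so encoding a character and decoding it again always returns the original character.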
hidden_size: the number of hidden neurons.
seq_length: how many immediately preceding consecutive states we want our RNN to remember (the length of each training chunk).
lr: the learning rate.
Initialise the parameters:
Initialise the parameters we discussed above (Whh, Wxh, Why, bh, by).
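A minimal sketch of this initialization is below. The specific values (100 hidden neurons, sequence length 25, learning rate 0.1, and the placeholder `vocab_size`) are assumptions for illustration; `vocab_size` really depends on the input text:

```python
import numpy as np

hidden_size = 100   # number of hidden neurons
seq_length = 25     # characters the RNN unrolls over at a time
lr = 1e-1           # learning rate
vocab_size = 65     # depends on the input text; 65 is just a placeholder

# Small random weights break symmetry; biases can start at zero.
Whh = np.random.randn(hidden_size, hidden_size) * 0.01  # hidden -> hidden
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input  -> hidden
Why = np.random.randn(vocab_size, hidden_size) * 0.01   # hidden -> output
bh = np.zeros((hidden_size, 1))                         # hidden bias
by = np.zeros((vocab_size, 1))                          # output bias
```

Scaling the random weights by 0.01 keeps the initial tanh activations away from saturation, which helps gradients flow early in training.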
xs, ys, hs, ps are dictionaries.
xs[t]: At time step t (the t-th character), we represent the character with one-hot encoding: all the elements of the vector are zeros except one, and we find the location of that element using the char_to_ix dictionary. Example: assume our data is ‘abcdef’. We represent ‘a’ as the one-hot vector [1, 0, 0, 0, 0, 0]ᵀ.
This is what lines 25 and 26 in the code above are doing.
ys[t]: At time step t, we store the final output of that RNN cell.
hs[t]: At time step t, we store the hidden state of the current RNN cell.
ps[t]: At time step t, we store the probability of occurrence of each character.
As you can see in the code above, the updates for xs[t], ys[t], hs[t], and ps[t] implement exactly the calculations from Fig 1.
And then finally we calculate the softmax (cross-entropy) loss.
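Putting the dictionaries and the loss together, the forward pass over one training sequence might look like the following sketch. The function name `forward` and its argument list are assumptions; the caches xs, hs, ys, ps match the dictionaries described above:

```python
import numpy as np

def forward(inputs, targets, h_prev, Whh, Wxh, Why, bh, by, vocab_size):
    """Forward pass over one sequence of character indices.

    inputs/targets are lists of indices; returns the total loss and
    the per-step caches (xs, hs, ys, ps) needed for backpropagation.
    """
    xs, hs, ys, ps = {}, {}, {}, {}
    hs[-1] = np.copy(h_prev)   # hidden state carried in from the last chunk
    loss = 0.0
    for t in range(len(inputs)):
        xs[t] = np.zeros((vocab_size, 1))              # one-hot input
        xs[t][inputs[t]] = 1
        hs[t] = np.tanh(Whh @ hs[t - 1] + Wxh @ xs[t] + bh)
        ys[t] = Why @ hs[t] + by                       # unnormalized scores
        ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t]))  # softmax probabilities
        loss += -np.log(ps[t][targets[t], 0])          # cross-entropy term
    return loss, xs, hs, ys, ps
```

Each step adds the negative log probability that the model assigned to the correct next character, so a perfect model would drive the loss toward zero.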