This article is the first in a series on RNNs and LSTMs, and in this one I'll go through the basics of the recurrent neural network.
The only prerequisite for this article is some familiarity with neural networks.
What is a Recurrent Neural Network?
There are many different types of raw data available to us, and correspondingly different types of neural networks suited to each.
In a standard neural network, we have an input layer, a bunch of hidden layers and an output layer. Note that these hidden layers are independent of each other.
That means each hidden layer has its own independent weights (W) and biases (b).
This is where the RNN comes into the picture: we want to retain previous information while processing new information. This stored previous information is termed the memory. In this network, each part of the data is passed through the same set of parameters, which lets the network exploit the sequential structure of the data.
Sharing parameters across time steps also greatly reduces the number of parameters to learn.
Where should we use this awesome network?
Well, since the main speciality of the recurrent network is its memory, this network works really well on sequences of data.
Data such as text and audio come as sequences. For example, in language processing, to predict the next word the network needs some knowledge of the previous words. The same goes for processing an audio spectrogram.
How does it work?
The above-shown figure is an unrolled recurrent neural network. In this context, unrolling simply means writing out the network once for every time step that a part of the data is passed through it.
For example, if we feed a sentence to the network, then the unrolled length of the network is the number of words in that sentence: one step for each word. That is how it keeps track of the sequence.
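To make the idea concrete, here is a trivial sketch of how a sentence maps to unrolled steps (the sentence itself is just a made-up example):

```python
# One unrolled step per word: the network's length matches the sentence length.
sentence = "the cat sat on the mat"
tokens = sentence.split()          # one input x(t) per word
num_steps = len(tokens)            # the RNN is unrolled this many times

for t, word in enumerate(tokens):
    # at step t the network would receive tokens[t] plus the memory from step t-1
    pass

print(num_steps)  # 6
```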
- x is the part of the data fed to the network at each step. For a sentence, that means a single word.
- o is the output at each step. If we are using the network to predict the next word, it will be a probability distribution over the vocabulary. Depending on the task, it is not necessary to spit out a result after every intermediate step.
- s is the memory that is passed on to the next step. It can be written as s(t) = f(W * s(t-1) + U * x(t)), where the function f used for the hidden layer is commonly either tanh or ReLU. Keep in mind that it is not possible for the network to hold all the information from every previous time step.
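The memory update above can be sketched in a few lines of NumPy. The dimensions (a 3-dim state, a 5-dim input, 4 time steps) and the random initialisation are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
state_size, input_size = 3, 5

# Shared parameters, reused at every time step
W = rng.standard_normal((state_size, state_size)) * 0.1  # state -> state
U = rng.standard_normal((state_size, input_size)) * 0.1  # input -> state

def step(s_prev, x):
    """One recurrent update: s(t) = tanh(W @ s(t-1) + U @ x(t))."""
    return np.tanh(W @ s_prev + U @ x)

s = np.zeros(state_size)                    # initial memory
xs = rng.standard_normal((4, input_size))   # a 4-step input sequence
for x in xs:
    s = step(s, x)                          # the SAME W and U at every step

print(s.shape)  # (3,)
```

Note that the loop never creates new parameters; every step reads and reuses the same W and U, which is exactly the parameter sharing described above.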
Training a recurrent network is similar to training a regular neural network, with a slight difference in the backpropagation algorithm. Since we are using the same set of parameters at every time step, we have to backpropagate through time.
This is known as Backpropagation Through Time (BPTT). I will go deeper into it in the following articles.
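As a preview, here is a minimal sketch of BPTT for a toy RNN with a tanh hidden state and a linear readout at the final step. The sizes, the squared-error loss and the random data are all assumptions made up for this example; the key point is that the gradients for W and U are accumulated across every time step:

```python
import numpy as np

rng = np.random.default_rng(1)
H, D, T = 4, 3, 5                       # hidden size, input size, sequence length
W = rng.standard_normal((H, H)) * 0.1   # recurrent weights (shared across time)
U = rng.standard_normal((H, D)) * 0.1   # input weights
V = rng.standard_normal((1, H)) * 0.1   # readout weights
xs = rng.standard_normal((T, D))
target = np.array([1.0])

def forward(W, U, V):
    """Run the RNN forward and return the loss plus every hidden state."""
    states = [np.zeros(H)]
    for x in xs:
        states.append(np.tanh(W @ states[-1] + U @ x))
    y = V @ states[-1]
    loss = 0.5 * np.sum((y - target) ** 2)
    return loss, states, y

loss, states, y = forward(W, U, V)

# Backpropagation through time: walk backwards, accumulating gradients
# into the SAME dW and dU at every step, because the parameters are shared.
dW, dU = np.zeros_like(W), np.zeros_like(U)
ds = V.T @ (y - target)                      # gradient flowing into the last state
for t in range(T, 0, -1):
    da = ds * (1 - states[t] ** 2)           # back through the tanh
    dW += np.outer(da, states[t - 1])
    dU += np.outer(da, xs[t - 1])
    ds = W.T @ da                            # pass gradient to the earlier step

# Sanity check one entry of dW against a central finite difference
eps = 1e-5
W_plus, W_minus = W.copy(), W.copy()
W_plus[0, 0] += eps
W_minus[0, 0] -= eps
num_grad = (forward(W_plus, U, V)[0] - forward(W_minus, U, V)[0]) / (2 * eps)
grad_gap = abs(num_grad - dW[0, 0])
print(grad_gap)  # should be tiny
```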
Recurrent neural networks are heavily used in the natural language processing domain, along with speech recognition. Multiple open-source modules devoted to language processing have been created across programming languages, so anyone can easily build something and use it on various platforms.
Now I'm going to list the major applications of RNNs.
You must have stumbled across a foreign sentence and translated it using Google Translate; behind the scenes, an RNN is at work.
Given a sequence of words in a particular language, each token passes through a pipeline architecture, and at the end we obtain the translated tokens in the appropriate order.
Alexa and Siri are able to understand your voice because of the complex speech recognition systems behind them.
Our voice is converted into a spectrogram, where the system makes sense of the highs and lows in frequency to understand it and generate a reply. Amazing, isn't it!?
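A spectrogram is just the magnitude of short-time Fourier transforms taken over sliding windows of the waveform. Here is a plain-NumPy sketch on a synthetic 440 Hz tone; the sample rate, frame size and hop size are arbitrary choices for illustration:

```python
import numpy as np

sr = 8000                                   # sample rate in Hz (an assumption)
t = np.arange(sr) / sr                      # one second of audio
wave = np.sin(2 * np.pi * 440 * t)          # a 440 Hz test tone

frame, hop = 256, 128                       # window and hop sizes (assumptions)
window = np.hanning(frame)

# Slice the waveform into overlapping windowed frames
frames = [wave[i:i + frame] * window
          for i in range(0, len(wave) - frame + 1, hop)]

# Magnitude spectrogram: |FFT| of each frame, positive frequencies only
spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))

print(spec.shape)  # (num_frames, frame // 2 + 1)
```

Each row of `spec` is one time slice, and each column is a frequency bin; it is these rows that a speech recognition RNN would consume, one per time step.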
In image captioning, a recurrent neural network is used along with a convolutional network to generate a description of a particular image.
It works like this:
- A CNN is used to tag the different objects in the image.
- These tagged objects are then passed to an RNN to generate an appropriate description.
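The two steps above can be sketched as a toy pipeline. Everything here is hypothetical: the image features stand in for a real CNN's output, the vocabulary is made up, and the weights are random, so the generated words are meaningless; the point is only the structure, where the CNN's features seed the RNN decoder's initial memory:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical vocabulary and sizes -- purely illustrative
vocab = ["<start>", "<end>", "two", "young", "girls",
         "are", "playing", "with", "lego", "toy"]
V, H, F = len(vocab), 8, 16

image_features = rng.standard_normal(F)         # stand-in for a CNN's output
Wf = rng.standard_normal((H, F)) * 0.1          # features -> initial state
Wh = rng.standard_normal((H, H)) * 0.1          # recurrent weights
We = rng.standard_normal((H, V)) * 0.1          # word embeddings (one column per word)
Wo = rng.standard_normal((V, H)) * 0.1          # state -> vocabulary scores

def caption(features, max_len=10):
    """Greedy decoding: emit one word per RNN step until <end> or max_len."""
    s = np.tanh(Wf @ features)                  # CNN output seeds the memory
    word = vocab.index("<start>")
    out = []
    for _ in range(max_len):
        s = np.tanh(Wh @ s + We[:, word])
        word = int(np.argmax(Wo @ s))           # pick the highest-scoring word
        if vocab[word] == "<end>":
            break
        out.append(vocab[word])
    return out

words = caption(image_features)
print(words)  # with trained weights this could read like the caption below
```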
The end result:
“two young girls are playing with lego toy.”
Thank You for taking the time to read this article.
Have a fantastic day 🙂
Introduction to RNN and LSTM (Part 1) was originally published in Becoming Human: Artificial Intelligence Magazine on Medium.