Demis Hassabis is a British AI researcher and neuroscientist, best known as the founder and CEO of DeepMind, an Alphabet subsidiary that made headlines after beating the Go world champion Lee Sedol in a five-game match.
A child prodigy, Demis excelled at chess from childhood, reaching Master standard by the age of 13. He studied Computer Science at the University of Cambridge and represented the university in varsity chess matches. He worked closely with the famous games designer Peter Molyneux at Bullfrog Productions and Lionhead Studios, where he served as lead AI programmer. He then went on to found his own studio, Elixir Studios, which specialized in strategy games.
While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems. Because the methods we’ve used are general-purpose, our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from medical diagnostics to climate modelling to smartphone assistants. We’re excited to see what we can use this technology to tackle next.
After his career in the video games industry, he returned to academia to obtain a PhD in cognitive neuroscience from University College London, where he co-authored several influential papers and did groundbreaking research on memory and imagination in the human brain.
While doing his postdoctoral research at UCL, he met Shane Legg, who, along with Mustafa Suleyman, would become his co-founders at DeepMind.
DeepMind started with the mission of combining neuroscience with machine learning to build general-purpose learning algorithms, working towards an Artificial General Intelligence (AGI).
DeepMind’s first major breakthrough came when it trained a computer to play Atari games using only the raw pixels on the screen as input. Soon after this achievement, Google acquired the company for £400 million. After the acquisition, the company notched several more milestones, including AlphaGo, a program that defeated world champion Lee Sedol at the complex game of Go. The victory was considered so unlikely that Lee Sedol had wondered beforehand whether he would beat AlphaGo 5–0 or 4–1, before losing the match 4–1 himself.
Currently, DeepMind is focused on using its AI capabilities to tackle the difficult problem of protein folding. Its tool for this, called AlphaFold, has already won prizes in the field.
Demis continues to serve as CEO of DeepMind, applying AI to hard scientific problems and speaking publicly on issues such as ethics in Artificial Intelligence.
Geoffrey Everest Hinton is a British-Canadian cognitive psychologist and computer scientist, most noted for his pioneering work on artificial neural networks. One of his key contributions is popularizing the use of backpropagation in neural networks, one of several disruptive innovations he has made in the field of Deep Learning. Backpropagation optimizes the internal weights of a neural network by propagating the error at each layer backwards through the network.
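To make the idea concrete, here is a minimal sketch of backpropagation for a network with one hidden layer. The data, layer sizes, and learning rate are invented for illustration; this is a toy version of the general technique, not Hinton’s original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(8, 3))              # 8 samples, 3 input features
y = rng.normal(size=(8, 1))              # regression targets

W1 = rng.normal(scale=0.1, size=(3, 4))  # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(4, 1))  # hidden -> output weights
lr = 0.1

def mse():
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

loss_before = mse()
for step in range(200):
    # Forward pass
    h = np.tanh(X @ W1)                  # hidden activations
    err = h @ W2 - y                     # error at the output layer

    # Backward pass: propagate the error layer by layer (chain rule)
    grad_W2 = h.T @ err / len(X)
    grad_h = (err @ W2.T) * (1 - h ** 2)  # through the tanh nonlinearity
    grad_W1 = X.T @ grad_h / len(X)

    # Adjust the internal weights against the error gradient
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
loss_after = mse()
```

After a few hundred updates the network’s squared error on the training data falls, which is the whole point of the algorithm: each layer receives a gradient signal telling it how to change its weights to reduce the output error.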
According to an interview he did with Andrew Ng, his interest in AI started with a chance remark. One day in high school, a friend came up to him and said that the brain uses holograms, which got him thinking about how the human brain might actually work. Coming from a family of scientists whose lineage includes George Boole, Geoffrey earned a BA in experimental psychology from the University of Cambridge, along the way studying physiology, physics, and philosophy.
In science, you can say things that seem crazy, but in the long run they can turn out to be right. We can get really good evidence, and in the end the community will come around.
As his interest in AI deepened, he decided to pursue a PhD in AI at the University of Edinburgh under the guidance of Christopher Longuet-Higgins. Although Geoffrey was very interested in working on neural networks, his ideas were largely dismissed by the research community in the UK at the time.
Geoffrey later moved to the US in pursuit of a career in AI. It was while working as a professor at Carnegie Mellon University that he did his seminal research on applying the backpropagation algorithm to multi-layer neural networks.
Another innovation Hinton is known for is the Boltzmann machine: a large, densely connected stochastic network in which only some of the nodes are visible to the “outside world”, together with a learning algorithm for training it. It is one of the approaches used to learn hidden representations from data.
In 2001, he was made a Fellow of the Royal Society for his work on artificial neural networks, including research comparing the effects of brain damage to those of damage in neural networks. In 2018, he was appointed to the Order of Canada for his contributions to AI, and was also made a Fellow of the Royal Society of Canada.
In 2012, while working at the University of Toronto, he founded an AI startup called DNNresearch Inc. with two of his students. The startup was quickly acquired by Google, giving Hinton’s AI research the backing of substantial funding.
Since 2013 he has divided his time between Google (Google Brain) and the University of Toronto. In 2017, he co-founded the Vector Institute in Toronto and became its Chief Scientific Advisor.
Hinton was awarded the 2018 Turing Award alongside Yoshua Bengio and Yann LeCun for their work on deep learning. The three are often referred to as the “Godfathers of AI” and the “Godfathers of Deep Learning”.
Yann LeCun is a French-American computer scientist working primarily in the fields of computer vision, deep learning, and computational neuroscience. He is most famous for inventing Convolutional Neural Networks and for his immense contributions to the field of computer vision. He was also one of the researchers, along with Geoffrey Hinton, who developed the now widely used backpropagation algorithm.
Born in Paris in the 1960s, Yann was always interested in learning about intelligence. While completing his engineering diploma at ESIEE Paris, he read an article on perceptrons, which piqued his curiosity and made him eager to learn more about them.
He earned a PhD in Computer Science from Université Pierre et Marie Curie in 1987; during his doctoral work he proposed an early form of the backpropagation learning algorithm for neural networks. Geoffrey Hinton read this work, and Yann got the chance to join him in a postdoc position at the University of Toronto.
Our intelligence is what makes us human, and AI is an extension of that quality.
While at the University of Toronto, he began thinking about how neural networks could be applied to images, ideas that would become the basis of Convolutional Neural Networks. In 1988, he joined the Adaptive Systems Research Department at AT&T Bell Laboratories, where his first task was to design a new optical character recognition system. The algorithm he designed produced better results than existing ones, and was the first version of a Convolutional Neural Network (CNN).
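The core operation behind a CNN can be shown in a few lines: a small learned filter slides over the image, applying the *same* weights at every position. The image and kernel values below are arbitrary illustrations of the mechanism, not LeCun’s OCR network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weight sharing: the same kernel weights at every location
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # a simple 5x5 intensity ramp
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])             # responds to horizontal change
feature_map = conv2d(image, edge_kernel)          # 4x4 map of filter responses
```

Because the same few weights are reused across the whole image, a CNN needs far fewer parameters than a fully connected network and detects a feature regardless of where it appears, which is exactly what makes the approach so effective for character recognition.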
Since there were no standardized programming environments or workstations in the 1980s, LeCun, along with his friend Léon Bottou, started writing a software system called SN to experiment with machine learning and neural networks. It was built around a home-grown Lisp interpreter that eventually morphed into the Lush language. Most of his neural network experiments, including those at AT&T, were done on this system.
The bank check recognition system that he helped develop at AT&T was widely deployed by NCR and other companies, reading over 10% of all the checks in the US in the late 1990s and early 2000s.
In 1996, after the breakup of AT&T into three companies, he joined AT&T Labs-Research as head of the Image Processing Research Department. Foreseeing the flood of document and image data that would come with the spread of the internet, LeCun worked on an image and document compression algorithm called DjVu. It is used by many websites, notably the Internet Archive, to distribute scanned documents.
He is a Silver Professor at the Courant Institute of Mathematical Sciences at New York University, and Vice President and Chief AI Scientist at Facebook. LeCun, together with Geoffrey Hinton and Yoshua Bengio, is referred to by some as one of the “Godfathers of AI” and “Godfathers of Deep Learning”.