Testing the waters of Bayesian Neural Networks (BNNs) | by Faraz Gerrard Jamal | Nov, 2020

November 26, 2020
in Neural Networks

Intelligence boils down to two things for me:

1. Acting when certain/necessary.

2. Not acting/staying pensive when uncertain.

Point (2) is what we are going to dive into!

Uncertainty is inherent everywhere; nothing is error-free. So it is frankly quite surprising that most Machine Learning projects don't aim to gauge uncertainty at all!

As a not-so-real-world example, consider a treatment recommendation algorithm. We feed a patient's medical data into our network. A plain NN (Neural Network) would just output a single class, say treatment type 'C'. With a BNN, you would be able to see the whole distribution of the output and gauge the confidence of your output label based on it. If the standard deviation of your output distribution is low, we are good to go with that output label. Otherwise, chuck it; we need human intervention.
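
To make that gating idea concrete, here is a minimal sketch. Everything in it is made up for illustration: the "BNN predictions" are simulated with random class probabilities, and the threshold is an arbitrary placeholder you would tune for the application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are 100 Monte Carlo predictions (class probabilities for
# treatments A, B, C) from a BNN, one per sampled set of weights.
mc_predictions = rng.dirichlet(alpha=[2.0, 2.0, 6.0], size=100)

mean_probs = mc_predictions.mean(axis=0)   # predictive mean per class
std_probs = mc_predictions.std(axis=0)     # spread = predictive uncertainty
predicted = int(np.argmax(mean_probs))

STD_THRESHOLD = 0.15  # hypothetical cut-off
if std_probs[predicted] < STD_THRESHOLD:
    decision = f"treatment {'ABC'[predicted]}"
else:
    decision = "defer to a human"
```

The point is only the shape of the logic: a distribution of predictions in, a confidence-gated decision out.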

So, BNNs differ from plain neural networks in that their weights are assigned a probability distribution instead of a single value or point estimate. Hence, we can assess the uncertainty in the weights to estimate the uncertainty in predictions. If your input parameters are not stable, how can you expect your output to be?! Makes sense, eh?

Say you have a parameter. You try to estimate its distribution, and using the distribution's high-probability points you estimate the output value of your neural network. By high probability, I mean the more probable points. (The mean is the most probable point in a Normal distribution.)
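
A toy sketch of that idea, with a hypothetical one-parameter "network": when we sample the weight from its distribution, samples naturally land near the high-probability points (the mean), so the averaged output is dominated by them.

```python
import numpy as np

rng = np.random.default_rng(1)

def tiny_model(x, w):
    # A toy one-parameter "network": sigmoid(w * x).
    return 1.0 / (1.0 + np.exp(-w * x))

# Suppose the weight's distribution is Normal(mean=0.8, std=0.3).
w_samples = rng.normal(loc=0.8, scale=0.3, size=10_000)
outputs = tiny_model(x=2.0, w=w_samples)

# Monte Carlo average of per-sample predictions.
prediction = outputs.mean()
```

Comparing `prediction` with the point estimate `tiny_model(2.0, 0.8)` shows how the distribution's spread shifts the averaged output away from the single-value answer.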

Equation #1:    p(y | x, D) = ∫ p(y | x, w) · p(w | D) dw

Looking at Equation #1,

On the left hand side, we have the probability distribution of the output classes, which we get after feeding in our data ‘x’ to our model which has been trained on Dataset ‘D’.

On the right hand side, we have an integral. The middle term inside the integral is the posterior p(w|D), which is a probability distribution over the weights GIVEN that we have seen the data.

Think of this integral as an ensemble, where each unique model is identified by a unique set of weights (because different weights mean different models). The greater the posterior probability for a unique set of weights (and therefore for that unique model), the greater the weight given to that model's prediction. Hence, each model's respective prediction is weighed by the posterior probability of its unique set of weights!

Sounds good, eh? Just search the whole weight space and weigh in the good (high-probability) parts. Wondering why people don't do this? Even a simple 5–7 layer NN has around a million weights, so it is just not computationally feasible to construct an analytical solution for the posterior p(w|D) in NNs.
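
That "around a million weights" claim is easy to sanity-check. Counting parameters for a hypothetical fully connected net (layer sizes chosen here purely for illustration):

```python
# Hypothetical fully connected net:
# 784 inputs -> four hidden layers of 512 -> 10 outputs.
layer_sizes = [784, 512, 512, 512, 512, 10]

total = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    total += n_in * n_out + n_out   # weights + biases per layer

print(total)  # about 1.2 million parameters
```

An exact posterior would be a joint distribution over all of those dimensions at once, which is why the analytical route is hopeless.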

So the next step is that we need to approximate the posterior distribution. We cannot get the exact posterior, but we surely can choose another distribution that replicates it to a good extent.

We can do this using a variational distribution whose functional form is known! By 'functional form is known', I mean it is one of the standard statistical distributions that can be described with just a few parameters, like the Normal distribution (we need just two parameters, the mean and variance, to denote a Normal distribution). So we are essentially trying to form a Normal-distribution replica of the posterior. Even though the actual posterior might not be Normal, we are still going to replicate it as well as we can using the variational distribution, which is a Normal distribution in our case. You can pick any standard statistical distribution for the variational distribution.
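
In code, "a few parameters per weight" looks something like the following sketch (the values are made up; this is one common way such a variational Normal is sampled, not the author's specific method):

```python
import numpy as np

rng = np.random.default_rng(2)

# Variational distribution for each weight: Normal(mu, sigma).
# Only two numbers per weight are learned, however complex the
# true posterior actually is.
mu = np.array([0.5, -1.2, 0.1])
sigma = np.array([0.3, 0.1, 0.8])

# Draw one set of weights: w = mu + sigma * eps, with eps ~ N(0, 1).
# Writing the sample this way keeps it differentiable with respect
# to mu and sigma, which lets gradient descent fit them.
eps = rng.standard_normal(size=mu.shape)
w_sample = mu + sigma * eps
```

Each forward pass can draw a fresh `w_sample`, which is what turns a single network into the ensemble described earlier.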

How do we go about replicating it? I'll cover that and more in the next article. Hope this works up an appetite for the world of Bayesian Machine Learning! It's a beautiful topic, and one that still has a lot of exploring left to do.

So yeah, see you!

Credit: BecomingHuman By: Faraz Gerrard Jamal
