
Recommender System Metrics — Clearly Explained | by Chaitanya Belhekar | Aug, 2020

September 1, 2020
in Neural Networks

Let’s say we have some users and some items, such as movies, songs, or products. Each user is interested in some of those items. We recommend a few items (say, k of them) to each user. Now, how do we determine whether our recommendations to each user were effective?

In a classification problem, we usually use precision and recall as evaluation metrics. Similarly, for recommender systems we use a metric that combines the two: Mean Average Precision (MAP), specifically MAP@k, where k recommendations are provided per user.


Let’s unpack MAP. The M is simply the average (mean) of the Average Precision (AP) scores of all users; in other words, we take the mean of the average precisions, hence Mean Average Precision. If we have 1,000 users, we compute the AP for each user, sum them, and divide by 1,000. That is the MAP.

So what is average precision? Before that, let’s understand recall (R) and precision (P).

Precision = (number of recommended items that are relevant) / (number of items recommended)
Recall = (number of recommended items that are relevant) / (total number of relevant items)

There is usually an inverse relationship between recall and precision. Precision asks: of the recommendations we provided, how many are relevant? Recall asks: of all the relevant items, how many did our recommendations cover?

Let’s understand the definitions of recall@k and precision@k. Assume we provide 5 recommendations in this order: 1 0 1 0 1, where 1 means relevant and 0 means irrelevant, and that there are 3 relevant items in total. Then precision@3 is 2/3, precision@4 is 2/4, and precision@5 is 3/5, while recall@3 is 2/3, recall@4 is 2/3, and recall@5 is 3/3.
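As a quick check, here is a small Python sketch of precision@k and recall@k for that example; the function and variable names are my own, not from the original post:

    def precision_at_k(relevance, k):
        """Fraction of the top-k recommendations that are relevant."""
        return sum(relevance[:k]) / k

    def recall_at_k(relevance, k, total_relevant):
        """Fraction of all relevant items captured by the top-k recommendations."""
        return sum(relevance[:k]) / total_relevant

    relevance = [1, 0, 1, 0, 1]   # 1 = relevant, 0 = irrelevant
    for k in (3, 4, 5):
        print(k, precision_at_k(relevance, k), recall_at_k(relevance, k, total_relevant=3))
    # precision: 0.667, 0.5, 0.6   recall: 0.667, 0.667, 1.0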

We don’t really need the full math of average precision (AP) to use it, but we do need to know this:

  • we can recommend at most k items for each user
  • it is better to submit all k recommendations because we are not penalized for bad guesses
  • order matters, so it’s better to submit more certain recommendations first, followed by recommendations we are less sure about

So basically, we select the k best recommendations (in order) and that’s it.


Here’s another way to understand average precision. Think of it this way: you type something into Google and it shows you 10 results. It’s best if all of them are relevant. But if only some are relevant, say five of them, then it’s much better if the relevant ones are shown first. It would be bad if the first five were irrelevant and the good ones only started from the sixth, wouldn’t it? The AP score reflects this. They could have named it ‘order-matters recall’ instead of average precision.

But if you insist, let’s dive into the math. If we are asked to recommend N items, and the number of relevant items in the full space of items is m, then:

Average Precision: AP@N = (1/m) × Σ (from k = 1 to N) of P(k) × rel(k)

where P(k) is the precision at the kth recommendation, and rel(k) is an indicator that equals 1 if the kth recommendation was relevant and 0 otherwise.
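Here is a minimal Python sketch of that formula, assuming the ranked recommendations are given as a list of 0/1 relevance flags (the function name is mine, not from the original post):

    def average_precision_at_n(relevance, m):
        """AP@N = (1/m) * sum over k of P(k) * rel(k).

        relevance: list of 0/1 flags for the N ranked recommendations.
        m: total number of relevant items for this user.
        """
        hits = 0
        score = 0.0
        for k, rel in enumerate(relevance, start=1):
            if rel:
                hits += 1
                score += hits / k   # P(k), counted only when rel(k) = 1
        return score / m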

Suppose there are 5 relevant items (m = 5) and we make 10 recommendations (N = 10) as follows: 1 0 1 0 0 1 0 0 1 1. Let’s calculate the average precision here.
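Working through the formula by hand: the relevant items sit at positions 1, 3, 6, 9, and 10, so

AP@10 = (1/5) × (1/1 + 2/3 + 3/6 + 4/9 + 5/10) ≈ (1/5) × 3.11 ≈ 0.62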

Compare this calculation with the formula above and the intuition becomes clear. Now, to see that AP takes the order into account, as mentioned earlier, let’s provide another set of recommendations, this time with the relevant ones placed first: 1 1 1 0 1 1 0 0 0 0.
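Applying the formula to this new ordering: the relevant items now sit at positions 1, 2, 3, 5, and 6, so

AP@10 = (1/5) × (1/1 + 2/2 + 3/3 + 4/5 + 5/6) ≈ (1/5) × 4.63 ≈ 0.93

which is much higher than the 0.62 from before, even though both lists contain the same five relevant items.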

MAP rewards front-loaded relevant recommendations.

To summarize, MAP computes the mean of the Average Precision (AP) over all the users of a recommendation system. AP takes a ranked list of k recommendations and compares it against the list of items that are actually relevant for the user. It rewards you for having many relevant recommendations in the list, and for putting the most relevant ones at the top.
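To tie it together, here is a minimal sketch of MAP@k over several users, assuming we have each user’s ranked recommendations and their set of relevant items; the names and toy data are illustrative, not from the original post:

    def apk(recommended, relevant, k):
        """Average precision at k for a single user.

        recommended: list of item IDs in ranked order.
        relevant: set of item IDs that are actually relevant for this user.
        """
        hits = 0
        score = 0.0
        for i, item in enumerate(recommended[:k], start=1):
            if item in relevant:
                hits += 1
                score += hits / i
        # divide by m as in the formula above; some implementations use min(m, k)
        return score / len(relevant) if relevant else 0.0

    def mapk(all_recommended, all_relevant, k):
        """Mean of the per-user average precisions (MAP@k)."""
        aps = [apk(rec, rel, k) for rec, rel in zip(all_recommended, all_relevant)]
        return sum(aps) / len(aps)

    # toy example with two users and made-up item IDs
    recs = [["a", "b", "c", "d"], ["x", "y", "z", "w"]]
    rels = [{"a", "c"}, {"w"}]
    print(mapk(recs, rels, k=4))   # (0.833 + 0.25) / 2 ≈ 0.54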

References:

Math formula images are taken from the following blog.

Credit: BecomingHuman By: Chaitanya Belhekar
