
Univariate Linear Regression with mathematics in Python | by Ee Yeo Keat | Jan, 2021

January 12, 2021
in Neural Networks

There are a few optimization algorithms for finding a local minimum in regression. Gradient descent is an iterative algorithm used to optimize the learning; its purpose is to minimize the value of the cost function. Now, let’s use gradient descent to optimize the cost function with some learning rate, assuming no regularization is taken into consideration. Recall that the parameters of our model are the theta (θ) values.
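Written out for our two parameters, each gradient descent step performs the following simultaneous update with learning rate α, where hθ(x) = θ0 + θ1·x is the hypothesis:

```latex
\theta_0 := \theta_0 - \alpha \cdot \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)

\theta_1 := \theta_1 - \alpha \cdot \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}
```

Both parameters must be updated from the same old values, which is why the code later computes temp0 and temp1 before returning them.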

The hypothesis can be described by the typical linear equation hθ(x) = θ0 + θ1·x.


The cost function is

J(θ0, θ1) = (1 / 2m) · Σᵢ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾)²,

where m is the number of data points and the sum runs over i = 1, …, m.

As a reminder, the objective of linear regression is to minimize the cost function. Thus, the goal is to minimize J(θ0, θ1), and we will fit the linear regression parameters to the data using gradient descent. Two main functions are defined, derivative_J_theta() and gradient_descent(), to perform the gradient descent algorithm.

Part of notes from lecture course conducted by Andrew Ng.

As shown in the code below, the computation mainly uses the equations expressed mathematically above. The derivative_J_theta() function computes the updates for θ0 and θ1. Next, we measure the accuracy (loss) of our hypothesis function using the cost function.

Part of notes from lecture course conducted by Andrew Ng.
def derivative_J_theta(x, y, theta_0, theta_1, learning_rate):
    # Accumulate the partial derivatives of J with respect to theta_0 and theta_1
    delta_J_theta0 = 0
    delta_J_theta1 = 0

    m = x.shape[0]
    for i in range(m):
        error = ((theta_1 * x[i]) + theta_0) - y[i]
        delta_J_theta0 += error
        delta_J_theta1 += error * x[i]

    # Simultaneous update: theta_j := theta_j - learning_rate * (1/m) * dJ/dtheta_j
    # (note: the 1/m factor is applied exactly once per gradient)
    temp0 = theta_0 - (learning_rate * ((1/m) * delta_J_theta0))
    temp1 = theta_1 - (learning_rate * ((1/m) * delta_J_theta1))

    return temp0, temp1

def gradient_descent(x, y, learning_rate, starting_theta_0, starting_theta_1, iteration_num):
    # History of the parameters and the cost at every iteration
    store_theta_0 = np.empty([iteration_num])
    store_theta_1 = np.empty([iteration_num])
    store_j_theta = np.empty([iteration_num])

    theta_0 = starting_theta_0
    theta_1 = starting_theta_1

    for i in range(iteration_num):
        theta_0, theta_1 = derivative_J_theta(x, y, theta_0, theta_1, learning_rate)
        store_theta_0[i] = theta_0
        store_theta_1[i] = theta_1
        # Cost J(theta_0, theta_1) = (1 / (2m)) * sum of squared errors
        store_j_theta[i] = (1/(2*x.shape[0])) * np.sum((((theta_1 * x) + theta_0) - y)**2)

    return theta_0, theta_1, store_theta_0, store_theta_1, store_j_theta

We can now train the model with a small number of iterations first and observe the result.

x = X
y = Y
learning_rate = 0.01
iteration_num = 10
starting_theta_0 = 0
starting_theta_1 = 0

theta_0, theta_1, store_theta_0, store_theta_1, store_j_theta = gradient_descent(x, y, learning_rate, starting_theta_0, starting_theta_1, iteration_num)

print("theta_0 : %f" % theta_0[0])
print("theta_1 : %f" % theta_1[0])

The values obtained are θ0 = 3.0219 and θ1 = 0.6846.
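As a quick sanity check (not part of the original article), the same batch update rule can be verified on synthetic data with a known slope and intercept. The data and names below are purely illustrative:

```python
import numpy as np

def gradient_step(x, y, theta_0, theta_1, lr):
    # One simultaneous batch gradient descent update of both parameters
    error = (theta_1 * x + theta_0) - y
    new_theta_0 = theta_0 - lr * error.mean()
    new_theta_1 = theta_1 - lr * (error * x).mean()
    return new_theta_0, new_theta_1

# Synthetic noiseless data: y = 2x + 1, so gradient descent should
# recover a slope near 2 and an intercept near 1.
x = np.linspace(0, 1, 50)
y = 2 * x + 1

theta_0, theta_1 = 0.0, 0.0
for _ in range(5000):
    theta_0, theta_1 = gradient_step(x, y, theta_0, theta_1, lr=0.1)

print(round(theta_1, 2), round(theta_0, 2))  # → 2.0 1.0
```

With a suitable learning rate and enough iterations, the recovered parameters match the generating line to high precision, which is a cheap way to confirm the update rule is implemented correctly before using real data.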

Let’s plot the line to see how well the hypothesis fits our data.

plt.scatter(X[:,0], Y[:,0])
plt.plot(X, (theta_1 * X) + theta_0, c='green')  # gradient descent fit
plt.plot(X, X.dot(ne_theta), c='red')            # normal equation fit
plt.title('r=%f' % ne_theta[0,0])
plt.show()

The green line indicates our prediction, while the red line is the normal equation fit.

Almost there! It seems that more iterations will produce a better fit. Let’s increase the iteration number to 100.

iteration_num = 100

The values are now θ0 = 3.3572 and θ1 = 0.9560. Plot the graph again.

Great! The newly trained parameters θ0 and θ1 are both optimized, and the fitted line almost aligns with the best-fit red line, indicating that our gradient descent algorithm is working well.

From this example, we can clearly understand the mathematical fundamentals behind the univariate linear regression algorithm, which can be very useful for performing prediction in machine learning applications.

Additional information about the differences between gradient descent and the normal equation is summarized in the short notes.

Part of notes from lecture course conducted by Andrew Ng.
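For reference, ne_theta in the plotting code above comes from the normal equation, which an earlier part of the article computes (code not shown in this excerpt). A minimal standalone sketch, using illustrative synthetic data and a bias column of ones so that θ = [intercept, slope]:

```python
import numpy as np

# Illustrative data: y = 2x + 1
x = np.linspace(0, 1, 50)
y = 2 * x + 1

# Design matrix with a bias column of ones
X_b = np.c_[np.ones_like(x), x]

# Normal equation: theta = (X^T X)^-1 X^T y
theta = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y
print(theta)  # → approximately [1. 2.]
```

Unlike gradient descent, the normal equation needs no learning rate or iteration count, but the matrix inversion becomes expensive when the number of features is large.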

Feel free to discuss this simple yet useful analysis technique further.

Credit: BecomingHuman By: Ee Yeo Keat
