NikolaNews

The development of Morality through Genetic Algorithms

March 29, 2019
in Neural Networks

Credit: BecomingHuman

Jacques-Louis David – The Death of Socrates, Paris, 1787

Morality is a big deal.


We are usually born with an innate sense of what’s right and what’s wrong, and we refine the way we think and behave thanks to our peers and to past traditions.

But what is the source of morality? Does it come from a superior being, or is it something we developed ourselves? Is it subjective or objective? Relative or absolute? Can a machine have a sense of morality?

This topic has always been extremely important. The way we think about morality itself is at the very base of some of the most important dilemmas of our time. To put things in perspective, even the apparent existence of morality is often used as proof of God (don’t worry, this article is not about theism).


We are also on the verge of creating computers and machines that will have to make moral choices. Is a self-driving car morally obligated to save a pedestrian if that means endangering the driver? Is it even possible to somehow equip an algorithm with morality?

To do so we would first need to define morality (and no, unfortunately it’s not as simple as “God says so, done”).

To prepare for this article I read books and watched debates. I found the recent discussions between Jordan Peterson and Sam Harris on this very topic particularly interesting. Their vast knowledge of the subject is a useful way to summarize two of the most common competing positions.

From left, Sam Harris and Jordan Peterson

Peterson doesn’t define morality in simple terms. I won’t quote him directly, and I’ll try my best to avoid strawmanning him. What I took from his view is that our morality comes from past traditions and stories. These stories, which according to Peterson come mainly from the Judeo-Christian tradition, encode the wisdom of our ancestors and survived until today because they helped us prosper; we have been co-selected together with them. He also agrees with Dostoevsky’s point in Crime and Punishment that pure utilitarianism and rationalism must be avoided, and can only be avoided by believing in a supreme being that can impose judgement.

Sam Harris takes a very different approach. He compares morality and behaviour to a game of chess: we start with a set of rules and goals, and we develop strategies and evaluate each move we make. If a move gets us closer to the goal, it’s a good move. If we change a rule or the goal, strategies and good moves change accordingly. The evaluation of each move is not subjective, since it can be measured deterministically. This consequentialist approach is a common framework in secular morality. We can start from a goal defined as well-being and derive general rules from it, such as “life is generally preferable to death” (a simple one, since death is the absence of being and the first thing to be avoided). This is usually paired with moral relativism (not to be confused with subjective morality), where the context of an action matters: throwing a bucket of icy water on someone is morally wrong for obvious reasons, but if that person is on fire it becomes an objectively good moral action.
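Harris’s chess analogy can be captured in a few lines: an action is judged purely by the change in well-being it causes, so the same action scores differently in different contexts. This is a minimal sketch of the idea with illustrative numbers of my own, not a quote of any formal model:

```python
# Consequentialist scoring: an action is judged only by the change
# in well-being it causes, so context determines its moral value.
def moral_score(well_being_before: float, well_being_after: float) -> float:
    return well_being_after - well_being_before

# Same action ("throw a bucket of icy water on someone"), two contexts:
# on a comfortable person it lowers well-being (negative score, bad);
# on a person who is on fire it raises well-being (positive score, good).
print(moral_score(0.9, 0.4))  # negative in this context
print(moral_score(0.1, 0.7))  # positive in this context
```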

My personal opinion is that what Peterson says is very hard to test, and that is a serious criticism given that I’m directing it at an evolutionist. I can follow his reasoning, but there is no proof that our moral values really come from the relatively recent stories of religion rather than from several millennia of natural selection. Simple notions such as “killing is wrong” and many other basic moral intuitions are common even in African tribes and very ancient civilizations. Even dogs and monkeys have a sense of morality, one far more developed in the latter (dogs, unlike monkeys, can’t perceive fairness in the quantity of a reward, only in its presence or absence). We don’t actually descend from monkeys, but they are generally closer than we are to our common primate ancestors, and that pattern suggests morality grows stronger with evolution and the development of the brain.

What’s more troubling to me, however, is the very idea that without a moral sense imposed from an external source we can’t be moral at all. That’s one of the main reasons why I’m writing this article. After all, most people have an inner voice, a Socratic daemon that is somehow innate, that comes before tradition or any education.

I started thinking and running simulations in my head. Imagine our world 300,000 years ago. Mankind has gradually started to evolve from primates and can probably already feel affection for offspring and peers, as most mammals do. Imagine a population of purely random individuals: some cooperate, some prefer to stay alone; some steal and kill, some are compassionate and helpful. Wait a few generations, and the individuals who like to stay alone will most likely die before mating, while those who fight in a group or hold a role in society will survive. Violent individuals who kill without remorse, even if they somehow manage to survive, will give birth to violent groups that kill each other and have fewer chances to propagate their genes.

That’s natural selection.

This was the intuition; now I needed to test it. Can a framework based purely on a goal and a set of rules produce moral individuals through natural selection alone?

To do this I used a genetic algorithm and decided to model a group of individuals using personality traits. The full code and the simulation itself can be found here. A genetic algorithm is a metaheuristic inspired by the process of natural selection, a perfect fit for my goal.

Base steps of a genetic algorithm
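The base steps above can be sketched as a minimal genetic-algorithm loop. This is a generic Python illustration (the function names and parameters are my own, not taken from the author’s repository):

```python
import random

def genetic_algorithm(pop_size, n_genes, fitness, generations):
    # 1. Initialization: a population of random gene vectors in [0, 1]
    population = [[random.random() for _ in range(n_genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Evaluation + 3. Selection: the fitter half become parents
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]
        # 4. Crossover: each child takes every gene from one of two parents
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            # 5. Mutation: perturb one random gene, clamped to [0, 1]
            i = random.randrange(n_genes)
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

With a toy fitness function such as `sum`, the genes drift upward generation by generation; the simulation described below applies the same selection pressure to personality traits instead.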

I started with a random group of 200 individuals with completely random personality traits. I decided to use the well-known Big Five, just to give the traits some meaning, though it doesn’t really matter how the labels are named. Then I defined the fitness function (a function that evaluates the well-being of an individual) by giving each trait a risk factor. There are many studies on the correlation between the Big Five and risk factors, but again, there is no need to attach meaning to the labels. In this case, however, I considered an individual with low Conscientiousness and low Agreeableness to be at high risk. I also included the possibility of a fatal incident, which is more likely when those two traits are very low. Finally, individuals who die or have low well-being do not propagate their genes to future generations.
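As a rough sketch of what such a fitness function might look like (the trait weights, the 0.5 baseline, and the death probability below are illustrative guesses of mine, not the values used in the actual simulation):

```python
import random

TRAITS = ["Openness", "Conscientiousness", "Extraversion",
          "Agreeableness", "Neuroticism"]

# Hypothetical risk weights (negative = protective): low Conscientiousness
# and low Agreeableness raise risk the most.
RISK_WEIGHT = {"Openness": 0.1, "Conscientiousness": -0.4,
               "Extraversion": -0.1, "Agreeableness": -0.3,
               "Neuroticism": 0.3}

def risk(individual):
    """Aggregate risk factor for an individual (dict of trait -> [0, 1])."""
    r = 0.5 + sum(w * individual[t] for t, w in RISK_WEIGHT.items())
    return min(1.0, max(0.0, r))

def well_being(individual):
    """Fitness: well-being is simply the inverse of risk."""
    return 1.0 - risk(individual)

def death_incident(individual, rng=random):
    """A fatal incident is more likely when Conscientiousness and
    Agreeableness are both very low."""
    danger = (1 - individual["Conscientiousness"]) * (1 - individual["Agreeableness"])
    return rng.random() < 0.2 * danger
```

Under this scoring, a conscientious and agreeable individual reaches a well-being near the maximum, while a disagreeable, careless one sits near the bottom and also runs a real chance of dying before reproducing.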

I let the simulation run for 50 generations, and the results confirmed what I expected. The first generations were mostly random, with a high standard deviation, a few virtuous individuals, and many deaths. Generation by generation, almost only the good individuals were able to pass on their genes. At the very end, after 50 generations, the final population consisted of 94 individuals with almost zero standard deviation and nearly the best possible well-being evaluation. The final members of this small community all have very high Conscientiousness and Agreeableness, and relatively low Neuroticism.

Fitness average, min and max through generations

Of course, I’m not claiming this is proof that morality in our society developed this way, but it does suggest that a purely consequentialist framework for secular morality, combined with natural selection for empathy and social behaviour, is indeed possible. In this simulated framework, based only on natural rules and a goal, moral individuals emerged spontaneously. It’s as if morality itself were a way to reach the common goal of mankind.

It’s also worth mentioning that the fitness function is based purely on a single individual. In a sense, its measure is completely selfish (which reminds me of the prisoner’s dilemma). This echoes (not that they needed any confirmation from me) the insights of many philosophers and mathematicians, from Adam Smith to John Nash: each individual, in pursuing his own selfish good, is led to promote the good of all. Imagine a society without the innate intuition that killing is wrong: would you want to live there?


Credit: BecomingHuman By: Gaetano Bonofiglio
