How to build ethical AI

January 18, 2020
in Machine Learning

You’ve likely already encountered artificial intelligence several times today. It’s an increasingly common technology, in cars, TVs, and, of course, our phones. But for most people, the term AI still conjures images of The Terminator.

We don’t need to worry about hulking armed robots terrorizing American cities, but there are serious ethical and societal issues we must confront quickly — because the next wave of computing power is coming, with the potential to dramatically alter — and improve — the human experience.

Full disclosure: I am general counsel and chair of the AI Ethics Working Group at a company that is bringing AI to processor technology in trillions of devices to make them smarter and more trustworthy.

Enabled by high-speed wireless capacity and rapid advances in machine learning, new applications for artificial intelligence are created every day. For technologists, it’s an exciting new frontier. But for the rest of us, we’re right to ask a few questions. In order to realize the full benefits of artificial intelligence, people must have trust in it.

Governments across the world have started to explore these questions. The United States recently unveiled a set of regulatory principles for AI at the annual Consumer Electronics Show in Las Vegas. And U.S. Chief Technology Officer Michael Kratsios spoke on the CES stage about the importance of building trust in AI. But what does that really mean?

AI is already here in ways that many don’t even realize, from how we get our news, to how we combat cyberattacks, to the way cell phone cameras sharpen our selfies. Eventually, AI will enable life-saving medical breakthroughs, more sustainable agriculture, and the autonomous movement of people and products. But to get there we must first tackle important societal issues related to bias, transparency, and the massive amounts of data that feed AI. Citizens must be able to trust that AI is being implemented appropriately in all of these areas.

As search and social media companies face a so-called “techlash” around the world, we should learn the lessons of today’s privacy debate and grapple with these issues up front, before AI becomes fully rooted.

We need to adopt high standards of behavior that promote trust and ensure ethics are built into the core of the technology, in the same way security and data privacy drive our engineering today.

Any standard-setting in this field must be rooted in the understanding that data is the lifeblood of AI. The continual input of information is what fuels machine learning, and the most powerful AI tools require massive amounts of it. This of course raises issues of how that data is being collected, how it is being used, and how it is being safeguarded.

One of the most difficult questions we must address is how to overcome bias, particularly the unintentional kind. Let’s consider one potential application for AI: criminal justice. By removing prejudices that contribute to racial and demographic disparities, we can create systems that produce more uniform sentencing standards. Yet programming such a system still requires weighting countless factors to determine appropriate outcomes. It is a human who must program the AI, and a person’s worldview will shape how they program machines to learn. That’s just one reason why enterprises developing AI must consider workforce diversity and put in place best practices and controls for both intentional and inherent bias.
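One way teams put such controls in place is to audit a model’s outcomes across demographic groups before deployment. The sketch below is a minimal, hypothetical illustration of one common check — comparing favorable-outcome rates between groups (sometimes called a disparate-impact ratio); the groups, decisions, and threshold are invented for the example, not drawn from any real system.

```python
# Hypothetical bias audit: compare favorable-outcome rates across groups.
# All data here is invented for illustration.

def selection_rates(outcomes):
    """Favorable-outcome rate (share of 1s) for each group."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group rate to the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Toy decisions from a hypothetical model (1 = favorable outcome)
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 favorable
}

rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print(rates)             # {'group_a': 0.75, 'group_b': 0.375}
print(round(ratio, 2))   # 0.5 -- a large gap a review team would flag
```

A single ratio is far from a complete fairness analysis, but even this simple check makes a disparity visible and reviewable rather than buried in the model.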

This leads back to transparency.

A computer can make a highly complex decision in an instant, but will we have confidence that it’s making a just one?

Whether a machine is determining a jail sentence, or approving a loan, or deciding who is admitted to a college, how do we explain how those choices were made? And how do we make sure the factors that went into that algorithm are understandable for the average person?

If we are going to give machines the ability to make life-changing decisions, we must put in place structures to pull back the curtain and reveal the decision-making behind the outcomes, providing transparency and reassurance.
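For simple scoring models, “pulling back the curtain” can be as direct as reporting each factor’s contribution alongside the decision. The sketch below assumes a hypothetical loan-style linear score; the factor names, weights, and threshold are invented for illustration, and real explainability work on complex models requires far more than this.

```python
# Hypothetical transparent scorer: return the decision together with
# each factor's contribution to it. Weights and features are invented.

WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}
BIAS = 0.1
THRESHOLD = 0.5

def score_with_explanation(applicant):
    """Score an applicant and break the score down by factor."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "denied"
    return decision, score, contributions

decision, score, why = score_with_explanation(
    {"income": 1.2, "debt": 0.5, "years_employed": 0.8}
)
print(decision)  # approved (score 0.52, just over the 0.5 threshold)
for factor, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{factor}: {contrib:+.2f}")  # largest influences first
```

The point is the shape of the output: a person affected by the decision can see which factors pushed it which way, instead of receiving only a yes or no.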

The reality is that if the private sector doesn’t address these issues now, a government eventually will. But with the rapid rate of innovation in machine learning, regulation will always have a hard time keeping pace. That’s why companies and non-profit enterprises must take the lead by setting high standards that promote trust and ensuring that their staff complete mandatory professional training in the field of AI ethics. It is essential that anyone working in this field has a solid foundation on these high-stakes issues.

We are a long way off from machines learning the way a human does, but AI is already contributing to society. And in all its forms, AI has the potential to positively augment the human experience and contribute to an unprecedented level of prosperity and productivity. But the development and adoption of AI must be done right, and it must be built on a foundation of trust.

Carolyn Herzog (@CHerzog0205) is EVP, General Counsel and Chief Compliance Officer at Arm — a tech company that brings AI to processor technology — where she leads the company’s ethical AI initiatives and serves as Chair of the AI Ethics Working Group.


Credit: Google News
