Untangling How Artificial Intelligence Thinks Can Shed Light on Human Notions | UCSB

February 17, 2020
in Machine Learning

By Harrison Tasoff for UCSB | February 16, 2020 | 9:00 a.m.

Image caption: Bunches of yellow curved shapes are immediately recognizable as bananas. Yet the AI also seems to take cues from the presence of people in pictures of bananas when identifying the fruit. Credit: Fabian Offert

Found in everything from self-driving cars to machine translation, artificial neural networks are currently one of the hottest fields in machine learning.

Now there’s a growing interest in unraveling how these brain-like systems think, and it is providing unexpected insights into our own way of understanding the world.

Fabian Offert, a doctoral student in UC Santa Barbara’s Media Arts and Technology graduate program, has brought a scholar’s perspective to this field so often dominated by scientists and engineers.

Before joining UCSB, Offert served as a curator at the ZKM | Center for Art and Media in Karlsruhe, Germany. He realized there that this work on neural networks provided a unique opportunity to explore artistic and philosophical concepts.

“Their perspectives might be fundamentally irreconcilable with our own, but I think that’s exactly why this is so interesting, and why people, specifically in the humanities — who deal with interpretation, perception, abstraction and representation — should look at these things,” said Offert.

Artificial neural networks take inspiration from their biological counterparts. As in a human brain, data streams into the system, where it’s processed by layers of interconnected functions called neurons. Each neuron looks for a particular mixture of features, with those in earlier layers generally picking out lower-level features — like shapes, patterns and colors.

Higher layers respond to combinations and relationships between the more basic elements. A long, vertical object between two circles might trigger a set of neurons that recognize faces, for instance. The last layers provide high-level classifications, whereupon the program spits out a result.
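The layered pipeline described above can be sketched as a toy forward pass. This is a minimal illustration only: the weights below are random stand-ins, not a trained network, and real image classifiers use convolutional layers rather than plain matrix multiplies.

```python
import numpy as np

rng = np.random.default_rng(1)

# A miniature feed-forward "network": each layer is a weight matrix
# followed by a ReLU.  In a trained network, early layers respond to
# shapes, patterns and colors; later layers to combinations of them.
layers = [rng.normal(size=(16, 32)),   # lower-level features
          rng.normal(size=(8, 16)),    # combinations of basic elements
          rng.normal(size=(3, 8))]     # final high-level class scores

def forward(x):
    for i, W in enumerate(layers):
        x = W @ x
        if i < len(layers) - 1:        # no ReLU on the output layer
            x = np.maximum(x, 0.0)
    return x

scores = forward(rng.normal(size=32))  # the "result" the program spits out
print("class scores:", scores)
print("predicted class:", int(np.argmax(scores)))
```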

Scientists and engineers are keen to understand the processes at work in these neural nets because the networks often perceive the world in different ways than we do. And in some cases it's important to know exactly why a network came to a certain conclusion, for instance when it predicts recidivism rates or approves credit.

To peer inside these black boxes, Offert uses an approach called feature visualization. He takes a neural net trained to classify images and feeds it random noise. But instead of running the entire program, he stops on a particular neuron or layer of interest and observes how activated it is. This tells him how far off the noise is from an optimal image for this layer.

He then back-propagates the result, tweaks the input and repeats the process until the activation levels plateau. The results provide a surreal approximation of what that particular neuron or layer is really looking for.
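The loop described above is often called activation maximization: start from noise, measure how strongly the target neuron fires, and nudge the input uphill until the activation plateaus. The sketch below is a deliberately tiny stand-in, assuming a single hand-written neuron computing tanh(w · x) so its gradient can be derived by hand, in place of a real network and an autodiff framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy one-neuron "network": activation = tanh(w . x).  In a real
# experiment w would come from a trained image classifier; here it
# is a random stand-in.
w = rng.normal(size=64)

def activation(x):
    return np.tanh(w @ x)

def grad_activation(x):
    # d/dx tanh(w . x) = (1 - tanh(w . x)^2) * w
    return (1.0 - np.tanh(w @ x) ** 2) * w

# Start from random noise and ascend the activation gradient,
# stopping once the activation level plateaus.
x = rng.normal(scale=0.1, size=64)
prev, step = -np.inf, 0.1
for _ in range(1000):
    x += step * grad_activation(x)
    act = activation(x)
    if act - prev < 1e-6:   # improvement has plateaued
        break
    prev = act

print(f"final activation: {activation(x):.3f}")
```

For this linear-plus-tanh toy, the optimized input simply drifts toward the weight vector itself; with a deep network and autodiff, the same loop produces the surreal feature visualizations the article describes.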

Sometimes the features correspond with our own experiences. For instance, a neuron activated by images of bees appears to zero in on alternating yellow and dark stripes. Sorting out pictures with umbrellas, however, proves more nuanced: the relevant neuron seems to focus on drooping shapes, but is also activated by figures and cool colors.

Some image classes are more difficult to classify, so the ambiguity persists even in the higher levels. “So higher levels won’t just represent one thing, but often a mixture of different things,” Offert said. “And they will serve a mixture of different functions.”

Offert argues that by embracing these ambiguous images, scholars can use them to discover properties of images that may not be as intuitive.

In fact, “a single neuron is maybe not the right metric for human interpretability, which is interesting, because it means that, as an artificial perception, the neural network has a very different perspective on the world than we have,” Offert suggested.

For example, Offert has looked at the work done by UC Davis computer scientist Gabriel Goh. In 2016, Goh decided to use feature visualization on a neural network that classifies images based on whether or not they’re explicit. As Offert notes, the resulting images are definitely not safe for work, but it’s hard to determine exactly why.

However, Offert was far more interested in the minimally activating images, the most safe-for-work images the system could generate. Since explicit imagery doesn't have a perfect counterpart, he expected to see just noise. Yet a clear trend emerged among these SFW images: most of them look like cliffs or dams.
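Finding a minimally activating image is the same optimization with the sign of the update flipped: descend the activation gradient instead of ascending it. As before, the neuron below is a random toy stand-in with a hand-written gradient, not Goh's actual classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same toy neuron: activation = tanh(w . x), with w a random stand-in.
w = rng.normal(size=64)

def activation(x):
    return np.tanh(w @ x)

# Gradient *descent* on the activation drives it toward its minimum
# (-1 for tanh), yielding the least-activating input instead of the
# most-activating one.
x = rng.normal(scale=0.1, size=64)
for _ in range(1000):
    x -= 0.1 * (1.0 - np.tanh(w @ x) ** 2) * w

print(f"minimal activation: {activation(x):.3f}")
```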

Perplexing though this may be, Goh has developed a hypothesis that Offert believes is likely true. Goh thinks that the programmers who trained the network probably used images of cliffs as negative examples to help the system work better.

Cases like this illustrate how fruitful the topic of neural network interpretability can be for scholars in the humanities. “This technique tells you the weird and strange perspective the machine has on the world, but it also tells you the perspective of the people who built the machine, and what they wanted to do with it,” Offert said.

It also forces us to re-examine concepts like representation, abstraction and even the notion of an image itself. But unlike in more traditional musings, in this case these concepts have concrete, technical applications. The bizarre categories that neural networks create actually function and produce a coherent output.

“People in the humanities should be interested in the strange notions of representation that you can extract from these techniques,” Offert said.


