
When AI in healthcare goes wrong, who is responsible? — Quartz

September 20, 2020
in Machine Learning

Artificial intelligence can be used to diagnose cancer, predict suicide, and assist in surgery. In all these cases, studies suggest AI outperforms human doctors in set tasks. But when something does go wrong, who is responsible?

There’s no easy answer, says Patrick Lin, director of Ethics and Emerging Sciences Group at California Polytechnic State University. At any point in the process of implementing AI in healthcare, from design to data and delivery, errors are possible. “This is a big mess,” says Lin. “It’s not clear who would be responsible because the details of why an error or accident happens matters. That event could happen anywhere along the value chain.”


How AI is used in healthcare

Design includes creation of both hardware and software, plus testing the product. Data encompasses the mass of problems that can occur when machine learning is trained on biased data, while deployment involves how the product is used in practice. AI applications in healthcare often involve robots working with humans, which further blurs the line of responsibility.

Responsibility can be divided according to where and how the AI system failed, says Wendell Wallach, lecturer at Yale University’s Interdisciplinary Center for Bioethics and the author of several books on robot ethics. “If the system fails to perform as designed or does something idiosyncratic, that probably goes back to the corporation that marketed the device,” he says. “If it hasn’t failed, if it’s being misused in the hospital context, liability would fall on who authorized that usage.”

Intuitive Surgical, the company behind the da Vinci surgical system, has settled thousands of lawsuits over the past decade. Da Vinci robots always work in conjunction with a human surgeon, but the company has faced allegations of clear error, including machines burning patients and broken parts of machines falling into patients.

Some cases, though, are less clear-cut. If a diagnostic AI trained on data that over-represents white patients misdiagnoses a Black patient, it’s unclear whether the culprit is the machine-learning company, those who collected the biased data, or the doctor who chose to listen to the recommendation. “If an AI program is a black box, it will make predictions and decisions as humans do, but without being able to communicate its reasons for doing so,” writes attorney Yavar Bathaee in a paper outlining why the legal principles that apply to humans don’t necessarily work for AI. “This also means that little can be inferred about the intent or conduct of the humans that created or deployed the AI, since even they may not be able to foresee what solutions the AI will reach or what decisions it will make.”
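The training-data problem described above is easy to reproduce in miniature. The sketch below (not from the article; the data and "biomarker" are invented for illustration) trains a simple classifier on synthetic data dominated by one patient group, then measures accuracy separately per group: the model looks accurate overall while systematically failing the under-represented group.

```python
# Illustrative sketch: a classifier trained on data dominated by one group
# can be accurate on that group while failing the under-represented one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    # One synthetic "biomarker" feature; the label rule is inverted for the
    # minority group to mimic a population shift between patient groups.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y if flip else y)

x_a, y_a = make_group(950, flip=False)  # over-represented group
x_b, y_b = make_group(50, flip=True)    # under-represented group

# Train on the pooled, imbalanced data.
model = LogisticRegression().fit(
    np.vstack([x_a, x_b]), np.concatenate([y_a, y_b])
)

acc_a = model.score(x_a, y_a)
acc_b = model.score(x_b, y_b)
print(f"group A accuracy: {acc_a:.2f}, group B accuracy: {acc_b:.2f}")
```

Aggregate accuracy here is high, so the failure only surfaces when performance is broken out by group, which is why audits of deployed diagnostic models typically report per-subgroup metrics rather than a single accuracy number.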

Inside the AI black box

The difficulty in pinning the blame on machines lies in the impenetrability of the AI decision-making process, according to a paper on tort liability and AI published in the AMA Journal of Ethics last year. “For example, if the designers of AI cannot foresee how it will act after it is released in the world, how can they be held tortiously liable?” write the authors. “And if the legal system absolves designers from liability because AI actions are unforeseeable, then injured patients may be left with fewer opportunities for redress.”

AI, as with all technology, often works very differently in the lab than in a real-world setting. Earlier this year, researchers from Google Health found that a deep-learning system capable of identifying symptoms of diabetic retinopathy with 90% accuracy in the lab caused considerable delays and frustrations when deployed in real life.

Despite the complexities, clear responsibility is essential for artificial intelligence in healthcare, both because individual patients deserve accountability, and because lack of responsibility allows mistakes to flourish. “If it’s unclear who’s responsible, that creates a gap, it could be no one is responsible,” says Lin. “If that’s the case, there’s no incentive to fix the problem.” One potential response, suggested by Georgetown legal scholar David Vladeck, is to hold everyone involved in the use and implementation of the AI system accountable.

AI and healthcare often work well together, with artificial intelligence augmenting the decisions made by human professionals. Even as AI develops, these systems aren’t expected to replace nurses or fully automate doctors’ work. But as AI improves, it gets harder for humans to go against machines’ decisions. If a robot is right 99% of the time, then a doctor could face serious liability for making a different choice. “It’s a lot easier for doctors to go along with what that robot says,” says Lin.

Ultimately, this means humans are ceding some authority to robots. There are many instances where AI outperforms humans, and so doctors should defer to machine learning. But patient wariness of AI in healthcare is still justified when there’s no clear accountability for mistakes. “Medicine is still evolving. It’s part art and part science,” says Lin. “You need both technology and humans to respond effectively.”

Credit: Google News

NikolaNews

NikolaNews.com is an online News Portal which aims to share news about blockchain, AI, Big Data, and Data Privacy and more!

© 2019 NikolaNews.com - Global Tech Updates
