Four Production Pillars for Trustworthy AI

January 30, 2019
in Machine Learning

Credit: Google News

The Issue of AI Trust

My ten-year-old daughter recently told me that she does not expect to get a driving license. When I asked her why, she explained that by the time she is old enough to drive, she expects all cars to be self-driving. She further elaborated that she would need to be even more vigilant in a self-driving car because, in her words, “I always know what I am thinking. But who knows what those cars are thinking!” As AI and ML permeate more businesses and become part of daily life, natural human fears are being expressed in many ways, from individual and corporate concerns to government regulations. News stories appear daily detailing AI mistakes that led to corporate losses or embarrassment [2, 3, 4, 7]. Other examples include harm to human health and life [5] and the appearance of bias and unfair practices [6, 7].

Corporations are taking steps to regulate AI behaviors. A recent example is Amazon’s decision to scrap an AI recruiting tool due to bias against women [3]. New regulations and reviews are also emerging [8, 9, 10]. A good example is New York City, which set a precedent in 2018 by creating a task force to examine and audit algorithmic use [8]. These and similar initiatives demonstrate how individuals, corporations, and governments are struggling to manage risk while encouraging the tremendous potential of AI technologies.

As production AI usage grows, good operational ML (MLOps) practices [1] will be needed to ensure that production ML systems deliver and maintain quality, so that users can combine AI benefits with growing trust in their AI systems. This post describes a key component of such a practice: ML Integrity.

ML Integrity: A Necessary Condition for AI Trust

What these concerns demonstrate is a lack of trust, which leads to the question: how does one grow trust in a new technology? Many have weighed in on this topic [see 11, 12 for examples] and pointed out that trust is a complex concept that intertwines correct operation with social values (such as whether a decision made by an AI is morally good in a human context). While trust has many facets, a core component is Integrity. Integrity may not be a sufficient criterion for trust: a system that demonstrates integrity operates correctly as defined by its designers, but whether that is enough may depend on whether the humans made design decisions that society considers acceptable. Integrity is, however, a necessary criterion for trust: a system that does not demonstrate integrity cannot be trusted, since it is not executing as planned.

Other domains, including other software arenas, have established the concept of integrity as a core element for trust. For example, Wikipedia defines Computer System Integrity [13] as:

  • That condition of a system wherein its mandated operational and technical parameters are within the prescribed limits.
  • The quality of an Automated Information System when it performs its intended function in an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation of the system.
  • The state that exists when there is complete assurance that under all conditions an IT system is based on the logical correctness and reliability of the operating system, the logical completeness of the hardware and software that implement the protection mechanisms, and data integrity.

For humans to trust an AI, the algorithms, software, and production deployment systems that train AI models and deliver AI predictions must behave within specified norms and in a predictable and understandable way, i.e., with ML Integrity. ML Integrity is the core criterion that a machine learning (or deep learning, reinforcement learning, etc.) algorithm must demonstrate in practice and in production to drive trust.

Four Pillars of ML Integrity in Production Systems

Production ML systems have many moving parts. At the center is the Model, the trained AI algorithm that is used to generate predictions. However, models must be trained and then deployed in production, and many factors (for example, the training dataset, training hyperparameters, the production code implementation, and incoming live production data) combine to generate a prediction. For this entire system and flow to behave with integrity, four pillars must be established. These are shown in the figure below:

[Figure: Four pillars of ML Integrity. Credit: Nisha Talagala]

  • ML Health: the ML model and production deployment system must be healthy, i.e., behaving in production as expected and within norms specified by the data scientist.
  • ML Explainability: it must be possible to determine why the ML algorithm behaved the way it did for any particular prediction and what factors led to that prediction.
  • ML Security: the ML algorithm must remain healthy and explainable in the face of malicious or non-malicious attacks, i.e., efforts to change or manipulate its behavior.
  • ML Reproducibility: all predictions must be reproducible. If an outcome cannot be faithfully reproduced, there is no way to know for sure what led to the outcome or to debug issues.

Each pillar is challenging in its own right and requires specialized practices:

  • ML Health: Standard failure detection techniques cannot detect suboptimal prediction patterns, so production systems need specialized techniques to assess whether the models used for prediction are behaving optimally. For example, Data Deviation detectors can show when production data differs from the patterns seen during training, indicating that models may not have adequate information to make predictions [14, 15]. (A minimal drift-detection sketch follows this list.)
  • ML Explainability: While some algorithms (like Decision Trees) are explainable, i.e., one can provide a human-interpretable explanation for how a prediction was generated, others, like Neural Networks, are not (yet) explainable. (The second sketch below prints a tree’s logic as rules.)
  • ML Security: ML generates new security challenges at both the algorithmic and data management levels [16, 17, 18, 19]. Corruption of datasets can cause a model to be mistrained and then generate damaging predictions during use. Similarly, studies have shown that ML models can be fooled into making incorrect predictions by distorting incoming data. (The third sketch below demonstrates this on a toy model.)
  • ML Reproducibility: The large number of artifacts, algorithm settings, code versions, system parameters, and datasets that contribute to generating a single prediction can make reproducibility challenging. To make AI truly reproducible, a precise lineage and provenance must be maintained for every prediction [20]. (The final sketch below builds such a lineage record.)

Furthermore, these pillars are not independent. For example, research presented at IEEE ICMLA last year showed that deviations in production data can even cause explainability techniques to lose validity [21].

What about Performance?

All of the above areas affect the quality of an ML algorithm’s output, i.e., whether the ML algorithm generates “acceptably good” predictions. An orthogonal but equally important area is performance: was the desired prediction generated as quickly as needed? Even a high-integrity ML application will not be usable if it cannot generate predictions fast enough. However, the reverse is also true: predictions that cannot be trusted are not usable (and are dangerous) even if they arrive at perfect speed.

Driving ML Integrity in 2019 and Beyond

Each of these areas is a focus of research and practice optimization across the industry. Forums like the new USENIX Conference on Operational ML [22] provide venues where these topics can be debated and best practices shared and defined. Industry vendors and institutions are also defining MLOps: best practices for ML algorithm deployment, testing, monitoring, and lifecycle management that can help organizations scale their ML production initiatives while maintaining ML Integrity [1, 23, 24].

I believe that 2019 is the breakout year for ML in production. As model development tools mature and more data scientists get trained, the number of models straining to get into production will only grow. As industries attempt to balance their desire for innovation with their need for risk management, MLOps practices will be needed to ensure that production ML systems deliver and maintain ML Integrity, so that users can combine AI benefits with growing trust in their AI systems.

References

[1] https://en.wikipedia.org/wiki/MLOps

[2] https://www.theregister.co.uk/2012/08/03/bad_algorithm_lost_440_million_dollars/

[3] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[4] https://www.bbc.com/news/technology-44561838

[5] https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7

[6] https://psmag.com/social-justice/removing-racial-bias-from-the-algorithm

[7] https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

[8] https://www1.nyc.gov/office-of-the-mayor/news/251-18/mayor-de-blasio-first-in-nation-task-force-examine-automated-decision-systems-used-by

[9] https://www.acm.org/articles/bulletins/2017/january/usacm-statement-algorithmic-accountability

[10] https://cyber.jotwell.com/the-gdprs-version-of-algorithmic-accountability/

[11] https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html

[12] https://www.zdnet.com/article/in-an-ai-powered-economy-trust-must-be-your-companys-highest-core-value/

[13] https://en.wikipedia.org/wiki/System_integrity

[14] Ghanta, S. ML Health: Taking the Pulse of ML Pipelines in Production. Grace Hopper Conference 2018. https://s3.amazonaws.com/prg-s3-production/app/uploads/42182/20180926090023-GHC_presentation.pdf

[15] https://www.prweb.com/releases/parallelm_enables_the_next_generation_of_ai_deployments_with_new_capabilities/prweb15911973.htm

[16] https://elie.net/blog/ai/attacks-against-machine-learning-an-overview/

[17] https://dzone.com/articles/security-attacks-analysis-of-machine-learning-mode

[18] https://en.wikipedia.org/wiki/Adversarial_machine_learning

[19] https://www.bbc.com/news/technology-41845878

[20] Model Governance: Reducing the Anarchy of Production ML. USENIX ATC 2018. https://www.usenix.org/conference/atc18/presentation/sridhar

[21] https://openreview.net/references/pdf?id=SkhyzkuzQ

[22] https://www.usenix.org/conference/opml19

[23] Why MLOps (and not just ML) is your Business’ new Competitive Frontier. https://aibusiness.com/mlops-parallelm-competitive-edge/

[24] Operational Machine Learning: Seven Considerations for Successful MLOps. https://www.kdnuggets.com/2018/04/operational-machine-learning-successful-mlops.html
