A brief look at a report by KPMG International Data & Analytics
Since starting my internship with KPMG, I decided to spend a few days looking at some of the reports they have created on topics related to AI safety over the last few years. One of the closest matches I could find was a report on trust in analytics: part of its focus is on artificial intelligence (AI), and the question of trust seemed relevant. So how can this trust issue in analytics be resolved, according to KPMG? I will briefly touch upon the foundations of trust and how AI is discussed in the report.
The report, Guardians of Trust, was published in 2018 and is based on a study KPMG International commissioned from Forrester Consulting, which surveyed almost 2,200 global information technology (IT) and business decision makers involved in strategy for data initiatives. The survey found that just 35 percent of them have a high level of trust in their own organisation’s analytics; in other words, trust in analytics is low. According to KPMG International, trust underpins reputation, customer satisfaction, loyalty and other intangible assets, which now represent nearly 85 percent of the total value of companies in the S&P 500.
What is the foundation of trust in this report?
It is argued that the governance of machines should not be fundamentally different from the governance of humans and it should be integrated into the structure of the entire enterprise. They argue that trust in an age of digital transformation:
- Influences reputation
- Drives customer satisfaction and loyalty
- Inspires employees
- Enables global markets to function
To address this lack of trust, the report argues that a foundation is needed.
In this regard they have a list of heuristic rules (‘key takeaways’) to abide by:
- If you can’t measure it, you can’t manage it
- Prioritize risks
- Create trust-impact personas
- Create a buddy system
- Checklist manifesto for data and analytics
- Don’t let the board off the hook
- Be flexible with horses for courses
- Create a mesh governance framework
At a time when machines are working in parallel with people, this study points to a clear need for proactive governance of analytics in order to build trust.
There is a clear focus on AI as part of the trust issue and as a potential risk going forward.
“The widespread use of AI will make it imperative — and more difficult — to ensure trusted analytics.”
According to the report AI can both disrupt and create trust depending on how it is used: “The age of AI also offers new ways of protecting public trust as we shift from humans towards machines. In audit, for example, cognitive systems can analyze millions of records and identify patterns to create more insights on a company’s processes, controls and reporting. Algorithms, meanwhile, can be designed to reduce human biases in decision making, and blockchain can offer greater data security and new distributed trust models.” They describe this digital shift as a double-edged sword. There are several issues in this regard:
- AI systems may be seen as a ‘black box,’ making important decisions when few people can fully understand how.
- The ‘superhuman’ behavior problem. Sometimes their performance is almost ‘too good’ and we find ourselves unable to predict the consequences.
- The ‘subhuman’ behavior problem. For example, people have been ‘injured by GPS’ when following directions that are outdated and wrong. Visual recognition is great in some areas, but less so in others.
- The ‘bad-human’ behavior problem. Algorithms that use machine learning can also pick up bad habits or biases from the human behavior they seek to emulate.
Who is responsible?
There is a looming question of accountability, and the report argues: “While we may like to blame our machines, they are simply machines and, as such, cannot be held accountable for the decisions or insights they produce.” Most respondents (technical decision makers) placed the responsibility with technology functions and service providers: the “…organization that developed the software, ahead of the manufacturer, the passenger and regulators.”
Therefore it is said to be important to proactively govern analytics in ways that build trust, resilience, integrity, quality and effectiveness. The person regarded as having primary responsibility is the Chief Information Officer (CIO). However, the CIO does not always seem to have the resources, or the desire, to take on greater responsibility for governance of AI and analytics across the core business.
The report includes an interview with Emma Williams, a General Manager at Microsoft. She mentions that there is currently a focus on blending EQ, or ‘emotional intelligence’, with traditional IQ. Her team uses an approach called FATE, a broader context that includes fairness, accountability, transparency and ethics. She mentions that her team includes anthropologists, specialists in cognitive behavior, ethicists, PhDs in human psychology, UX designers and psychologists. A wide range of skills is thus considered necessary to ensure responsible and trustworthy AI.
Governance of AI
Five top steps are outlined in the report, and they may be a good place to start if you are unsure where to begin with AI governance:
- Develop standards to provide guardrails for all organizations
- Modernize regulations to build confidence in D&A
- Increase transparency of algorithms and methodologies
- Create professional codes for data scientists
- Strengthen assurance mechanisms, both internal and external
“AI can increasingly allow auditors to obtain and analyse information from non-traditional sources, such as all forms of media — print, digital and social — and, combined with other information, draw a deeper, more robust understanding of potential business risks”
KPMG International describes a few examples of essential controls to inspire the management of AI in an analytical enterprise. I have selected a few of their suggestions: (1) partnering and ‘parenting’ algorithms with a nominated human partner; (2) explainable AI: even when a technical explanation can be produced, it needs to be understood by teams or even the organisation as a whole; (3) ethics boards to develop standards; and (4) human-centered machine learning.