
As the digital trust gap continues, who’s responsible when analytics go wrong?


As the trust gap around data and analytics (D&A) and artificial intelligence (AI) continues, just 35 percent of executives say they have a high level of trust in the way their organization uses D&A. 

Moreover, concerns over the risks of D&A and AI are high: over 65 percent of executives have some reservations about, or active mistrust of, their data and analytics, and 92 percent are concerned about the negative impact of D&A on corporate reputation. Yet a majority of senior executives (62 percent) say that technology functions, not the C-level and functional areas, bear responsibility when a machine or an algorithm goes wrong, according to a new global survey from KPMG International.

KPMG International’s Guardians of trust report suggests that the growing interrelationship between human and machine calls for stronger accountability at the C-level rather than with the tech functions, and for proactive governance with strategic and operational controls that ensure and maintain trust. As companies make the shift to fully digital, analytically driven enterprises, the management of machines is becoming as important as the management of people, the report also asserts.

“Once analytics and AI become ubiquitous, it will be imperative and more difficult to manage trust,” said Thomas Erwin, Global Head of KPMG Lighthouse – Center of Excellence for D&A and Intelligent Automation. “With the rapid take-up of predictive analytics, we should prepare now to bring appropriate governance to this Wild West of algorithms. The governance of machines must become a core part of the governance of the whole organization, with the goal being to match the power and risk of D&A with the wisdom to use it well.”

The research, which surveyed 2,190 senior executives across nine countries, also found that executives in the US and the UK are the least likely to trust their D&A, with distrust at 42 percent in the US and 43 percent in the UK. Only small percentages of respondents in Brazil and India mistrust their D&A (15 and 8 percent, respectively).


Who’s responsible when things go wrong?

Despite acknowledging the reputational and financial risks of analytics errors or misuse, respondents were not clear about who should be accountable if a poor business decision results in financial loss or the loss of customers.

Beyond the 62 percent who said primary responsibility should lie with technology functions within their organizations, 25 percent placed it with the core business, and 13 percent felt it should rest with regulatory and control functions.

Taking a closer look at which roles within the C-suite should hold the blame when analytics go wrong, the broad distribution of responses suggests a lack of clarity: only 19 percent said the CIO, 13 percent said the Chief Data Officer, and only 7 percent said C-level executive decision makers such as the CEO.

“Our survey of senior executives is telling us that there is a tendency to absolve the core business for decisions made with machines,” said Brad Fisher, US Data & Analytics Leader and a partner with KPMG in the US. “This is understandable given technology’s legacy as a support service and the so-called ‘experts’ in all technical matters. However, it’s our view that many IT professionals do not have the domain knowledge or the overall capacity required to ensure trust in D&A. We believe the responsibility lies with the C-suite.”


What should good governance look like?

Respondents’ uncertainty about who is accountable raises the question of what proactive governance should be in place to ensure and protect the use of analytics.

“As organizations begin to think about behavior of machines as parallel to the behavior of people, they should also consider new models of governance to support the leaps in trust that the human-machine workforce requires,” Erwin said. “At a fundamental level, accountability of machines must be held firmly by the CEO and functional leaders.”

Respondents’ recommendations strongly indicate that any governance framework should include standards and controls beyond the technical, covering the strategic, cultural and ethical areas that are the domain of the C-suite.

According to survey respondents, the five recommendations for building trust within an organization are:


1) Develop standards to create effective policies and procedures for all organizations

2) Improve and adapt regulations to build confidence in D&A

3) Increase transparency of algorithms and methodologies

4) Create professional codes for data scientists

5) Strengthen internal and external assurance mechanisms that validate and identify areas of weakness


“Building and ensuring trust across the analytics/AI lifecycle requires organized, scalable and distributed approaches,” Erwin said. “We are seeing a lot of businesses experimenting in this area which will likely drive future standards and new governance frameworks.”
