Does your company need an AI ethics committee?

Oversight is not optional for artificial intelligence, says Accenture expert

Consumer faith in businesses in most markets worldwide is wallowing in “stagnant distrust” – according to the Edelman Trust Barometer – and Australia is no exception.

It’s little wonder. The recent Banking Royal Commission exposed shocking misconduct across the financial services sector, Facebook appears trapped in an endless cycle of data-misuse scandals, and breaches of customer data held by Australia’s biggest businesses are now depressingly frequent.

According to a recent report by analyst firm Frost & Sullivan, commissioned by CA Technologies, Australian consumers had the lowest overall level of ‘digital trust’ – their confidence in brands to appropriately collect, store and use their digital information – in the world.

Across nearly all sectors consumer trust is low and in decline, ranking only slightly higher than citizens’ trust in politicians and the media.

At first appraisal, an organisation’s use of artificial intelligence-related technologies would appear to put those pitifully low trust stocks at further risk. There is no shortage of AI ‘gone wrong’ examples, from racist chatbots to sexist recruitment tools.

But the technologies also present an opportunity to raise consumer faith in your organisation, says Accenture APAC’s AI delivery leader Amit Bansal. “You can use AI to gain that trust,” he says.

One of the most effective ways to do so, Bansal says, is by establishing an active, transparent and vocal AI ethics committee.

Great power, great responsibility

Adoption of AI technologies by the enterprise is relatively high in Australia. Of the ten countries surveyed in a recent study commissioned by SAS, Accenture Applied Intelligence and Intel, adoption rates in Australia were the highest.

Some 78 per cent of Australian businesses in the survey said they were using the technologies, chasing rewards including more reliable systems, a better security posture, higher productivity through the elimination of manual tasks and improved customer service.

Data61 reports that growth in AI and related technologies could be worth $315 billion to the Australian economy over the next decade.

These businesses need to be wary of the ethical considerations when rolling out AI-powered systems, Bansal says.

“For organisations to start adopting AI, they need to be sure that if they’re building new AI engines they are thinking responsibly about how they are actually building them – are they building bias into them, and so on,” Bansal says, adding that they also need to be able to test for bias in third-party systems.
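Bansal doesn’t detail what such a bias test looks like. One minimal sketch – assuming a ‘four-fifths rule’ disparate-impact check, a common fairness heuristic that the article itself does not prescribe – could compare a model’s approval rates across groups:

```python
# Hypothetical sketch of a disparate-impact check on model decisions.
# Names, data and the 0.8 threshold are illustrative assumptions,
# not Accenture's method.

def disparate_impact_ratio(decisions, groups, positive=1):
    """Ratio of the lowest group approval rate to the highest.

    decisions: model outcomes (e.g. 1 = approved, 0 = declined)
    groups:    protected-attribute value for each decision
    A ratio below ~0.8 (the 'four-fifths rule') is a common red flag.
    """
    counts = {}
    for d, g in zip(decisions, groups):
        hits, total = counts.get(g, (0, 0))
        counts[g] = (hits + (d == positive), total + 1)
    rates = {g: hits / total for g, (hits, total) in counts.items()}
    return min(rates.values()) / max(rates.values())

# Example: approvals skewed against group "B"
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(decisions, groups)  # 0.2 / 0.8 = 0.25
```

Here the ratio of 0.25 falls well below the 0.8 rule of thumb, the sort of result an ethics committee would want surfaced before a system ships.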

“They need to make sure they’re building ethically,” he adds.

Many companies – particularly those considered early adopters of AI – are doing so with the help of internal AI ethics committees. The SAS/Accenture/Intel survey found that out of the Australian companies that have already adopted AI in some form, 72 per cent have established such a committee (slightly higher than the global average of 70 per cent).

“It’s a group of people who are in charge of making sure that, if you’ve set some guide rails, that people are following those guide rails, and the principles and culture are being adhered to. It’s not about building a large amount of bureaucratic processes – that kind of defeats the purpose of applying AI and automation,” Bansal explains.

“It’s really a checkpoint to make sure that, if you’ve agreed this is how we’re going to operate as an organisation – for example, we’re not going to use postcodes because they discriminate against certain races or demographics – then the ethics committee needs to make sure that ethos is being held up and followed, and nobody’s going ‘well, we can make a quick buck, let’s just do it’,” he adds.
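A guardrail like the postcode rule Bansal describes can also be enforced mechanically rather than by goodwill alone. As an illustrative sketch – the prohibited list and function names are hypothetical, not anything Accenture prescribes – a build pipeline could reject a model whose inputs include banned attributes:

```python
# Hypothetical guardrail check: fail a model build if its feature list
# includes attributes the ethics committee has prohibited (e.g. postcode,
# which can proxy for race or demographics). List is illustrative only.

PROHIBITED_FEATURES = {"postcode", "race", "religion", "gender"}

def check_features(features):
    """Raise ValueError if any prohibited feature is present."""
    violations = sorted({f.lower() for f in features} & PROHIBITED_FEATURES)
    if violations:
        raise ValueError(f"Prohibited features in model: {violations}")
    return True

check_features(["income", "age", "loan_amount"])  # passes
# check_features(["postcode", "income"])          # would raise ValueError
```

Wiring a check like this into the build process turns the committee’s principles into something a developer chasing “a quick buck” cannot quietly bypass.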

Typically, an AI ethics committee would be made up of the company’s analytics or AI leadership, business leaders from the functions using the tools, the chief data officer and chief information officer, as well as the individuals “in charge of building the solutions,” Bansal suggests.

In time, they could be joined by a ‘chief artificial intelligence officer’, an emerging role that has begun appearing at a handful of companies in North America.

Transparency key

While some companies – such as SAP and Axon – have been open about who sits on their AI ethics boards, others – like Google’s DeepMind – have not.

Once established, the committee should be open with investors and customers about what it does, Bansal suggests. Doing so will raise trust levels both internally and externally, he says.

“[The committee’s work] should be part of the annual report: this is what we’ve done and achieved. Just as you have what’s been done for the community or the environment. It should just become part of what they do and be published,” he says.

“I’d be giving an update quarterly. That’s how you start building that confidence and trust with your customers and employees.”
