Earlier this year Google CEO Sundar Pichai took to the stage at the company’s annual developer conference to demonstrate Google Duplex. The virtual assistant, to be released later this year, can carry out “real world” tasks over the phone, fooling the person on the other end of the line into believing they are talking to another human.
The computer-generated voice – complete with ‘mm-hmm’s – makes a booking at a salon, with no indication the reservation-taker is on to the trick.
There have since been suggestions the impressive demo was not the “assistant actually calling a real salon” as Pichai had claimed. But whether staged or not, the demo made it clear: Google has AI that can make a phone call and pass as human.
The company has been vague on whether the AI would be made to reveal itself when Duplex is widely released.
“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context,” Google engineering leads Yaniv Leviathan and Yossi Matias said later in a blog post.
The demo raised huge concerns in the wider community. “Assistants should not pretend to be humans!” tweeted UNSW AI Professor Toby Walsh. Walsh has since called for a “Turing’s Red Flag law”, under which bots would have to identify themselves as such up front in any interaction with a human.
Others called it “morally wrong” that the bot did not.
Even Siri is evasive about what it is. The developers have clearly gone for humour, but the lack of honesty has been dubbed “problematic in a context where it's not obvious you're talking to a bot”.
“The paper is by @TobyWalsh and is worth a read. It specifically discusses personal assistants, and includes this example. Siri's programmers seem to be going for humor, but these responses can be problematic in a context where it's not obvious you're talking to a bot.” — Arvind Narayanan (@random_walker) on Twitter, May 9, 2018
Now new consumer research indicates it is not just technologists and ethicists who have concerns about deceptive bots. The general public, and Australians in particular, want it to be clear when they are interacting with AI.
A survey of 10,000 consumers globally by the Capgemini Digital Transformation Institute found that two-thirds of consumers “want to be made aware when they interact with an AI system”.
Some 17 per cent said they had no interest in being made aware, and an equal proportion said it didn’t matter.
Australian consumers were the fourth most likely of the ten countries surveyed to want to be made aware (after India, Spain and Italy), with 70 per cent saying they’d want to be told.
On the flip side, however, 45 per cent of the same consumers said they would be comfortable delegating tasks – such as calling a hair salon to make a booking – to an AI assistant.
“I think I would be good with it making appointments for me such as spa appointments or car servicing appointments,” a US focus group respondent said, “I am good with anything that is going to make my life a little bit easier.”
Nearly half (48 per cent) of respondents globally said they were excited by the thought of a digital alter ego or assistant, and a similar number (46 per cent) said it would enhance their quality of life.
Google has more recently added that the voice call bot would have “disclosure built-in”, perhaps in response to a proposed California law requiring bots to be easily identified and linked to a human user.
Although the law is targeted more at automated accounts on Facebook and Twitter, it could be expanded to Duplex-like products. Its latest amendment covers ‘digital applications’, which could encompass the technology.
Walsh suggests call recipients should be informed if a phone call is from a bot, much as, in Australia and elsewhere, they must be notified when a call is being recorded.
“Perhaps in the future it will be routine to hear, ‘You are about to interact with an AI bot. If you do not wish to do so, please press 1 and a real person will come on the line shortly’,” Walsh writes in his paper proposing Turing’s Red Flag.
“One of the most dangerous times for any new technology is when the technology is first being adopted, and society has not yet adjusted to it. It may well be, as with motor cars today, society decides to repeal any Turing Red Flag laws once AI systems become the norm. But whilst they are rare, we might well choose to act a little more cautiously,” he adds.