Half a trillion dollars.
That’s roughly how much money counterfeiters displaced last year by selling phony products. Some 2.5% of all world trade is in fake goods.
The United States is hit hardest by the scourge of counterfeit products: in 2013, U.S. brands accounted for 20% of the world’s infringed intellectual property.
When most people think about counterfeiting, they think of knock-off Louis Vuitton handbags sold on the sidewalk. But fake products also include business and enterprise products, as well as everyday consumer goods.
And counterfeiting can kill. Fake pharmaceutical drugs and counterfeit booze kill thousands of people every year. The World Health Organization estimates that 10% of all pharmaceutical drugs sold in low- and middle-income countries are either substandard or fake.
The problem is poised to grow: The International Trademark Association (INTA) and the International Chamber of Commerce say the value displaced by counterfeits could reach $2.3 trillion by 2022.
And the fakes are fabulous
The biggest and most dangerous trend in counterfeiting, according to Entrupy CEO Vidyuth Srinivasan, is the application of artificial intelligence (AI) to create better fakes — better fake products, counterfeit cash, fake content, fake advertising, fake websites and fake news. (Entrupy is a leader in AI-based counterfeit detection; its technology is being used to give a thumbs up to a Louis Vuitton bag in the photo at the top of this column.)
The best-understood use for AI fakery is in videos. Deepfakes (videos that convincingly add the face of one person to the body of another) use deep learning, simulated neural networks and big datasets to create convincing fake videos. In January, an app called FakeApp was launched that enabled any user to easily create face-swap videos.
Similar technology will be increasingly deployed to create better luxury goods. To oversimplify, counterfeiters will feed attributes of legitimate products into an AI system that will choose materials and guide the manufacturing process for a more convincing result.
AI to the rescue
As fakes get more convincing, they’ll eventually reach a kind of singularity, where human experts will struggle to tell the difference between the real and the fake.
Already, companies are offering sophisticated solutions to enterprises and brands.
Entrupy, as well as Red Points, Cypheme and other companies, specializes in the low-cost, high-volume identification of counterfeit products. These companies offer technologies that analyze materials, colors, packaging and other attributes to spot fakes.
IBM Research has developed something called Crypto Anchor Verifier, an AI counterfeit detector that uses blockchain and runs on a smartphone. To use it, you take a picture of a product, and the app compares that image against a database in a blockchain ledger to determine authenticity.
That database would be stocked with images of authentic items provided by the companies that make them.
It’s based on the idea that every object, including manufactured products, has a distinctive optical fingerprint that AI can identify: subtle variations in the texture, color and grain of the materials it is made from.
IBM says the technology could be used for anything from diamonds to cash to wine to pharmaceutical drugs.
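To make the verification idea concrete, here is a toy sketch of optical-fingerprint matching in Python. It is an illustration only, not IBM’s actual method: real systems apply AI to microscopic surface detail, while this sketch reduces an “image” to a small grid of brightness values and compares a simple average hash against fingerprints a brand has registered for its genuine products.

```python
# Toy optical-fingerprint matching (illustrative only, not IBM's method).
# An "image" is a 2D list of 0-255 grayscale brightness values.

def average_hash(pixels):
    """Return a bit list: 1 wherever a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count the bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def is_authentic(candidate, registry, max_distance=2):
    """A candidate is authentic if its hash is close to any registered one."""
    h = average_hash(candidate)
    return any(hamming(h, ref) <= max_distance for ref in registry)

# A brand registers the fingerprint of a genuine product...
genuine = [[200,  60, 210,  55],
           [ 58, 205,  62, 198],
           [201,  59, 207,  61],
           [ 60, 199,  57, 202]]
registry = [average_hash(genuine)]

# ...a slightly different photo of the same item still matches,
scuffed = [[195,  65, 205,  50],
           [ 60, 200,  58, 195],
           [198,  62, 210,  64],
           [ 55, 196,  60, 205]]
print(is_authentic(scuffed, registry))   # True

# ...while a fake with a different optical pattern does not.
fake = [[120, 125, 130, 128],
        [ 60, 200,  58, 195],
        [131, 129, 122, 126],
        [ 55, 196,  60, 205]]
print(is_authentic(fake, registry))      # False
```

The design point is tolerance: two photos of the same genuine item are never pixel-identical, so matching is done by distance to a registered fingerprint rather than by exact equality.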
Fakes are everywhere. Even trusted sources such as Amazon sell huge numbers of counterfeit goods, including items fulfilled by Amazon itself. The entire model of online commerce invites counterfeiting: product pages and details are public and easy to replicate, and the sheer scale makes it almost impossible to check every product.
The Counterfeit Report (TCR), which works with brands to expose fake products, claims to have found some 58,000 counterfeit products on Amazon since May 2016. Some 35,000 products were removed based on its findings. Yet that organization was looking only for fake versions of the brands it represents. It’s reasonable to assume there are vastly higher numbers of fake products on Amazon.
Amazon uses machine learning, along with software engineers, research scientists, program managers and investigators, for its Brand Registry program, which the company says cut infringements by 99%. However, many brands aren’t part of the registry.
The Chinese e-commerce giant Alibaba created something called the Big Data Anti-Counterfeiting Alliance with 20 international brands. The initiative uses AI to detect tell-tale flaws in product listings and customer reviews.
The Chinese government even backed an appraisal center to address the problem of counterfeit products in that country. And a Beijing-based company built an app called Smart Detective that uses AI to appraise luxury goods. The app currently focuses on luggage and purses, with jewelry appraisal in the works.
A key element for making these systems work is secrecy. The specific criteria used to spot phony products are tightly held secrets, and they change constantly to keep the counterfeiters off balance.
Why AI can’t help with fake videos and photos
There’s all kinds of fakery out there: products, packaging, content, signatures, cash — you name it. At its recent developers conference, Google demonstrated an initiative called Duplex, showing that AI can produce a convincing telephone interaction.
AI-generated fake videos are especially disturbing. They’ll bring the world of fake news into video, making false stories, propaganda and disinformation even more powerful and believable.
Most disinformation has political goals. But increasingly, disinformation will be used in business.
The same AI technologies that can create these fake videos can also be used to spot them.
The U.S. military has been funding a program to figure out how to spot deepfake videos and other fake content created using AI. The Defense Advanced Research Projects Agency (DARPA) is gathering the world’s experts for a contest to see who can create the most convincing fakes, and who can build the best AI tool for spotting them.
The Pentagon is not alone. Researchers at the Technical University of Munich in Germany built a deep-learning system that can automatically spot deepfake-type videos. Other organizations are doing similar research.
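As a rough illustration of how such detectors learn, the sketch below trains a tiny logistic-regression classifier in plain Python. It is hypothetical: real deepfake detectors are deep neural networks operating on video pixels, whereas this toy uses two made-up artifact scores (“boundary noise” and “temporal flicker”) as stand-in features for each clip.

```python
# Toy deepfake detector sketch: logistic regression over two hypothetical
# artifact features. Real systems use deep CNNs on pixels; the learning
# principle (fit a classifier to labeled real/fake examples) is the same.

import math
import random

def predict(w, b, x):
    """Probability that a clip is fake, via the logistic function."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit weights by stochastic gradient descent on log loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = predict(w, b, x) - y          # gradient of log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

random.seed(0)
# Synthetic training data: fake clips (label 1) show stronger artifacts.
real_clips = [[random.uniform(0.0, 0.3), random.uniform(0.0, 0.3)]
              for _ in range(20)]
fake_clips = [[random.uniform(0.6, 1.0), random.uniform(0.6, 1.0)]
              for _ in range(20)]
w, b = train(real_clips + fake_clips, [0] * 20 + [1] * 20)

# Low artifact scores classify as real, high ones as fake.
print(predict(w, b, [0.1, 0.2]) < 0.5)   # True
print(predict(w, b, [0.9, 0.8]) > 0.5)   # True
```

The catch, as the next sections argue, is an arms race: the same training loop improves the forgers as well as the detectors, so the features that betray today’s fakes may be absent from tomorrow’s.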
Unfortunately, spotting fake videos won’t matter much in the court of public opinion.
Technology is not the problem. People are.
Let’s say the Russian government creates a fake video and claims it’s real. The U.S. government says it applied AI and determined the video to be fake. As with all fake news and propaganda, educated viewers will still have to decide whom they trust. Uneducated viewers may simply be swayed by fake videos without any knowledge of, or concern about, their origins.
Worse, real videos will be called fake by the guilty parties and propagandists.
Ultimately, the public will conclude that nothing is credible, and then the goal of the dezinformatsiya will have been achieved, despite the technology.
So while the Pentagon’s technology will help American spies, diplomats and government officials tell the real from the fake, the technology will not be able to counter its effective use for fake news or propaganda.
Why it’s time to get on the AI anti-counterfeit bus
While the war over counterfeits will never be won, it’s possible for enterprises and brands to gain the upper hand.
Democratizing and automating anti-counterfeit systems will make it possible to stop fakes at scale. But it won’t work without the cooperation and participation of the companies that make legitimate products.
And that’s why it’s vital for any company with products, content, brands or other assets that can be counterfeited to explore the options for how to best use AI to counter the counterfeiters.
Because the criminals will be using AI to steal your intellectual property. You’re going to need better AI to stop them.