Mimics in the Digital World
How do we identify digital mimics that might be slithering through our networks?
Long ago, while working in a lab on the second floor of Lefevre Hall, I was talking with my lab mates when a snake slithered under the door into our workspace. It was an escapee from the herpetology lab at the end of the hall, small and covered with brightly colored stripes of yellow, black, and red. It was either a harmless scarlet king snake or a deadly coral snake. We started trying to remember the rhyme to tell the difference.
Is it "Red next to Black is a friend of Jack?"
Or "Yellow next to Black?"
Confidence in our collective memory quickly disappeared, and we jumped up onto the lab benches for reassurance while our new colleague explored the lab floor. I crawled over to the phone and calmly called the herpetology tech, asking him to come get his %@&ing snake.
Of course, it was a king snake and posed no threat to human life. King snakes mimic the coloring of coral snakes so that predators avoid them, even though they carry no venom of their own. Mimics of this type are common in nature for good reason. The coral snake had to do all the hard evolutionary work of concocting a suitable venom to ward off predators and settling on a warning coloration visible to the predators in its habitat. All the king snake had to do was figure out how to look like a coral snake.
Biologists call this Batesian mimicry: one species (the mimic) takes on the appearance of another species that possesses a positive attribute (the model). The mimicry of the king snake doesn't fool the coral snake, however. It knows the difference, and so does the king snake. Snakes use multiple factors besides visuals to identify each other, mainly scent or pheromones and behaviors. As such, coral and king snakes don't accidentally mate.
Mimics also exist in our digital world. In the digital ecosystem, the model possesses the positive attributes of power or money, and the mimic wants access to those attributes for themselves. Some digital mimics are effective but low-fidelity (identity thieves), and others are high-fidelity (deepfakes), capable of fooling even those closest to the model.
Sometimes, digital mimics do very well for themselves.
A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.
The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.
“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK.
Believing everyone else on the call was real, the worker agreed to remit a total of $200 million Hong Kong dollars – about $25.6 million, the police officer added.
—From CNN
However, when multiple factors are used for identification, even good mimics can be detected.
Earlier this month, a Ferrari NV executive received a series of unexpected messages, seemingly from the CEO. The messages, appearing to be from Chief Executive Officer Benedetto Vigna, hinted at a significant acquisition and requested the executive’s assistance. Despite appearing legitimate, these messages raised suspicion as they didn’t come from Vigna’s usual business number, and the profile picture, though it depicted Vigna, was slightly different.
According to sources familiar with the incident, what followed was an attempt to use deepfakes to carry out a live phone conversation and infiltrate Ferrari. The executive, who received the call, quickly sensed something was amiss, preventing any potential damage. The voice impersonating Vigna was convincing, mimicking his southern Italian accent perfectly.
To verify the caller’s identity, the executive asked a specific question: What was the title of the book Vigna had recently recommended? The call ended abruptly when the impersonator couldn’t answer. This incident prompted Ferrari to launch an internal investigation, though company representatives declined to comment on the matter.
—From FirstPost
In the security world, we call this type of verification multifactor authentication, and it is broken down into three basic categories or factors:
Something you know (e.g., a password, a birthdate, mom's maiden name)
Something you are (e.g., face scan, thumbprint scan, voice print)
Something you have (e.g., a key, a phone, a photo ID)
Of course, most of us encounter multifactor authentication when we try to log in to a computer or website and also have to enter a code from a text message. Multifactor authentication feels normal, even if tedious, when dealing with computer systems or smartphones.
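Whether the code arrives by text message or comes from an authenticator app, it works because only your device (something you have) can receive or produce it. Authenticator-app codes usually follow the time-based one-time password (TOTP) standard. For the curious, here is a minimal Python sketch of that algorithm (RFC 6238); the secret shown is a well-known demo value, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # current 30-second time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Your phone and the server derive the same six digits from a shared
# secret, so a matching code is evidence of "something you have."
print(totp("JBSWY3DPEHPK3PXP"))  # demo secret, not a real credential
```

Because the code changes every 30 seconds and depends on a secret only your device holds, a mimic who has merely stolen your password still can't produce it.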
But applying multifactor authentication to other humans, including our loved ones, is socially awkward and hard to remember to do, especially when the mimic creates a sense of urgency. Consider this deepfake audio that I made of myself two years ago using audio of my talks that I found on the web (you might have to turn up the volume).
Not only would a scammer be able to make a better mimic of my voice with today’s technology, but they would probably use a script that included a handoff to another person: "Mom, I can't talk right now, but I was just arrested, and I need you to talk to this lawyer since they are processing me."
Technology such as digital signatures can help restore trust in our technology-mediated communication systems, but it will take a long time for the infrastructure, standards, and policy to catch up.
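To give a concrete flavor of the idea, here is a minimal sketch of signing and verifying a message, using Python's third-party cryptography package and Ed25519 keys; the library, the key type, and the message are my choices for illustration, not any particular emerging standard:

```python
# pip install cryptography  (third-party package)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The sender creates a key pair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Wire the funds to account 12345"  # hypothetical message
signature = private_key.sign(message)         # only the private key can sign

# Anyone holding the public key can check the signature.
try:
    public_key.verify(signature, message)
    print("Valid: sent by the key holder and unmodified")
except InvalidSignature:
    print("Invalid: forged or altered")

# A mimic who changes even one byte is caught.
try:
    public_key.verify(signature, b"Wire the funds to account 99999")
except InvalidSignature:
    print("Tampered message rejected")
```

The mathematics is the easy part; the slow part is building the infrastructure for distributing and trusting public keys across every phone call, video chat, and email client.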
For now, we are at an odd juncture in the digital transformation of our planet, a period in which we all live in a "zero-trust" world, whether we recognize it or not. If we want to identify digital mimics that might be slithering into our living space, we will have to learn to be comfortable with verifying the identity of our colleagues, friends, and even enemies.
Just like animals that have specific calls or behaviors to identify members of their own species or group, you can develop a code word or challenge question that only you and your family or office mates know. Challenging someone’s identity is not in our nature, but it needs to be the default for anyone who uses technology to receive or communicate financial orders or transfers.
Practicing "digital camouflage,” by limiting the amount of our personal information that is available online, can make it harder for scammers to create convincing deepfakes or impersonations. While you might not be able to make yourself “unfakable” you can certainly make yourself harder to target.
Here are some guides if you want to learn more:
National CyberSecurity Alliance: How to Protect Yourself Against Deepfakes
The Statement: Keeping up with scammers: Deepfake voice fraud
Kaspersky: Deepfake and Fake Videos - How to Protect Yourself?
Nextcloud: How to protect yourself against deepfake scams in video calls
Note: You may have noticed that this post relies heavily on analogies to the biological world. I have been learning how to take inspiration from the biological world by (re)reading the book “Bioinspired Strategic Design” by Daniel J. Finkenstadt and Tojin T. Eapen. You can order it and find more information here.
My commentary may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. I ask that you edit only for style or to shorten, provide proper attribution, and link to my contact information.
📥 Recent Talks, News and Updates
I gave a talk for the League of Women Voters and Daniel Boone Library: “Before You Vote: Artificial Intelligence, the Elections and Civic Dialogue.”
The University of Missouri is suggesting a number of experts on AI for chatting with journalists and podcasters, including me!
📆 Upcoming Talks/Classes 👨‍🏫
I will be talking at the Workshop on Emerging Technologies for Digitalization at the Asia-Pacific Economic Cooperation meeting in Lima, Peru, on August 12 (TODAY). More information will be available on the APEC Peru website.
I will be presenting “Managing the Learning Machine” at 8:00 AM on September 10th for the MU Retiree’s Association (In Person and Zoom). More information and Registration will be available on MU Retiree’s Association website.
My friend and colleague, Sophia Rivera Hassemer, is teaching “Technology Potpourri” for Osher on Sept 12, 19, 26, and Oct 3 from 9:30 to 11am, and I will be her assistant! It will be in person only at the Moss building, and will be very hands-on with technology. More information and Registration will be available on the Osher website.
I will give a talk on Artificial Intelligence and The Elections on Tuesday, September 10, 6:30pm - 8:00pm at the Missouri River Regional Library in Jefferson City. More information is available on the Missouri River Regional Library website.
I will present “Harnessing AI for Nonprofit Growth” from 10:45 to 11:45 a.m. on November 7 via Zoom. More information and Registration will be available on the New Chapter Coaching website.
I will present “AI: Current Trends and Future Directions” for the Mid-Missouri PMI Chapter on November 12th at 7:30am via Zoom. Registration will be available on the PMI Mid-MO Chapter's website.