Artificial intelligence (AI) has proven useful in the fight against COVID-19. It has helped amalgamate research to highlight promising areas of study. Authorities have used it for contact tracing, and it's been investigated as a diagnostic tool as well. Unfortunately, AI is also playing a role in burdening the health industry at a time when it needs all the help it can get.
In this post, we’ll look at how deepfakes could place an additional strain on overburdened health resources.
What Are Deepfakes?
"Deepfake" is a term coined to describe content, mostly images and videos, altered by AI. The AI uses an existing piece of content as a reference, mapping different features and points within it.
The person using the software creates a new piece of content. The AI then alters the new content so that it matches the features within the original piece as closely as possible. The goal is to superimpose the new content over the old, matching the different points so that the new content appears almost seamless.
We most commonly see this technology being used to superimpose a new face over old footage or to make someone appear to say something that they never actually said.
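To make the feature-mapping idea above concrete, here is a minimal Python sketch of a landmark-based face swap, the classical precursor to modern deepfakes. It assumes dlib's publicly available 68-point landmark model and two hypothetical image files; genuine deepfake tools replace this hand-built warping with trained neural networks (autoencoders or GANs), but the "match the points, then superimpose" step is the same in spirit.

```python
# A toy sketch of the "map points, then superimpose" idea described above,
# using classical landmark detection (dlib) and OpenCV blending. Real
# deepfakes use trained neural networks; this only illustrates the
# point-matching step. The model file and image names are assumptions.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# dlib's 68-point landmark model, downloaded separately
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_landmarks(image):
    """Return the 68 (x, y) landmark points for the first face found."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        raise ValueError("no face detected")
    shape = predictor(gray, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)

def superimpose_face(source, target):
    """Warp the source face onto the target by aligning landmark points."""
    src_pts = face_landmarks(source)
    dst_pts = face_landmarks(target)

    # Estimate a transform mapping the source's landmarks onto the
    # target's -- the "matching different points" step from the text.
    matrix, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)
    warped = cv2.warpAffine(source, matrix, (target.shape[1], target.shape[0]))

    # Blend the warped face region into the target frame so the
    # superimposed content appears almost seamless.
    mask = np.zeros(target.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(dst_pts.astype(np.int32)), 255)
    center = (int(dst_pts[:, 0].mean()), int(dst_pts[:, 1].mean()))
    return cv2.seamlessClone(warped, target, mask, center, cv2.NORMAL_CLONE)

result = superimpose_face(cv2.imread("source.jpg"), cv2.imread("target.jpg"))
cv2.imwrite("swapped.jpg", result)
```

Even this crude approach shows why the point matching matters: the better the landmark alignment, the more seamless the superimposed face appears.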
What’s the Point?
The point is that you can use this technique to create convincing footage. The movie industry, for example, uses this technology to fill in the background of certain footage, or to superimpose the face of a deceased actor over already filmed content.
This technology was used, albeit relatively ineffectively, to allow a young Princess Leia to make an appearance in Rogue One: A Star Wars Story. While fans cried foul on that one, it showed that deepfakes did have potential on a movie set.
Why Could This Prove Dangerous?
Of more concern, however, is the use of deepfakes to spread fake news. This clip of Barack Obama, produced by BuzzFeed and starring actor Jordan Peele, highlights the potential danger. It was created to demonstrate exactly how deepfake technology could be used to spread false news.
If you watch the entire video with rapt attention, you'll notice that something isn't quite right. But if all you see is the altered clip, you likely won't realize it's not real. Just a couple of seconds of footage played across your phone will seem pretty convincing at first watch.
Therein lies the danger. Fake videos like these have been used to discredit public figures and to spread disinformation.
Why Are Deepfakes Dangerous for the Healthcare Industry?
First and foremost, fake news is dangerous in itself simply because it spreads misinformation. Fake videos claiming that alcohol, excessive heat, or excessive cold kills the coronavirus, for example, have already proved damaging. There's no scientific evidence to support any of these claims, yet they all went viral.
Misinformation is also dangerous in how it shapes people's reactions to the control measures instituted by regulators. When anti-quarantine protestors blocked access to hospitals, preventing healthcare personnel from getting to and from work, the protests stemmed from a genuine movement, not deepfakes. But it's not hard to imagine someone harnessing this technology to engineer a repeat performance of the chaos.
What’s Worse Than Fake News for the Healthcare Industry?
What’s proving most damaging for the healthcare industry, perhaps, is that criminals are using deepfakes as clickbait for sites loaded with malware. Fake videos have the power to go viral, particularly when everyone is panicking about the virus.
Cybercriminals are harnessing the power of this technology to reel in more victims. The typical modus operandi is to post a thumbnail of the video with a link. The thumbnail and heading make victims curious enough about the video's content to click through.
As soon as they navigate to the site, their computer is exposed to malware such as ransomware, keyloggers, or spyware. If they don’t have adequate cybersecurity in place, their computer is infected and they have to deal with the fallout.
Now, most people know better than to click through to an unknown site. But that doesn't stop bad actors from launching more targeted attacks.
Consider this scenario for a moment. You receive a message, seemingly from a reputable organization like the World Health Organization or your country's regulators, regarding contact tracing for COVID-19. It displays a thumbnail image of your front desk, with a blurry figure that looks like one of your colleagues. Wouldn't you be tempted to check the link?
Final Notes
Bad actors use a range of different tactics to get their victims to click through to malicious links or give up access information. While some hackers and ransomware groups have declared the healthcare industry off-limits during this crisis, others are taking full advantage of the situation.
What hospital can afford to have ransomware clogging up its vital systems during this time? How many can risk the fallout of the names of COVID-19 positive patients being leaked? Deepfakes are just one more thing placing more stress on already-overburdened healthcare systems.
Note: This blog article was written by a guest contributor for the purpose of offering a wider variety of content for our readers. The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of GlobalSign.