Can Facebook And Google Detect And Stop Deepfakes?

Deepfakes have started
to appear everywhere. From viral celebrity face swaps to
impersonations of political leaders, it can be hard to spot the
difference now between real and fake. We’re entering an era in which our enemies
can make it look like anyone is saying anything at any point in time. And the digital impressions are
starting to have real financial repercussions. In the U.S., an audio deepfake of a CEO reportedly
scammed one company out of 10 million dollars. In the UK, an energy
firm was tricked into a fraudulent transfer of 220,000 euros. And with the 2020 election not far
off, there is huge potential for weaponized deepfakes on social media. Now tech giants like Google, Twitter,
Facebook and Microsoft are stepping up. With Facebook spending more than
10 million dollars to fight deepfakes, what’s at stake for businesses
and what’s being done to detect and regulate them? One of the
most well known deepfake creators is Dr. Fakenstein, or Jeff White. He walked us through the process. So let’s say we’ve got
source footage of Trump. I am concerned I wouldn’t want
to see a violent crackdown. And I have my destination
footage: The Little Rascals. This column here is Trump’s
face over the Little Rascals. Essentially a deepfake is an A.I. generated video or audio clip of a
real person doing and saying fictional things. A computer uses a deep
neural network, hence the term “deepfake,” to learn the movements or sounds
of two different recordings and combine them in a realistic way. And there is an emerging
market for creators of deepfakes. I’ve had hundreds of work requests since
I put out some of my videos. Most people just want to have a
laugh and be entertained by it. White creates videos for
Jimmy Kimmel Live. Ladies and gentlemen, all rise for
Brooklyn Heights. And next month, he’s quitting his day job at a dairy
farm to create deepfakes full time. A handful of well known creators like
him use it solely for satire. It can be used for
more publicly available fun, too. Like with Berkeley’s Everybody
Dance Now project. The technology is nothing new. It’s part of how major Hollywood
studios include actors in critical roles after they’ve died. Lord Vader
will handle the fleet. It helps gaming companies let
players control their favorite athletes. What is new: the process
has become cheaper and easier. Toward the beginning of 2018, a
couple of important projects, one of them known as FakeApp and the
other known as Faceswap, became readily available. And in the last year, year
and a half, the pace of innovation around the creation of deepfakes
has accelerated quite a bit. And you can just go online and
you can find tools, download them. All I have to do to create a deepfake
at this point in time is to click 10 times. Whoa! It’s so easy that Symantec made a
deepfake of me without much notice before our interview. The amount of material
that you need to create these things is 10 to 20 seconds of
video and maybe a minute of audio. This wouldn’t even be happening. This is something that takes, you
know, not a lot of source material. But just imagine if you had
hours and hours and hours of video at hand, we could create a really
good likeness of you from different angles, saying different things. So what’s at stake for companies or
individuals when seeing or hearing is no longer believing? The fact that
people can see something like this, believe that it’s true and collectively the
markets can react to it is a huge concern for people. The newest attack that we are seeing,
which people had not anticipated, is in some sense the use of
audio deepfakes to cause financial scams. I would like to transfer
five thousand dollars to Adam. In the last few months, security
firm Symantec says it’s seen three attempted scams at companies involving
audio deepfakes of top executives. In one case, the company lost 10
million dollars after a phone call where a deepfake audio of the CEO
requested a transfer of the money. The perpetrators still
haven’t been caught. Here’s a station you might like. It’s no different than what you would
see powering something like Alexa or some other products like that. And the CEO will answer whatever question
you have because they can be created on the fly. But deepfakes
also offer profit opportunities to some companies. You’d have an actor that
would license their likeness and then at a very low cost, the studio
could produce all kinds of marketing materials with their likeness without having
to go through the same level of production that they
do today, you know. And so I can imagine lots and lots
of audio being produced using the voice of an actor, the voice of some
other VIP and all of that being monetized. Just like we practiced. Ready? Now Amazon is doing just that. Today in Los Angeles,
it’s 85 degrees. Say my name. Woohoo. It announced in
September 2019 that Alexa devices can speak with the voice of
celebrities like Samuel L. Jackson. On Instagram, A.I. generated influencers like lilmiquela are
backed by Silicon Valley money. The deepfake videos of these virtual
influencers bring in millions of followers, which means a lot of
potential revenue without having to pay talent to perform. And in China,
a government backed media outlet introduced a virtual news anchor who
can work 24 hours a day. I will work tirelessly to keep you informed
as texts will be typed into my system uninterrupted. But the potential for misuse is high. So one of the most insidious uses of
deepfakes is in what we call revenge porn or pornography that somebody puts out
to get back at somebody who they believe wronged them. This
also happens with celebrities. But certainly things like this that
would ruin the reputation of a celebrity or somebody else in the public eye
are going to be top of mind for these social media companies. Also top of mind for social
media companies: the 2020 elections. Researchers expect that in the 2020
election, deepfakes will probably be deployed. Will they be deployed by
a foreign nation looking to cause instability? That’s possible. And that could be significant. In that case, you would have
a candidate saying something totally outrageous. I own one pair of underwear. Or something that inflames the markets
or something that puts their chances of being elected in question. There is also a concern about faking
words from leaders of countries, from leaders of organizations like the IMF
that would have a significant consequence, even if it was short term,
on markets and even on global stability in terms of conflict. That he was engaged in a cover up. In May, House Speaker Nancy
Pelosi accused Facebook of allowing disinformation to spread when the company
refused to take down a manipulated video of her. In response, Facebook updated its
content review policies, doubling down on its refusal to remove deepfakes. Two British artists tested Facebook’s resolve
by posting a deepfake of CEO Mark Zuckerberg on Instagram. Whoever controls the data
controls the future. Facebook held its ground, refusing to
remove it along with other deepfakes like those featuring Kim
Kardashian and President Trump. I pulled off the biggest heist of
the century and people just have no idea. Now there’s a question,
though, about whether misinformation, whether these deepfakes are actually
just a completely different category of thing from normal kind
of false statements overall. And I think that there’s a
very good case that they are. Now Facebook is trying to get ahead
of deepfakes before they make it on its platforms. It’s spending more than
10 million dollars and partnering with Microsoft to launch a Deepfake Detection
Challenge at the end of the year. Facebook itself will create deepfakes with
paid actors to be used in the challenge. Then pre-screened participants
will compete for financial prizes to create new open source
tools for detecting which videos are fake. Twitter told CNBC it challenges eight to
10 million accounts per week for policy violations, which includes the use
of Twitter to mislead others. As a uniquely open service,
Twitter enables the clarification of falsehoods in real time. We proactively enforce policies and use
technology to halt the spread of content propagated through
manipulative tactics. And it recently acquired a
London-based startup called Fabula A.I., which has a patented A.I. system it calls geometric deep learning
that uses algorithms to detect the spread of misinformation online. At YouTube, which is owned
by Google, community guidelines prohibit deceptive practices and videos are
regularly removed for violating these guidelines. Google launched a program last
year to advance detection of fake audio specifically, including its
own automatic speaker verification spoof challenge, inviting researchers
to submit countermeasures against fake speech. One small cybersecurity company
has already launched an open source tool that’s helping create
algorithms to detect deepfakes. The way our platform works is we’re
pulling in billions of pieces of content on a monthly basis. Text, images, video, all
kinds of stuff. And so in this case, as the
video flows through our platform, we’ll now route it through deepfake detection
that says like, deepfake, not deepfake, and if it is
a deepfake, alert our customers. Baltimore based ZeroFOX intends to be the
first to have customers pay to be alerted of deepfakes. Meanwhile, academic institutions and the
government are working on other solutions. Another approach is to put
a registry out there where people can register their authentic content and
then other people can check with the registry to see if that content
is, in fact, authentic or not. The Pentagon’s research arm,
the Defense Advanced Research Projects Agency, or DARPA, is fighting
deepfakes by first creating its own and then developing technology
that can spot them. The people who are defending us
against deepfakes are using A.I. just as much as the people who
are creating them are using A.I. It’s just that those who are creating
deepfakes seem to have a running start on this. I don’t think that
there’s a silver bullet yet developed. The technology is really only
one or two years old. And so we’re at an early stage. And at least until Facebook
announced monetary prizes, the business potential on the detection
side was small. There’s not really a market segment
for deepfake protection that’s mature yet. The tech is new. The threat landscape is just
beginning to emerge. So we’re the first or amongst the
first in terms of companies to develop and ship the technology around this. One of the best things about the Facebook
challenge is that it brings in a lot of people who probably weren’t
interested in this technology to try and work on it, and I think what
we really need in the space with deepfakes is finding something that is
novel that we haven’t thought of before that works
for detecting these. Even if you can detect deepfakes,
there is currently little legal recourse that can be taken to stop them. For the most part, they are legal
to research and create at this point. Can you do an impression of
him? It’s going to be great. For the most part, if you’re a
public person, you really don’t own the rights to your public appearances or
even videos taken without your consent in public. I genuinely love
the process of manipulating people online for money. There are some things
that are unclear about the law, but for the most part, this also
applies to just regular people who post videos of themselves on
Facebook and YouTube. But if your image is used in
adult content, it’s likely illegal in states like California, where revenge porn is punishable
by up to six months in jail and a $1,000 fine. So a lot of porn websites, for
example, have declared that they are not going to allow hosting
or uploading of deepfake-based porn. You can call me Sam. In China, a
deepfake app that allowed users to graft their faces onto popular
movie clips went viral. But it was shut down earlier this
month over privacy concerns because the app maintained the rights
to users’ images. In June, New York Congresswoman
Yvette Clarke introduced the DEEPFAKES Accountability Act in the House. It would require creators to disclose
when a video was altered or generated and allow victims to sue. There is no way that a law
like this will enforce a lawsuit against somebody who is sitting in a country
in Eastern Europe or anywhere else across the globe that has already proven
to be hostile to the U.S. when it comes to enforcing
our laws around cybersecurity. I don’t like people passing
off videos as real. My stuff’s clearly satire. I don’t think anyone is
mistaking my stuff for real. Satirical deepfake creators like Dr. Fakenstein take ownership of their work
and are even monetizing it. But that’s different for
those with malicious intent. If you’re intent on publishing a deepfake
and not having it traced back to you, there are plenty of ways
that you can remain anonymous. For better or worse, deepfakes
are only getting more refined. The challenge will be whether the
technology to detect and prevent them can keep up. I think that the
people who are creating deepfakes for nefarious reasons are way
ahead of us. I think that they have access to A.I. that is more advanced than what we
have working on the solution side and certainly access to more resources than we
have so far given people to fight against the problem. Hopefully that will change with
what’s taking place now. May sound basic, but how we move
forward in the age of information is going to be the difference between whether
we survive or whether we become some kind of dystopia. God bless you, your families, our
children, and God bless the United States of America.
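
The face-swap pipeline the transcript describes, a network that learns two different recordings and combines them, is classically built as one shared encoder with a separate decoder per subject. The toy sketch below shows only the shape of that idea: the “faces” are random vectors, every layer is a single linear map, and all dimensions, names, and data are invented for illustration, not taken from any real deepfake tool.

```python
import numpy as np

# Toy sketch of the classic face-swap setup: one shared encoder learns
# features common to both subjects, and each subject gets its own
# decoder. Swapping = encode subject A's face, decode with B's decoder.

rng = np.random.default_rng(0)
DIM, LATENT, LR = 16, 4, 0.02

enc = rng.normal(0, 0.1, (LATENT, DIM))    # shared encoder
dec_a = rng.normal(0, 0.1, (DIM, LATENT))  # decoder for subject A
dec_b = rng.normal(0, 0.1, (DIM, LATENT))  # decoder for subject B

faces_a = rng.normal(size=(32, DIM))       # stand-ins for A's face crops
faces_b = rng.normal(size=(32, DIM))       # stand-ins for B's face crops

def step(x, dec):
    """One reconstruction-loss gradient step; returns loss and gradients."""
    z = x @ enc.T                          # encode into the shared space
    err = z @ dec.T - x                    # decode and compare to input
    loss = float((err ** 2).mean())
    g_dec = 2 * err.T @ z / len(x)
    g_enc = 2 * (err @ dec).T @ x / len(x)
    return loss, g_enc, g_dec

losses = []
for _ in range(300):
    la, ga_enc, ga_dec = step(faces_a, dec_a)
    enc -= LR * ga_enc
    dec_a -= LR * ga_dec
    lb, gb_enc, gb_dec = step(faces_b, dec_b)
    enc -= LR * gb_enc
    dec_b -= LR * gb_dec
    losses.append(la + lb)

# The "deepfake": A's faces pushed through B's decoder.
swapped = (faces_a @ enc.T) @ dec_b.T
```

Because the encoder is shared, it is forced to learn features common to both subjects (expression, pose), while each decoder learns one subject’s appearance, which is what makes the swap look like B performing A’s movements. Real tools replace the linear maps with deep convolutional networks trained on thousands of face crops.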

62 thoughts on “Can Facebook And Google Detect And Stop Deepfakes?”

  1. This is FAKE NEWS. Facial recognition, audio recognition, and consumer level fingerprint scanning will be reliable gatekeepers in the next 15 years or so but not right now. The technology needs more time to be perfected. Maybe in the next 15 years people won't even carry cell phones in their pockets. Instead, EVERYONE will have a "smart watch" like Apple's and their cell phones will just be used as a device manager. I don't think anyone will even use a bluetooth headset by that time. They will probably just turn the watch volume all the way up and use speaker phone. People already do this with their car stereos. They connect the cell phone to it and blast the whole conversation for the world to hear.

  2. Now this makes me question how valid video and audio evidence, or even confessions, are in court anymore. Because at this point basically every piece of evidence can be faked, and if you’re in a country that uses shady criminal investigation methods…

  3. Now that's a garGANtuan task for the tech companies. Only people familiar with AI algorithms understand the inner meaning of this comment:P

  4. Let me put it like this: deepfake-detecting software is also a deepfake trainer.
    The idea that you can create software that can detect deepfakes, and that it will not be used to improve deepfakes, is so dumb that I can’t even start on how dumb it is…

  5. "DeepFake detection programs"? Do you really not understand how deep neural networks work? By creating a very accurate DeepFake detector you will help DeepFakes become EVEN MORE realistic. Because this is not just a simple algorithm – this is an AI that can learn by itself. The DeepFake detector can be used as a discriminator in a Generative Adversarial Network, so the generator will learn how to create DeepFakes that can fool even this program. One day DeepFakes will become COMPLETELY indistinguishable from real videos / photos. And by creating these kinds of programs, you are helping that day come MUCH sooner. Maybe we already have only 10 years left.

  6. Simple: every piece of information can be stamped with a blockchain-like token to prove its source.
    In fact, a coin could be made with its value tied to the verifiability of source information.

  7. The cheapest solution is to put a yellow sticker on the faces of celebrities and change the color gradually. I think it will glitch the deepfake.

  8. they said radio was dangerous,
    they said television was dangerous,
    they said computer was dangerous,
    they said photoshop was dangerous

  9. There’s already a white paper out with a 99% detection rate. But it’s not from an American, so I’m sure our government will wait till they can give the money to an American related to some politician, because this country is the country of nepotism.

  10. Perfect way to report your income as “stolen.”
    Those 10 million weren’t stolen. They were hidden to avoid paying taxes.

  11. The better deepfakes can be detected, the better the deepfakes will get. There is no keeping up with it. It’s about time people stop taking others so seriously, grow up, and not be offended. For all we know, some of the most well-known people are deepfaked.

  12. Preventing deepfakes is like banning plastic surgery or being transgender. Especially transgender, because it’s like defrauding identity.

  13. Now that it has become so easy to fake faces and voices, nobody will trust anybody until people meet in reality. So eventually this will kind of make us go backwards in time, because we won’t be relying anymore on this effortless global communication system.

  14. Meanwhile, a few weeks ago, a bank offered to allow me to log in to my account over the phone using my voice as a password. Over a compressed phone line, no less.

  15. I think the real question is can we trust Facebook and Google to tell us if it is a Deepfake or not.
    "Google & Facebook say it's true, so it must be. Besides, I saw it with my own eyes on Youtube."

  16. The problem is that those who create deepfakes also use deep A.I. to improve the deepfakes themselves, not to mention always improving how deepfakes work and learn.

    At the moment, a human can spot them with some ease by looking closely at such things as a person’s mouth when speaking.
    This will, of course, become so much more difficult over time.

    Then there are the deepfake voices, which listen to a person’s voice many times, and when the system has learned enough, a person can then type whatever they wish the person to say and it sounds exactly like that person speaking.
    Over time I’m sure that will become something where one person’s voice can be translated into another.

    This kind of technology has its good points and good uses but, as with anything, it can be used for bad things too.

    What we are seeing is something I have been saying all along when it comes to A.I.
    It won’t be something we have to fear, as it will be A.I. fighting against or competing against other A.I. for survival or dominance.

    We are still far from A.I. that can think as we do and be better than we are in many ways.
    I won’t go into those now, as this comment is already way longer than I’d wished it to be.
    There are ways that it could be very much like a human, even alive, but there are still mistakes and things people aren’t thinking about too, and I won’t mention them, for many reasons.

  17. We all know those who have the antidote are the ones creating the virus in the first place… deepfake video creators probably have the software that can identify it.
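
The adversarial dynamic raised in comment 5, that any released detector becomes a training signal for forgers, can be shown with a toy example. Everything below is invented for illustration: the “detector” is a fixed differentiable score that rates samples near 1.0 as real, and the “generator” is a single number nudged by gradient ascent until the detector is fooled; real GANs do the same with deep networks over images.

```python
import numpy as np

def detector(x):
    """Score in (0, 1): the probability this toy detector assigns to 'real'.
    Real data is assumed to sit near x = 1.0."""
    return 1.0 / (1.0 + np.exp(4 * (x - 1.0) ** 2 - 2))

fake = 0.0                       # generator output, initially off-target
scores = [detector(fake)]
for _ in range(500):
    # numerical gradient of the detector's score w.r.t. the fake sample
    grad = (detector(fake + 1e-4) - detector(fake - 1e-4)) / 2e-4
    fake += 0.05 * grad          # adapt the fake to raise the score
    scores.append(detector(fake))

# The fake migrates to where the detector says "real" (near 1.0),
# which is exactly why publishing a detector helps the forger.
```

After training, `fake` sits near 1.0 and the detector rates it as probably real, even though it started far from the real data. This is the Generative Adversarial Network loop in miniature: the better the discriminator, the better the generator it trains.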
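
The registry approach mentioned in the transcript and the stamping idea in comment 6 reduce to the same mechanism: publish a fingerprint of authentic content, then check received bytes against it. A minimal sketch using SHA-256 follows; the in-memory dict, the `register`/`verify` names, and the sample publisher entry are all invented for illustration, standing in for whatever ledger (blockchain or otherwise) would actually hold the records.

```python
import hashlib

# Map from content fingerprint to the publisher who registered it.
registry = {}

def register(publisher, content):
    """Record the SHA-256 fingerprint of authentic media; return the digest."""
    digest = hashlib.sha256(content).hexdigest()
    registry[digest] = publisher
    return digest

def verify(content):
    """Return the registered publisher, or None if the bytes don't match
    any registered original (i.e., the content is altered or unknown)."""
    return registry.get(hashlib.sha256(content).hexdigest())

original = b"authentic interview footage"
register("CNBC", original)
verify(original)             # matches the registered publisher
verify(b"tampered footage")  # None: any alteration changes the hash
```

The limitation is the flip side of its strength: because a single changed byte changes the hash, legitimate re-encodes and crops also fail verification, so real systems pair exact hashes with perceptual fingerprints or signed metadata.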
