Saturday 24 October 2020

053 – Is seeing believing? Deepfakes and the information apocalypse


When you watch the news these days, do you trust your eyes and ears? Do you think what you’re seeing is real and happened the way it is being shown? Or is your first reaction to think: Hmm, I wonder if this video is fake? That’s what today’s episode is about, so stay tuned.

TRANSCRIPT PREVIEW – Get the full transcript here: https://ift.tt/3jxNmFq

Before we get started, I hope you’ll indulge me in a little Better at English background info. I don’t do Better at English for the money, but some of you have been going out of your way to send me thank-you gifts. So thank you so much to Charles for his very generous PayPal donation, and to the mystery person who sent me the Handbook of Self-Determination Research from my Amazon wish list. I honestly didn’t know that it was even possible to find my Amazon wish list anymore, so getting a mystery book delivered was a real surprise! I’d also like to thank Zhuo Tao (I hope I’m saying that right), who wrote my favorite review this month. It goes like this: “This podcast is getting better and better by every episode. It’s no longer just some language learning material, but food for thought as well.” That is indeed what I’m trying to do, so it was really nice to get that feedback.

You can say thanks here: https://ift.tt/34o8BF2

All feedback from you lovely listeners, whether it’s reviews, email, voice messages, donations, Amazon wish list gifts…it’s all positive feedback that fuels my motivation to keep doing this…it’s a sign that you’re getting value from the episodes, which is what it’s all about. So thank you!

OK, thank you for indulging me…let’s get on with today’s topic.

Deepfakes and the Information Apocalypse
Today we’re looking at misinformation and disinformation in our modern age, and how technologies like deepfakes are making it increasingly hard for us to know what is really happening in the world and to separate fact from fiction. This episode builds on my earlier episodes about AI—that’s artificial intelligence—which you can find further down the podcast feed as episodes 47 and 48.

Before we go any further, take a moment to ask yourself how much you trust what you see, hear, and read these days, whether it’s online, in a newspaper, or coming from an expert or politician in a live televised address. Is seeing believing, as the expression goes? Go ahead, think of some recent examples that are personally relevant to you. Now ask yourself how your beliefs about what is true influence your actions, how much they shape what you actually do as you move through life. How do these beliefs influence, for example, who you vote for, what you buy, what you eat, which books you read, which car you drive?

You don’t need a Ph.D. in psychology to understand that our beliefs about what is true or false affect our actions. Nobody wants to make decisions based on lies or misinformation, so we all want information that we can trust. Just to give a current example, look at what’s happening regarding masks and the coronavirus. If you think masks do help stop the spread and protect others, you’re likely to wear one even though they are uncomfortable and it’s kind of a pain in the butt. And if you think masks don’t help at all, you are more likely to resist wearing a mask or even flat-out refuse. I mean, why bother if they don’t work, right? And if you have really strong beliefs about this, you might even march in protest against the rules that require you to wear a mask. The point is, your chosen path will be based on what you believe is right and true.

We are living in a pretty crazy time right now, and humanity is facing huge challenges. And it’s no secret that many of the big issues are extremely polarizing. And if you try to build an informed opinion by examining the information and arguments of both sides, you make a frustrating discovery, or at least I did:
Both sides seem to have completely different interpretations of facts and reality. And each side believes it sees things correctly and the other side is hallucinating. Or crazy. Or just plain evil.

To borrow an analogy from the author Scott Adams, it’s as if we’re all watching the same movie screen, but we’re seeing two completely different movies at the same time. And each of us is convinced that our movie is the truth.

This “two movies on one screen” phenomenon is already happening with events that we all can agree really happened. We might not agree on what these events mean, who is responsible, what should be done about them etc., but we basically accept that they actually occurred in the physical world. Seeing is believing, right?

But what happens when the things we are seeing and hearing, the video, audio, and photos are 100% fake? And what happens when these fakes are everywhere? If we can have such serious disagreements, such polarization about genuine, real events, what is going to happen when we truly can’t be sure if the media we are seeing and hearing is real?

Some experts think that this is going to happen really soon. We are entering the era of deepfakes. If you’re not sure what a deepfake is, you will understand it by the end of this episode.

You are going to hear experts discussing this topic in English, and I’ll pop in from time to time to guide you through the examples.

The link to the full transcript of this episode is in the show notes, and there are also links to the audio you hear and the examples that the speakers mention, like videos, websites, books, etc. So if you find this topic interesting, there is plenty of supplementary material so you can learn more.

All right, let’s get started. First of all, what is a deepfake?

Nina
So a deepfake is a type of synthetic media. And what synthetic media essentially is, is any type of media, it can be an image, it can be a video, it can be text, that is generated by AI.

Lori
That was Nina Schick, who is, to put it mildly, a pretty impressive woman with a very interesting background:

Nina
I’m half German, and I’m half Nepalese. And so I have this background in geopolitics, politics, and information warfare. And my area of interest is really how the exponential changes in technology, and particularly in AI, are rewriting not only politics, but society at large as well.

Lori
She’s also proficient in seven languages. All I can say is, wow. You’ll now hear Nina talking to Sam Harris about deepfakes. It’s from an episode of Sam Harris’s podcast “Making Sense,” which is another great podcast for you to add to your list of interesting podcasts in English. Here we go:

Sam
So much of this is a matter of our entertaining ourselves into a kind of collective madness, and what seems like it could be a coming social collapse. I realize that if you’re not in touch with these trends, you know, if anyone in the audience isn’t, this kind of language coming from me, or anyone else, can sound hyperbolic. But we’re really going over some kind of precipice here, with respect to our ability to understand what’s going on in the world, and to converge on a common picture of a shared reality. [EDIT] And again, we built the very tools of our derangement ourselves. In particular, I’m talking about social media here. So your book goes into this, and it’s organized around this new piece of technology that we call deepfakes. And the book is Deepfakes: The Coming Infocalypse, which, umm, that’s not your coinage. On the page it’s very easy to parse; when you say it, it’s hard to understand what’s being said, but really, you’re talking about an information apocalypse. Just remind people what deepfakes are, and suggest what’s at stake here in terms of how difficult it could be to make sense of our world in the presence of this technology.

Nina
This ability of AI to generate fake or synthetic media is really, really nascent. We’re only at the very, very beginning of the synthetic media revolution. It was probably only in about the last four or five years that this has been possible, and only in the last two years that we’ve been seeing the real-world applications of this leaking out from beyond the AI research community. So the first thing to say about synthetic media is that it is completely going to transform how we perceive the world, because in the future, all media is going to be synthetic. It means that anybody can create content to a degree of fidelity that is only possible for Hollywood studios right now, right? And they can do this for little to no cost, using apps or software, various interfaces, which will make it so accessible to anyone. And that’s one reason why this is so interesting.

Nina
Another reason why synthetic media is so interesting is that until now, even the best computer effects, CGI, still can’t quite get humans right. So when you use CGI to do effects where you’re trying to create humans, they still look slightly robotic; it doesn’t look right. It’s called, you know, the uncanny valley. But it turns out that when you train machine learning systems with enough data, AI is really, really good at generating fake or synthetic humans. When it comes to generating fake human faces, still images, it has already perfected that, and if you want to test that you can go and look at thispersondoesnotexist.com. Every time you refresh the page, you’ll see a new human face that, to the human eye, to you or me, Sam, will look like an authentic human, whereas it is just something that’s generated by AI. That human literally doesn’t exist. And it’s also happening, increasingly, in other types of media like audio and film. So I could take essentially a clip of a recording of you, Sam, and I could use that to train my machine learning system and then synthesize your voice. I can literally hijack your biometrics: I can take your voice, synthesize it, and get my machine learning system to recreate it. I can do the same with your digital likeness.

Obviously, this is going to have tremendous commercial applications; entire industries are going to be transformed. For example, corporate communications, advertising, the future of all movies, video games. But this is also the most potent form of mis- and disinformation, and it is being democratized for almost anyone in the world at a time when our information ecosystem has already become increasingly dangerous and corrupt. [EDIT] So we have to distinguish between the legitimate use cases of synthetic media and how we draw the line. Very broad-brush, I say in my book that the use of, and the intent behind, synthetic media really matters in how we define it. So I refer to a deepfake as a piece of synthetic media that is used as mis- or disinformation. And, you know, there is so much more that you could delve into there with regard to the ethical implications and the taxonomy. But broadly speaking, that’s how I define it, and that’s how I distinguish between synthetic media and deepfakes.
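Editor’s note: for readers who want to see, in concrete terms, the idea Nina describes of training a machine learning system on enough data that it can generate convincing fakes, the following is a minimal, illustrative sketch of a generative adversarial network (GAN), the family of models behind sites like thispersondoesnotexist.com. This is not the code of any tool mentioned in the episode: the tiny fully-connected networks, the MNIST handwritten-digit dataset, and the training settings are simplified assumptions chosen only to keep the example short and runnable.

# Minimal GAN sketch in Python (PyTorch): a generator learns to turn random
# noise into images, while a discriminator learns to tell real images from
# generated ones. Trained against each other, the generator's fakes improve.
# Illustrative only: tiny networks and MNIST digits instead of faces.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

LATENT_DIM = 64        # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28      # flattened 28x28 grayscale image
device = "cuda" if torch.cuda.is_available() else "cpu"

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),     # outputs in [-1, 1]
).to(device)

discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                      # raw real-vs-fake score
).to(device)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

data = DataLoader(
    datasets.MNIST("data", train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.5,), (0.5,)),  # match Tanh range
                   ])),
    batch_size=128, shuffle=True,
)

for epoch in range(3):                      # a real run would train far longer
    for real, _ in data:
        real = real.view(real.size(0), -1).to(device)
        ones = torch.ones(real.size(0), 1, device=device)
        zeros = torch.zeros(real.size(0), 1, device=device)

        # 1) Train the discriminator: real images -> 1, generated images -> 0.
        fake = generator(torch.randn(real.size(0), LATENT_DIM, device=device)).detach()
        d_loss = loss_fn(discriminator(real), ones) + loss_fn(discriminator(fake), zeros)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # 2) Train the generator: try to make the discriminator output "real" for fakes.
        g_loss = loss_fn(
            discriminator(generator(torch.randn(real.size(0), LATENT_DIM, device=device))),
            ones,
        )
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
    print(f"epoch {epoch}: d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")

Production systems like the ones Nina mentions use much larger convolutional networks trained on face or voice data for days or weeks; this toy version only illustrates the adversarial training loop in which the discriminator learns to spot fakes while the generator learns to fool it.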

Sam
Hmm. Well, so, umm, as you point out, all of this would be good, clean fun if it weren’t for the fact that we know there are people intent upon spreading misinformation and disinformation, and doing it with a truly sinister political purpose. I mean, not just for amusement, although that can be harmful enough. It’s something that state actors, and people internal to various states, are going to leverage to further divide society from itself and increase political polarization. But it’s amazing that this technology is so promising in the fun department that we can’t possibly even contemplate putting the cat back in the bag. That’s the problem we’re seeing on all fronts. So it is with social media. So it is with the ad revenue model that is selecting for so many of its harmful effects. I mean, we just can’t break the spell wherein people want the cheapest, most fun media, and they want it endlessly.

And yet the harms that are accruing are so large that it’s amazing to see there’s just no handhold here whereby we can resist our slide toward the precipice. Just to underscore how quickly this technology is developing: in your book, you point out what happened once Martin Scorsese released his film The Irishman, which involved an exceedingly expensive and laborious process of trying to de-age its principal actors, Robert De Niro and Joe Pesci. And that was met with something like derision for the imperfection of what was achieved there, again, at great cost. And then very, very quickly, someone on YouTube, using free software, did a nearly perfect de-aging of the same film. [You can see it here: https://www.youtube.com/watch?v=dHSTWepkp_M ] It’s just amazing what’s happening here. And, again, these tools are going to be free, right? I mean, they’re already free, and ultimately, the best tools will be free.

Nina
Absolutely. So you already have various kinds of software platforms online, and the barriers to entry have come down tremendously. Right now, if you wanted to make a convincing deepfake video, you would still need some knowledge of machine learning, but you wouldn’t have to be an AI expert by any means. But already we have apps that allow people to do certain things like swap their faces into scenes, for example Reface, I don’t know if you’ve come across that app. I don’t know how old your children are, but if you have a teenager you’ve probably come across it. You can basically put your own face into a popular scene from a film like Titanic or something. This is using the power of synthetic media. But experts who I speak to on the generation side (because it’s so hugely exciting to people who are generating synthetic media) think that by the end of the decade, any YouTuber, any teenager, will have the ability to create special effects in film that are better than anything a Hollywood studio can do now.

And that’s really why I put that anecdote about The Irishman into the book, because it just demonstrates the power of synthetic media. I mean, Scorsese was working on this project from 2015. He filmed with a special three-rig camera, he had the best special effects artists, post-production work, a multi-million-dollar budget, and still the effect at the end wasn’t that convincing. It didn’t look quite right. And now one YouTuber, with free software, takes a clip from Scorsese’s film, which came out in 2019, and this year, in 2020, he can already create something that, when you look at it, looks far more realistic than what Scorsese did.

And this is just in the realm of video. As I already mentioned, with images it can already do it perfectly. There is also the case of audio. A lot of the early pieces of synthetic media have sprung up on YouTube, for example. There’s a YouTuber called Vocal Synthesis, who uses an open-source AI model trained on celebrities’ voices…

END TRANSCRIPT PREVIEW – You can find the full transcript here: https://ift.tt/3jxNmFq

Material used in this episode

The Making Sense Podcast with Sam Harris
Episode #220 The Information Apocalypse: A Conversation with Nina Schick
https://ift.tt/2HaTfuG

Deepfakes: Is This Video Even Real? – Claire Wardle, The New York Times
https://www.youtube.com/watch?v=1OqFY_2JE1c

Vocal Synthesis YouTube Channel – https://www.youtube.com/channel/UCRt-fquxnij9wDnFJnpPS2Q

6 presidents read the Twilight Zone intro – https://www.youtube.com/watch?v=B2HlDk-u1hQ

Donald Trump reads the Darth Plagueis copypasta – https://www.youtube.com/watch?v=LEzIAixNkFI

Supplementary Material

Nina Schick’s Book: Deepfakes: The Coming Infocalypse

Deepfakes: A threat to democracy or just a bit of fun?
https://www.bbc.com/news/business-51204954

The Irishman – De-aging of Robert De Niro
https://www.youtube.com/watch?v=dHSTWepkp_M

Here’s why deepfakes are the perfect weapon for the ‘infocalypse’ – by Nina Schick
https://lifestyle.livemint.com/smart-living/innovation/here-s-why-deepfakes-are-the-perfect-weapon-for-the-infocalypse-111602247119544.html

Deepfakes: How to prepare your organization for a new type of threat
https://www.accenture.com/nl-en/blogs/insights/deepfakes-how-prepare-your-organization

A deepfake porn bot is being used to abuse thousands of women
https://www.wired.co.uk/article/telegram-deepfakes-deepnude-ai

Deepfake video of Vladimir Putin
https://www.youtube.com/watch?v=sbFHhpYU15w

Deepfake video of Kim Jong-Un
https://www.youtube.com/watch?v=ERQlaJ_czHU

Access Hollywood tape with Donald Trump and Billy Bush (2016) – vulgar, profanity, not safe for work
https://www.youtube.com/watch?v=FSC8Q-kR44o

Actress in Trump’s ‘Access Hollywood’ Tape reacts to Trump’s claim that he’s not sure he “actually said that”.
https://www.youtube.com/watch?v=uRIPFJcPbq4

