Statement
"Of course they're fake videos, everyone can see they're not real. All the same, they really did say
those things, didn't they?" These are the words of Vivienne Rook, the fictional politician played by
Emma Thompson in the brilliant dystopian BBC TV drama Years and Years. The episode in question,
set in 2027, tackles the subject of "deepfakes": videos in which a living person's face and voice are
digitally manipulated to say anything the programmer wants.
Rook perfectly sums up the problem with these videos: even if you know they are fake, they leave a
lingering impression. And her words are all the more compelling because deepfakes are real and among
us already. Last year, several deepfake porn videos emerged online, appearing to show celebrities such
as Emma Watson, Gal Gadot and Taylor Swift in explicit situations.
[...]
In some cases, the deepfakes are almost indistinguishable from the real thing, which is particularly
worrying for politicians and other people in the public eye. Videos that may initially have been created
for laughs could easily be misinterpreted by viewers. Earlier this year, for example, a digitally altered
video appeared to show Nancy Pelosi, the speaker of the US House of Representatives, slurring
drunkenly through a speech. The video was widely shared on Facebook and YouTube, before being
tweeted by President Donald Trump with the caption: "PELOSI STAMMERS THROUGH NEWS
CONFERENCE". The video was debunked, but not before it had been viewed millions of times.
Trump has still not deleted the tweet, which has been retweeted over 30,000 times.
The current approach of social media companies is to filter out and reduce the distribution of deepfake
videos, rather than outright removing them unless they are pornographic. This can result in victims
suffering severe reputational damage, not to mention ongoing humiliation and ridicule from viewers.
"Deepfakes are one of the most alarming trends I have witnessed as a Congresswoman to date," said
US Congresswoman Yvette Clarke in a recent article for Quartz. "If the American public can be made
to believe and trust altered videos of presidential candidates, our democracy is in grave danger. We
need to work together to stop deepfakes from becoming the defining feature of the 2020 elections."
Of course, it's not just democracy that is at risk, but also the economy, the legal system and even
individuals themselves. Clarke warns that, if deepfake technology continues to evolve unchecked,
video evidence could lose its credibility during trials. It is not hard to imagine it being used by
disgruntled ex-lovers, employees and random people on the internet to exact revenge and ruin people's
reputations. The software for creating these videos is already widely available.
Source: Curtis, Sophie. https://www.mirror.co.uk/tech/deepfake-videos-creepy-new-internet-18289900.
Adapted. Accessed August 2019.
According to Congresswoman Yvette Clarke, given the various risks posed by deepfake
videos, it is necessary to
Alternatives
- A) ban them entirely from the Internet.
- B) prohibit and criminalize their sharing.
- C) keep them out of future elections.
- D) filter them and reduce their presence on digital networks.
- E) monitor and control this technology.

Questions 39 and 40 refer to the highlighted text:

About seven years ago, three researchers at the University of Toronto built a system that could analyze thousands of photos and teach itself to recognize everyday objects, like dogs, cars and flowers. The system was so effective that Google bought the tiny start-up these researchers were only just getting off the ground. And soon, their system sparked a technological revolution. Suddenly, machines could "see" in a way that was not possible in the past. This made it easier for a smartphone app to search your personal photos and find the images you were looking for. It accelerated the progress of driverless cars and other robotics. And it improved the accuracy of facial recognition services, for social networks like Facebook and for the country's law enforcement agencies. But soon, researchers noticed that these facial recognition services were less accurate when used with women and people of color. Activists raised concerns over how companies were collecting the huge amounts of data needed to train these kinds of systems. Others worried these systems would eventually lead to mass surveillance or autonomous weapons.

Source: Metz, Cade. Seeking Ground Rules for A.I. www.nytimes.com, 01/03/2019. Adapted. Accessed August 2019.