
Hard Code | Who has the most to lose from deepfakes?


Late last year, one of India’s biggest social media influencers, Prime Minister Narendra Modi, took up the matter of deepfakes: artificial audio, video or images generated by rapidly improving artificial intelligence technologies, often hard to discern from reality. In November, a video emerged of someone who appeared to be the PM dancing the garba; it was later revealed to be footage of a Modi lookalike. The same month, a deepfake video of actor Rashmika Mandanna surfaced, with her face superimposed on footage of a British influencer walking into an elevator, and that is when the matter blew up, prompting the PM to speak about the technology. Earlier this month, one of the most well-known Indians, cricket icon Sachin Tendulkar, announced he had been the target of a deepfake advertisement that showed him endorsing a gaming website he had never backed.

Prime Minister Narendra Modi has expressed concern over the misuse of technology and artificial intelligence (AI) to create deepfakes. (File)

The government took notice, not least because the PM called for action. It now plans to specifically define deepfakes as outlawed content and to place new obligations on social media companies to keep such content off their products and services. The scope of this measure is still being worked out.

The three examples above contain important distinctions that capture how the threat from deepfakes, and the ability to mitigate their harms, differ from person to person.

Fame brings vulnerability, but also the ability to refute

For an individual, the greatest harm deepfakes pose is reputational. Modi and Tendulkar demonstrated that they have voices prominent enough for their rebuttals to reach perhaps a wider audience than the deepfake (or, in Modi’s case, a lookalike’s video) targeting them.

Easily the greatest reputational risk deepfakes present is artificial sexual imagery targeting a real person. Even here, a person who commands public attention, say a Bollywood actor or a widely known singer, will be able to credibly dismiss a deepfake video of themselves as fake (to be sure, being targeted by such an attack is traumatic by its very nature; refuting its authenticity is often of little help).

Conversely, an individual with limited public reach will be less able to stem the spread of deepfakes that harm them. In other words, while both a Bollywood actor and a student may be equally traumatised by a deepfake porn video of themselves, the former will be far more able to refute its authenticity.

Thus, marginalised and discriminated-against groups — women, people of colour, LGBTQ communities, activists — face a more formidable challenge if deepfakes are weaponised against them, especially by those with greater narrative-setting influence.

Familiarity breeds defence

Unfamiliarity with an emergent technology creates new openings for bad actors such as scammers. A side effect of India’s digital payments revolution has been the emergence of mobile payments as a vector for scams. Victims are usually those unfamiliar with safe internet and digital banking practices.

In 2016 and 2017, the sudden penetration of smartphones and cheap mobile internet coincided with episodes in many Indian villages of lynch mobs targeting outsiders. Research has since shown that many of those involved had been exposed to misinformation on social media.

Deepfake technology adds a new dimension to this, fundamentally upending the notion of seeing (or hearing) as believing.

Uncoupling trust in information from the seeing-is-believing instinct comes easier to newer generations of teens and young adults than to their parents, who could never have imagined the hyper-realistic images and videos created today by tools such as DALL-E and Midjourney; it is the latter who will remain at the highest risk from deepfakes.

In 2019, in one high-profile case, cybercriminals used “deepfake phishing” to deceive the CEO of a UK energy company into transferring $243,000 into their account. Such attacks could become ubiquitous — imagine your parent receiving a phone call in your distressed voice, urgently seeking a large sum of money.

AI learns from what is out there

The chances of producing a convincing deepfake rise with the amount of images, audio and video available to train an AI model. This poses a particular risk to people who put more of their lives on the internet, whether for work or for social interaction.

For instance, a podcaster’s voice can become the blueprint from which deepfake audio is produced, or a fashion model’s photo shoots can be fed to a programme specialised in creating deepfake sexual imagery.

Those in mass media — films, television, news — are naturally among the most vulnerable, but so are people who frequently feature themselves on social media, especially those with a large following or a publicly accessible profile.

A perspective on harm

Anyone can suffer reputational damage, blackmail, intimidation, identity theft, bullying or revenge porn due to deepfakes. But some will be affected more than others. It is the most vulnerable who need to be kept at the core of legislative, technological and administrative responses to deepfakes.

This perspective could guide, for instance, burden-of-proof obligations, as is the case in sexual violence law, and due diligence requirements, which large technology platforms will be better positioned to fulfil.

In Hard Code, Binayak will look at some of the emerging challenges from technology and what society, laws and technology itself can do about them.
