India is drawing up rules to govern deepfakes, a top minister said on Thursday, a day after Prime Minister Narendra Modi raised concerns over the technology. “We plan to complete drafting the regulations within the next few weeks,” information technology minister Ashwini Vaishnaw told reporters after a meeting with academics, industry groups and social media companies. The call for regulation comes days after a deepfake video of Indian actress Rashmika Mandanna went viral on social media, underscoring how easily videos can be fabricated to mislead viewers. Several top actors from across Indian film industries, along with political leaders and activists, urged the government to put safeguards in place against the misuse of deepfake technology.
Deepfakes are realistic but fabricated videos created by artificial intelligence (AI) algorithms trained on online footage. This synthetic media is digitally manipulated to replace one person’s likeness convincingly with another’s, using a form of AI called deep learning to depict events that never happened.
Deepfake technology can seamlessly stitch anyone in the world into a video or photo they never actually appeared in. To make a deepfake video of someone, a creator first trains a neural network on many hours of real footage of the person, giving it a realistic “understanding” of what the subject looks like from many angles and under different lighting. The trained network is then combined with computer-graphics techniques to superimpose the person’s likeness onto a different actor.
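The training-and-swap process described above is often built around a shared encoder with one decoder per identity: encode a frame of person A, then decode it with person B's decoder. The toy sketch below illustrates only that mechanic with tiny linear layers and random placeholder data; all names, sizes, and the training data are hypothetical, and this is nowhere near a real deepfake pipeline.

```python
# Toy illustration of the shared-encoder / two-decoder face-swap idea.
# All data here is random placeholder "footage", not real images.
import numpy as np

rng = np.random.default_rng(0)

DIM = 64      # flattened "face image" size (placeholder)
LATENT = 8    # shared latent representation

# One shared encoder, plus an identity-specific decoder per person.
enc = rng.normal(0, 0.1, (DIM, LATENT))
dec_a = rng.normal(0, 0.1, (LATENT, DIM))
dec_b = rng.normal(0, 0.1, (LATENT, DIM))

faces_a = rng.normal(0, 1, (200, DIM))  # stand-in footage of person A
faces_b = rng.normal(0, 1, (200, DIM))  # stand-in footage of person B

def train_step(x, dec, lr=1e-3):
    """One gradient step of mean-squared reconstruction error
    for a linear autoencoder (updates enc and dec in place)."""
    global enc
    z = x @ enc          # encode into the shared latent space
    out = z @ dec        # decode back to "image" space
    err = out - x        # reconstruction error
    g_dec = z.T @ err / len(x)
    g_enc = x.T @ (err @ dec.T) / len(x)
    dec -= lr * g_dec
    enc -= lr * g_enc

# Alternate training so both identities shape the same encoder.
for _ in range(500):
    train_step(faces_a, dec_a)
    train_step(faces_b, dec_b)

# The "swap": encode a frame of person A, decode with B's decoder.
frame_a = faces_a[:1]
swapped = (frame_a @ enc) @ dec_b
print(swapped.shape)  # (1, 64)
```

Real systems use deep convolutional networks and carefully aligned face crops rather than linear layers, but the swap step is the same: a shared representation rendered through the other person's decoder.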
In his opening remarks at a virtual summit of G20 nations on Wednesday, Modi called on global leaders to jointly work towards regulating AI and raised concerns over the negative impacts of deepfakes on society. The process of drafting regulations would also look at penalties on both the person uploading the content and the social media platform on which it was posted, Vaishnaw added. The move comes as countries across the world race to draw up rules to regulate AI.
President Joe Biden last month signed an executive order requiring developers of AI systems that pose risks to US national security, the economy or public health or safety to share the results of safety tests with the US government before they are released to the public. The United Nations too has created a 39-member advisory body to address issues in the governance of AI, while European lawmakers have prepared a draft set of rules which could be approved by next month.
— Written with inputs from Reuters
Author | Shubham Verma