In the fast-evolving world of generative models there have been exciting breakthroughs, especially in Transformer- and Diffusion-based models, which have shown exceptional performance in image generation. When it comes to video generation, however, they run into a roadblock best known from Large Language Models (LLMs): the hallucination problem.
As of 2023, the vanilla autoencoder remains the preferred architecture for video generation tasks built on the img2img framework. Its self-reconstruction vectors are inherently harder to interpolate than the highly interpolable latents of a Variational Autoencoder, but once that hurdle is overcome they exhibit a high degree of stability in the temporal domain. It's a reminder that sometimes the simplest solution is the most effective. Thanks to Geoffrey Hinton et al.
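The contrast above can be sketched in a few lines. This is a toy illustration, not PAGI Gen's actual code: a vanilla autoencoder maps an input to a single deterministic latent, while a VAE predicts a distribution and samples from it. Determinism is what keeps AE latents stable frame-to-frame; sampling is what makes VAE latents easy to interpolate.

```python
import numpy as np

rng = np.random.default_rng(0)

def ae_encode(x, W):
    # Deterministic: the same frame always yields the same latent,
    # which helps temporal stability in video.
    return np.tanh(W @ x)

def vae_encode(x, W_mu, W_logvar, rng):
    # Stochastic: mu + sigma * eps, so repeated encodes of the same
    # frame jitter, but the latent space interpolates smoothly.
    mu = W_mu @ x
    sigma = np.exp(0.5 * (W_logvar @ x))
    return mu + sigma * rng.standard_normal(mu.shape)

x = rng.standard_normal(16)
W = rng.standard_normal((4, 16))
W_mu = rng.standard_normal((4, 16))
W_logvar = rng.standard_normal((4, 16)) * 0.1

z1, z2 = ae_encode(x, W), ae_encode(x, W)
s1, s2 = vae_encode(x, W_mu, W_logvar, rng), vae_encode(x, W_mu, W_logvar, rng)
assert np.allclose(z1, z2)       # AE: temporally stable
assert not np.allclose(s1, s2)   # VAE: jitters between encodes
```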
Although autoencoders alone are not typically considered full-fledged generative models like GANs or VAEs, they serve as the generative component in deepfakes.
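The classic deepfake layout, as used by the Reddit-era faceswap tools, can be sketched as a shared encoder plus one decoder per identity; swapping means encoding a frame of person A and decoding it with person B's decoder. The tiny linear layers below are a hedged illustration of that structure only, not any real tool's network.

```python
import numpy as np

rng = np.random.default_rng(1)

class Linear:
    """A toy one-layer 'network' standing in for a real conv stack."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_out, n_in)) * 0.1
    def __call__(self, x):
        return np.tanh(self.W @ x)

encoder   = Linear(64, 16, rng)   # shared between both identities
decoder_a = Linear(16, 64, rng)   # trained only on person A's faces
decoder_b = Linear(16, 64, rng)   # trained only on person B's faces

frame_of_a = rng.standard_normal(64)
latent = encoder(frame_of_a)      # pose/expression captured here
swapped = decoder_b(latent)       # A's expression rendered as B
assert swapped.shape == frame_of_a.shape
```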
The term “deepfake” gained attention in 2017 when an open-source faceswap model appeared on Reddit. Since then, many new tools have been developed based on this architecture.
Today one of the most widely used tools, DeepFaceLab, was created by Ivan Petrov. Despite some controversy surrounding him, he has built tools the film industry can actually use, and with models that surpass traditional deepfake models, particularly in shadow and light transfer, he has come a step closer to the quality VFX companies have been waiting for.
However, identity leakage, consistent shadow and light transfer, and the huge memory requirements of high resolutions remain known challenges.
Today we are announcing PAGI Gen, which overcomes these challenges. We based our architecture on an ID conditional autoencoder, which allowed us to keep all layers shared, including the intermediate layers. This new architecture brings the following benefits.
1) Outstanding Light and Expression Transfer
In contrast to other face-swap models, all weights in this model, including the intermediate layers, are shared between the target and the source. This enables the model to capture significantly more precise shadow and light details.
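A hedged sketch of what an "ID conditional" design with fully shared weights could look like (PAGI Gen's real architecture is not published): instead of per-identity decoders, a single shared decoder receives the content latent concatenated with a learned identity embedding, so shadow and light information flows through the same weights for every identity.

```python
import numpy as np

rng = np.random.default_rng(2)

# One decoder for all identities; identity enters only as an embedding.
W_dec = rng.standard_normal((64, 16 + 8)) * 0.1
id_embeddings = {"source": rng.standard_normal(8),
                 "target": rng.standard_normal(8)}

def decode(latent, identity):
    z = np.concatenate([latent, id_embeddings[identity]])
    return np.tanh(W_dec @ z)

latent = rng.standard_normal(16)       # content: pose, expression, lighting
out_src = decode(latent, "source")
out_tgt = decode(latent, "target")     # same weights, different identity
assert out_src.shape == out_tgt.shape == (64,)
```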
2) High Resolution
Reducing the model's components and parameters enables training at higher resolutions and batch sizes without compromising quality. On an Nvidia RTX 3090 graphics card, the model can process a batch of two 1024×1024 frames in just 900 milliseconds.
This makes it possible for indie VFX artists to produce Hollywood-quality content on the GPUs at their disposal. In principle the model has no resolution limit as long as you have sufficient graphics memory; a minimum of 24 GB of VRAM is required to train it at Full HD resolution.
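A quick back-of-envelope check of how the Full HD VRAM figure relates to the 1024×1024 benchmark: Full HD has roughly twice the pixels, so activation memory scales up by about the same factor (a rough approximation that ignores layer-dependent overheads).

```python
# Pixel-count scaling between the two resolutions mentioned above.
px_1024 = 1024 * 1024      # 1,048,576 pixels per frame
px_fhd  = 1920 * 1080      # 2,073,600 pixels per frame
ratio = px_fhd / px_1024
assert 1.9 < ratio < 2.0   # Full HD is ~2x the pixels of 1024x1024
print(f"Full HD / 1024^2 pixel ratio: {ratio:.2f}x")
```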
3) Overcoming ID Leaks
Identity leakage is a common concern in face-swapping tools today.
Because our model operates as an ID conditional network, the ID information multiplier can be used as a hyperparameter during the prediction phase. It can be adjusted within limits that do not disrupt expressions and lighting.
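One way the multiplier idea could work (the actual PAGI Gen hyperparameter and its valid range are assumptions here): at prediction time the identity embedding is scaled before decoding, trading residual source identity against expression and lighting fidelity.

```python
import numpy as np

rng = np.random.default_rng(3)
target_id = rng.standard_normal(8)   # hypothetical learned ID embedding

def conditioned_latent(content, id_vec, id_multiplier=1.0):
    # id_multiplier > 1 pushes the output toward the target identity;
    # too large a value starts to distort expressions and lighting.
    return np.concatenate([content, id_multiplier * id_vec])

content = rng.standard_normal(16)
z = conditioned_latent(content, target_id, id_multiplier=1.3)
assert z.shape == (24,)
```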
This new architecture ensures that the final output truly resembles your intended target, not an impersonator.
4) ID Interpolability and Few-Shot Training
While this usage is still in the research stage, the images below were created using only 30 frames and with no post-processing.
5) Quick Addition of New Targets
Since there are no separate intermediate or decoder components, the model can be fine-tuned very quickly for a new target. No components or weights need to be reset, so learned expressions, lighting, and shadows are preserved.
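This is a sketch of why adding a target is cheap in a fully shared design (assuming an identity-embedding scheme like the one hypothesized earlier): only a new identity vector is introduced and optimized, while the shared encoder/decoder weights, and the expressions, lighting, and shadows they encode, are left untouched.

```python
import numpy as np

# Hypothetical identity-embedding table; shared network weights live elsewhere
# and are never reset when a target is added.
id_embeddings = {"alice": np.zeros(8), "bob": np.ones(8)}

def add_target(name, rng):
    # New ID: one fresh 8-dim vector to fine-tune; nothing else changes.
    id_embeddings[name] = rng.standard_normal(8) * 0.01

add_target("carol", np.random.default_rng(4))
assert set(id_embeddings) == {"alice", "bob", "carol"}
```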
6) Auto Blend
Auto Blend removes the need for a manual post-process to blend faces. This feature is still in the research stage but can be experimented with through the advanced options. The following video is just one of our experiments with it.
7) Screen Swap
Screen Swap removes the need for a manual post-processing and blending phase. This feature is also in the research stage but can be experimented with through the advanced options. The following video is just one of our experiments with it.
8) Greater-than-8-bit Color Depth Support
Thanks to the model's compact structure we consider it well suited to modification, and we regard this capability as important for producing 4K content. Until we make those improvements, however, you can convert your 4K datasets to 8-bit color depth. In our tests, the ffmpeg branch that performs this conversion with the least visual loss is jellyfin, and we include this branch in PAGI Gen's dataset extraction process.
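For reference, a 10-bit to 8-bit conversion of the kind described above can be requested from (jellyfin-)ffmpeg by forcing an 8-bit pixel format. The exact filters and flags PAGI Gen's extractor uses are not documented here, so treat the command below as a hedged sketch, not the tool's actual pipeline; the `crf` value is an illustrative choice.

```python
import subprocess

def to_8bit_cmd(src, dst):
    # -pix_fmt yuv420p forces 8-bit 4:2:0 output; libx264 + a low CRF
    # keeps the visual loss of the re-encode small.
    return ["ffmpeg", "-i", src,
            "-pix_fmt", "yuv420p",
            "-c:v", "libx264", "-crf", "16",
            dst]

cmd = to_8bit_cmd("clip_4k_10bit.mov", "clip_4k_8bit.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually convert
```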
In addition to the model features, PAGI Gen includes modules such as real-time and voice-swap within an end-to-end generation framework. We will delve into these tools, including Dataset Builder, Curation, and Blender, in upcoming blog posts.
Responsibility
Until now, cloud-based generative applications have been given the green light while offline faceswap tools have been condemned, and there is a clear reluctance to release such tools to the public. In the age of GenAI's rapid evolution, and drawing on our extensive experience in cybersecurity, we view this approach as akin to allowing Microsoft Visual Studio to compile exclusively in the cloud. We consider that stance outdated, which is why we have decided to make our products accessible to all while taking responsibility.
We can see both the advantages and disadvantages of offering our product for public use. However, we do not believe that withholding such tools from the public, or applying filtering measures only on the cloud side of GenAI tools, can prevent the deepfake threat.
We are aware of our responsibilities in this regard and have taken several measures to prevent the misuse of our software to produce deepfakes, and we plan to improve them further.
1) Adult Content Detection
As everyone will agree, the deepfake threat has targeted women the most. The privacy of women, who have already been turned into advertising commodities by capitalism, has been violated even further with synthetic images. PAGI Gen scans datasets for adult content and does not allow training on it.
2) Invisible Watermark
PAGI Gen adds invisible watermarks to the outputs, ensuring they are recognized as machine-generated by third parties.
Contact [email protected] for the detection module.
We hope Google will soon release its AI-based watermarking technology, SynthID, to the public, enabling us to incorporate more robust watermarks into our products. We also plan to integrate Adobe's content authentication into our products as a whitelisting strategy.
3) Deepware Deepfake Scanner
At Deepware, we have released a basic deepfake scanner.
You can scan suspicious videos for free at: scanner.deepware.ai
Try PAGI Gen Beta
If you’re interested in trying out PAGI Gen, please submit a beta request.
Deepfake Could Lead to Unsolved Global Disinformation Wars
So far in human history, it has been politicians and opinion leaders, guided by organized media outlets, who have misled the international community. Now, however, AI-produced synthetic media, and its most destructive form, deepfake, is a far more useful weapon for the online political operations of the digital age. Moreover, deepfake cyber-attacks can easily be carried out anonymously, leaving the victims of dirty disinformation wars buried unresolved deep in history.
Maybe a Saddam deepfake would be enough
About 40 countries supported the US invasion of Iraq on March 20, 2003, on the claim of a "nuclear weapons threat." The Iraq operation, which no one today denies was an irreversible global disinformation operation, is now cursed by a significant part of the very people who supported it at the time. To make the operation acceptable, an operation that cost the country its independence and brought collapse, terrorist brutality, the death of millions of people, and irreparable wounds to the conscience of humanity, many international media organizations, politicians, and opinion leaders manufactured grounds for a nuclear danger. Had deepfake technology existed 15 years ago, no one would have had to take that responsibility: a deepfake video showing Saddam Hussein at a nuclear-weapons facility would perhaps have been more than enough for his glorious rule in Iraq to end in the pit where he was found hiding.
Political disinformation demolishes from within and blindfolds the world
Political operations in chaotic geographies are supported by provocations designed to strip target countries and nations of the right to determine their own integrity and common future, and the manipulation of political actors and events deepens the chaos. In online disinformation wars, whether the deep focus is imperialism or colonialism does not change the outcome: those who wield synthetic media weapons are repressive administrations, hostile intelligence agencies, or the terrorist organizations they largely supply. Thanks to the level of credibility it has achieved, deepfake technology can convince both the victim's supporters in the chosen geography and, if only temporarily, the international community, deepening uncertainty, igniting a climate of conflict, and legitimizing a fictional reality.
The deepfake weapon was also used in Myanmar's military coup
Because its political turmoil never ends, Myanmar, the Southeast Asian country so often in the world's headlines, was the scene of a new military coup about four months ago. The coup, led by commander-in-chief Min Aung Hlaing, also made history with a deepfake attempt aimed at democracy. Fake confession footage of Yangon State Premier Phyo Min Thein, who had been jailed by the coup soldiers, was produced and served to the press. In a deepfake video broadcast on the military-controlled MRTV channel on March 24, Thein appeared to say he had handed bribes to Aung San Suu Kyi, leader of the National League for Democracy (NLD), which held power before democracy ended in the country. But the plotters of the February 1 coup failed to beat AI-based deepfake detection: the Turkish AI initiative Deepware.ai disclosed that the video was a deepfake that did not reflect reality. Its scanning and detection engine, Deepware Scanner, scanned the video more than 500 times and concluded it was 75% deepfake. Users around the world who learned the video was not real were thus saved from falling into the coup plotters' disinformation trap.
A closed-circuit, targeted deepfake attack on Russian opposition figures and EU lawmakers
To destroy reality with the deepfake weapon, it is not always necessary to produce mass social-engineering scenarios; deepfake cyber-attacks can also be carried out closed-circuit and online, aimed at specific political targets. A series of deepfake attacks intended to sabotage solidarity and cooperation between the pressured opposition in Russia and the European Union (EU) was carried out over video conference last April. The Guardian reported that European Union lawmakers had been deceived by a deepfake of Leonid Volkov, a member of Russian opposition leader Alexei Navalny's team, into believing they were conducting genuine negotiations.

Among those duped by the Volkov deepfake filter in video conference calls were Rihards Kols, chairman of the Latvian parliament's Foreign Affairs Committee, and Tom Tugendhat, chairman of the British Foreign Affairs Committee, as well as MPs from Estonia and Lithuania. "Putin's Kremlin is so scared of @Navalny that they are holding fake meetings to discredit the Navalny team. They reached out to me today. They're not going to publish the parts where I call Putin a murderer and a thief, so I'm writing here," the British parliamentarian Tugendhat said in a written statement.
Serious threat to international security and stability
Latvian parliamentarian Kols posted on Twitter a photo of Navalny ally Leonid Volkov alongside a screenshot from his video call with the impostor. Kols said he had been contacted by email by a person claiming to be Volkov, and that in a brief call with the Volkov deepfake they discussed support for Russian political prisoners and Russia's annexation of Crimea. "It was a pretty bitter lesson," Kols said. "But thanks to this false Volkov, it has become clear that the decay of truth, the post-truth era, has the potential to seriously threaten the security and stability of local and international governments and societies." Deepfake victim Volkov said: "It looks like my real face, but how did they manage to put it into the Zoom call? Welcome to the age of deepfake."
FBI warns of Russian and Chinese deepfakes
The United States, for its part, sees Russia and China, its rivals for global influence, as its biggest enemies in disinformation wars waged with synthetic media content. Last March the US Federal Bureau of Investigation (FBI) warned the American public that the circulation of Russian- and Chinese-made deepfakes would increase over the next year and a half. The FBI pointed in particular to the synthetic-media disinformation attempts uncovered by private-sector research companies during last year's presidential election, the tensest in U.S. history.
"Spamouflage Dragon Goes to America," a report published last August by Graphika, a New York-based social media research and analysis company, described a network of accounts created across multiple social media platforms to heap criticism on the Trump administration. The report highlighted the use of fake profile images with AI-generated synthetic media to lend authenticity to the campaign, which originated in China. The network, which Graphika called the Dragon operation, had produced deepfake videos almost daily, on topics ranging from the Trump administration's decision to ban TikTok in the US to its policies on the coronavirus pandemic. Graphika could not determine whether the network was connected to the Chinese government, though U.S. intelligence analysis of the period suggested Beijing was working to lower Trump's chances of re-election. According to the report, the network was active and open to the public: politically tailored videos were released through Chinese accounts on YouTube, Facebook, and Twitter, then shared and commented on by sets of fake accounts to create the impression of an organic community. It also remained possible that the report itself described a staged Trump-victimization scenario.
Media literacy versus deepfake, according to the FBI
Before the 2020 U.S. election, researchers determined that the Internet Research Agency, a Russian troll farm, had used deepfake images created by generative adversarial networks (GANs) in fake profile accounts allegedly deployed to manipulate voters in favor of Trump. The Atlantic Council's Digital Forensic Research Lab, in collaboration with Graphika and Facebook, uncovered AI-generated images used in a pro-Trump campaign.

The FBI has also warned that it will investigate deepfakes attributed to foreign malicious actors. During the US election process, the attack that went down in history as the bloody Congressional raid was said to have been fueled in part by misinformation spread online by far-right influencers and media organizations. While some of the main providers of the disinformation that triggered the deadly uprising were domestic actors, foreign actors from Russia, Iran, and China also seized on the division in the United States to advance their own interests. The FBI's warning also pointed to media literacy as the key to improving people's ability to detect deepfakes. Speaking at an event in Washington, D.C., a few days later, FBI Director Chris Wray argued that civic education is a national security issue.
Is this how the digital future will shape the artificial universe?
In the artificial universe of the digital future, it is clear that the danger of nuclear weapons is being replaced by dirty disinformation wars. Synthetic media produced with AI technology, and its most convincing weapon, deepfake, can plant historical misconceptions in the international community, dragging global stability into unexpected upheavals and irreversibly destroying peace and security. While millions of people are surprised and amused by fake Trump or Putin videos, the deepfake terror that destroys objective reality through political disinformation may be trying to redraw borders in some geography, or to darken the future of a nation.
10 Most Convincing Deepfake Videos of May
We have tried to compile the most convincing deepfake videos along with their stories, because when malicious synthetic media, now approaching a perfection that makes it hard to distinguish from the truth, gets out of control, these are the examples that best show how thoroughly it can blind our minds. Here are some worryingly perfect deepfake videos; some are likely on your online radar already, and perhaps only a fraction have been overlooked:
Tom Cruise TikTok deepfake videos
Last February a TikTok user going by @deeptomcruise posted astonishing Tom Cruise videos. In the deepfake videos Cruise looked both younger and taller. Three videos were released in this format: one with Tom Cruise in a menswear store in Italy, another on a golf course, and a third performing a coin trick. In the coin-trick video Cruise actually talked about deepfakes while laughing maniacally. The deepfake videos made headlines in the world's media after being posted on TikTok, and the account's followers and likes multiplied: followers exceeded 1 million and likes exceeded 3 million.
After these videos reignited the deepfake controversy, the Belgian effects expert Chris Ume identified himself to the media as their producer. He explained that he had used a Tom Cruise lookalike as the source performer along with various digital effects, and that even so the videos were not perfect and contained errors. Ume argued that, for the reasons he described, others could not easily prepare such deepfakes, so the videos were not a harbinger of a disinformation threat. TikTok nevertheless removed the Tom Cruise deepfake videos.
Buzzfeed and Jordan Peele's co-production: the Barack Obama deepfake video
Less than a year after deepfake first appeared on Reddit.com, the US news site Buzzfeed caused huge repercussions by publishing a deepfake video of former US President Barack Obama. In the video, released in April 2018, Obama was driven behind the scenes by Jordan Peele, the actor, screenwriter, director, and comedian, one of the most versatile figures in the U.S. film industry. "We are entering an age where our enemies can make it look as if anyone is saying anything at any time, even if they would never say it," Peele said in an audio imitation superimposed on the former president's face.
The Obama deepfake was prepared with FakeApp, the AI and deep-learning-based software originally used to transfer celebrities' faces onto adult video content. It was not hard for Buzzfeed's video producer, Jared Sosa, to access Obama footage and train the model, but getting the model right required 56 hours of machine training.
The Harrison Ford deepfake video: a reaction to "Solo: A Star Wars Story"
Han Solo, one of the memorable characters of the Hollywood sci-fi legend Star Wars, whose first film was released in 1977, was played by Harrison Ford, 78 years old today. "Solo: A Star Wars Story," a 2018 film about Han Solo's younger years, cast Alden Ehrenreich in the role Ford played in the original series. The film's $393 million box office, however, was rated a disappointment; many felt that without Harrison Ford, Han Solo was simply not the same character.
"I wish Ford were still young enough for this character," fans commented, and in August 2020 the YouTube account Shamook published a deepfake of Harrison Ford in the Solo role. Although the Harrison Ford deepfake, dressed as the film's actor Ehrenreich, does not reproduce the famous actor's voice or acting in this role, seeing Ford as a young Solo in a two-minute video was a great feeling for his fans.
Mark Zuckerberg deepfake videos targeting Facebook
Facebook founder and CEO Mark Zuckerberg has inspired deepfake videos more than once, especially in response to Facebook's controversial policies on synthetic media. In the summer of 2019 the artists Bill Posters and Daniel Howe used a deepfake video of Mark Zuckerberg as part of the conceptual art installation Spectre, shown at an exhibition called "alternative realities" in England. Aimed at the Big Data world, the video used Zuckerberg's deepfake image to lay out a critique of information security. Posters and Howe posted the Zuckerberg deepfake, co-produced with the advertising company Canny, on Instagram in June 2019. The video was edited to look like part of a TV newscast, styled as "CBSN" to evoke CBS, and ran with a ticker announcing "we increase transparency in advertisements."
Again, in the "smiling Zuckerberg" deepfake released in July 2019, staged as a TV news live broadcast, the Facebook CEO's testimony at the U.S. House Judiciary Committee's technology-competition session was satirized: Zuckerberg appeared to join the committee meeting by video conference while having a barbecue in his garden.
The Salvador Dalí deepfake at the Dalí Museum kiosk
In an interview, the surrealist painter Salvador Dalí said, "I believe in death in general, but I absolutely do not believe in the death of Dalí." The Dalí Museum in St. Petersburg, Florida made the painter's prophecy come true with a deepfake video, bringing a fresh approach to the marketing of art along the way. At the "Dalí Lives" exhibition, Dalí greets visitors from a kiosk screen when the bell is pressed and tells them stories.
Produced with machine-learning-based deepfake technology in collaboration with the advertising agency Goodby, Silverstein & Partners (GS&P), the video puts a life-size Dalí in front of his audience. Drawing on interviews and archival footage, GS&P used 1,000 hours of machine learning to train Dalí's AI algorithm, then imposed the resulting facial expressions on an actor with Dalí's body proportions. Excerpts from his letters and interviews were read by a voice actor able to mimic his distinctive mix of French, Spanish, and English, synchronized to the footage.
US President Nixon’s “Moon Disaster” deepfake video
Richard Nixon, the 37th US President, who served from 1969 until he was forced to resign during his second term in 1974, witnessed another historic event during his tenure: as part of his country's space program, humanity set foot on the Moon for the first time. No one could be sure what the Apollo 11 astronauts Neil Armstrong and Edwin "Buzz" Aldrin would encounter, and every eventuality had to be prepared for, so two alternative speeches were drafted for the president. When the "giant step for humanity" was successfully taken on the lunar surface, there was no need to speak of disaster.
Thanks to today's deepfake technology, Nixon finally delivered the "moon disaster" speech written by William Safire in 1969, a speech he never needed to give while alive, some 26 years after his death. "In Event of Moon Disaster" was directed by Francesca Panetta and Halsey Burgund of the Massachusetts Institute of Technology (MIT). The production, built on the video-dialogue-replacement technology developed by Canny AI and supported by Respeecher's voice-cloning technology, took more than six months to prepare. Half a century after its subject, the film let viewers experience an alternate history.
Elon Musk’s Zoombombing deepfake
Elon Musk, one of the leading figures of tech entrepreneurship, quickly became one of the most popular subjects of deepfake videos, just like Mark Zuckerberg. As the pandemic rapidly spread the habit of meeting over video-conferencing applications, especially Zoom, attempts to infiltrate meetings by a cyber-sabotage method called Zoombombing began to emerge. In April 2020 the programmer Ali Aliev caused great surprise and excitement at a Zoom meeting he slipped into, taking on the image of Elon Musk in real time with Avatarify, the deepfake technology he had developed.
Avatarify, whose developer and CEO is Ali Aliev, offered "deepfake" filters for impersonating celebrities. Using an algorithm that places the target's face over the user's and moves the mask's eyes and mouth synchronously in real time, it could be used in video calls on Zoom and Skype. Aliev said that in developing Avatarify he used the open-source code of the "First Order Motion Model for Image Animation," a tracking algorithm first created at the University of Trento in Italy.
Robert De Niro’s “Irishman” deepfake
Netflix unveiled its ambitious film The Irishman, on which it spent $175 million, ahead of Christmas 2019. A ten-year project of the famous director Martin Scorsese, combining living legends of cinema such as Robert De Niro, Al Pacino, and Joe Pesci, the film was released to sweep the world's film awards, the Oscars above all.
A YouTuber known as iFake, described as a deepfake artist with 24 thousand followers, stole some of the thunder of both the world-famous Oscar-winning director and Netflix's multi-million-dollar investment with a single deepfake video. The de-aging effect iFake applied to Robert De Niro in The Irishman with free deepfake software was judged by image and cinema authorities to be far more successful than Netflix's expensive CGI effect. iFake's video of a rejuvenated De Niro scene was watched by more than half a million people, and iFake achieved it with free deepfake software and just one week of effort.
Al Pacino as De Niro in "Taxi Driver," as deepfake
Although Robert De Niro and Al Pacino are often compared to each other, they have starred together in only four films to date. That Martin Scorsese, who also directed The Irishman, chose De Niro rather than Al Pacino for the 1976 crime classic "Taxi Driver" was no obstacle for deepfake technology.
In the key scenes in which Travis Bickle, the fictional character portrayed by De Niro, tests himself in a show of weapons and power, the deepfake Al Pacino gives a rather impressive performance. The Al Pacino "Taxi Driver" deepfake, released in September 2019, has drawn close to half a million views. It was produced by the YouTuber "Ctrl Shift Face," one of the deepfake artists supplying AI-based artistic content on Patreon.com.
An alternate-version "Matrix" deepfake, released before "Matrix 4"
The Matrix, among the Hollywood sci-fi classics since its first film in 1999, is preparing to meet audiences with a new installment at the end of May. But a Matrix deepfake, a co-production of Ctrl Shift Face, maker of the Al Pacino Taxi Driver deepfake, and Chris Ume, maker of the Tom Cruise TikTok deepfakes, managed to excite audiences before Matrix 4.
The deepfake, an alternate version of The Matrix released on YouTube in February 2020, shows what might have happened to Neo, one of the film's main characters, had he chosen the blue pill. Spliced together from images of The Matrix and the movie Office Space, the video looks admirably realistic. One of the most memorable scenes of the first film is the moment Mr. Anderson decides to become Neo: Morpheus holds out a red pill and a blue pill and asks Anderson to make a decision. In the original, Anderson chooses the red pill, choosing to stay in Wonderland and see how deep the rabbit hole goes. In the deepfake, Anderson prefers to return to his ordinary life by choosing the blue pill, wakes up in place of the Peter Gibbons character in Office Space, and makes an excellent office worker in its cubicle world. In the final scene, Agent Smith, wearing his black glasses, appears in Anderson's office.
In a deepfake video, Chris Pratt is cast in Harrison Ford’s “Indiana Jones” role
Will deepfake technology be a glimmer of hope for actors who face losing their legendary roles as they age, or proof that younger candidates can take their place? Harrison Ford, now in his 70s, must be facing this dilemma. Because, like his Han Solo character in Star Wars, his lead role in the Indiana Jones adventures is under deepfake threat.
For the lead role of Indiana Jones 5, scheduled to be shot this year and released next summer, the young American film and TV star Chris Pratt was already among the names mentioned. The YouTube channel Shamook has virtually recast the role with its new Indiana Jones deepfake video featuring Chris Pratt as Indy, much as Harrison Ford met a younger version of his Han Solo character in Star Wars’ Solo. For the Indiana Jones deepfake video, a face profile was created by training the model on 5,000 HD photos of Chris Pratt. Exploiting the actors’ resemblance to each other, deepfake technology then transferred this profile onto Harrison Ford.
Captain Marvel, the “what if it were Charlize Theron” deepfake video
Captain Marvel is a 2019 American superhero film based on the comic book character Carol Danvers. Published in many countries by Marvel, the comic was again adapted for cinema under the production of Marvel Studios. Brie Larson eventually won the lead role at the film’s auditions, but quite a few fans would have preferred Charlize Theron for the part. It was said that Theron had not accepted the role because she was 15 years older than the character in the film. Shamook, one of the deepfake producers on Patreon.com, did not disappoint Theron’s fans: his Captain Marvel deepfake impressively showed how well the lead role would have suited her.
Jim Carrey deepfake for Jack Nicholson’s “The Shining” role
Few actors can compete with Jack Nicholson’s manic fury, so it wasn’t easy to imagine anyone else playing the role of Jack Torrance in Stanley Kubrick’s adaptation of “The Shining.” The 1980 horror film gave Nicholson one of his most iconic roles. An actor famous above all for his comedies would hardly have been anyone’s first thought for the part. Yet deepfake technology has stripped even Jack Nicholson of his role: in July 2019 the film was remade using deepfake technology, with Jim Carrey in the lead.
Jim Carrey, a veteran of comedy films, was so convincingly integrated into Kubrick’s “The Shining” that he appeared alongside Shelley Duvall in one of the film’s most famous scenes. Carrey was one of the few actors whose facial contours could match Nicholson’s hysteria, which is why his face sits so well on the character of Jack Torrance. The “The Shining” deepfake video, posted by Ctrl Shift Face’s YouTube account, also the producer of many other famous deepfake videos, went viral with over a million views.
The post Source Of High Appreciation And Concern; The Most Convincing Deepfake Videos In History first appeared on Deepware - Scan & Detect Deepfake Videos With a Simple tool.
Deepfake porn sites emerged
Through AI-driven synthetic media and deepfake technology, people are unwittingly turned into the subjects of adult content in videos or photos, often with faces stolen from their social media accounts, and are exploited as a result. Revenge porn, the first deepfake video content published on Reddit.com in 2017 and estimated to account for 95% of deepfake content today, is now published on dedicated deepfake porn sites and goes viral on social media. Ordinary people, succumbing to psychological problems or obsessions and acting with various bad intentions, are increasingly joining the ranks of amateur producers of adult deepfake content. Experts predict that the number of adult deepfake videos circulating online, with innocent people chosen as victims, will rise to 180,000 this year and 720,000 next year.

Without bringing adult content, which has become one of the human mind’s greatest weaknesses, under control, fighting the deepfake threat seems unlikely. At Deepware AI, we are leading an important development in this area by adding an Adult Scanner function to Deepware Scanner, the first online deepfake scanning and detection engine, which we developed.
Who is winning, who is losing?
According to the Online Porn Market 2019-2027 survey by the India-based market research organization Absolute Markets Insights, the global online adult content market was estimated at $35.17 billion for 2019. Boosted by the pandemic and cryptocurrency content subscriptions, the market is estimated to have exceeded $40 billion last year, growing 15.12%. The pornography sector, which commands such a giant economy, no longer confines itself to producing and selling content aimed at humanity’s instinctive gratification. By spreading the pornography epidemic through society, it encourages and drives amateur content production. Even those who are not paying customers of adult content can produce pornographic deepfakes of a chosen victim with the help of easily accessible mobile applications and user-friendly web interfaces. When they set out to harm someone else, they do not consider that they or a relative may become victims of a similar attack in the future.
For the most part, the lives of abused victims are turned upside down when their faces are added to naked bodies in adult media. Once such images circulate online, whether they are real is rarely questioned, or the question increasingly loses its importance. The chosen victims risk losing their reputation, dignity, spouses, families, social circles, jobs, educational and career opportunities, their future and everything of value to them, in a moment.
Revenge porn, the most dangerous method of blackmail
Adult deepfake media is becoming a very dangerous assault weapon, amounting to harassment and a reputation attack directed at the victim. That attack becomes visible only once the deepfake in question circulates online. At the stage before online circulation, we may never know who has already succumbed to dark threats and blackmail, or what dirty demands victims will be unable to resist in the future. In more conservative societies, fake synthetic media with sexual content can become a method of blackmail whose consequences are too heavy ever to be borne.
When women who, by virtue of their roles and missions, must put the good of society and nation above all else, politicians, bureaucrats, diplomats, journalists, writers, activists, opinion leaders, are threatened with adult deepfake media, to what extent will they be able to resist? How can they withstand such threats and continue to defend society’s interests, especially in more conservative regions such as the Middle East and North Africa?
The price of investigative journalism is fake nude images served to half the country
Rana Ayyub is a famous Muslim investigative journalist who made her name with research exposing political scandals such as human rights violations and corruption in India. The young journalist and writer described a revenge-porn attack in her home country in a piece for the US-based news site the Huffington Post. In her article “I’m a victim of a deepfake porn incident to silence me”, published in the “Lives That Are Less Ordinary” section of huffingtonpost.com on 21 November 2018, Ayyub recounted that she had long been the target of misogynistic harassment and hate messages on social media because of her investigative journalism. For Ayyub, who had always tried to ignore the danger by convincing herself it was only online hate, everything changed within two days in April 2018.

With polarization and tension gripping the country over the rape of an eight-year-old Kashmiri girl, Rana Ayyub was invited onto TV programs on the BBC and Al Jazeera to discuss how sexual abusers of children are protected in India. The next day, a series of fake tweets began circulating on social media, allegedly posted from her account. The fake tweets, carrying messages such as “I hate India” and “I love child rapists and I support them if they do it in the name of Islam”, were followed a day later by a 2.5-minute revenge-porn video and an online lynching campaign put into circulation. Moreover, a source from the ruling BJP party told her by email that the video using her face had been distributed to at least half the mobile phones and social media accounts in the country via WhatsApp.

The law is not enough to ensure justice against deepfake
Although the harassment campaign, carried out via Twitter, Facebook and WhatsApp, went beyond the limits of human dignity and tolerance, the police and courts were allegedly indifferent to her complaints. Meanwhile, the video was shared tens of thousands of times and drew hundreds of thousands of abusive comments. Only when the incident took on an international dimension and United Nations rapporteurs stepped in to warn the Indian government did the legal process begin to move. But given the enormous trauma the young journalist had suffered, it was already too late.
On October 30, 2020, Rana Ayyub appeared on NPR’s TED Radio Hour program with Professor Danielle Citron of Boston University School of Law, who studies cyber harassment. The journalist pointed out that an important minister in Modi’s government, jailed in 2010 as a result of her corruption reporting, was the second most powerful man in India at the time of these events. Ayyub said the trauma left her bedridden for five days and unable to leave the house for six months. Prof. Citron, for her part, emphasized that provocative fake content spreads online ten times faster than truthful content.
A barrier against pornography, detection against deepfakes
“Deepfake pornography” is in fact a double-layered social hazard. The damage that pornography, which exploits the human body and by its obscene nature encourages a propensity for violence, inflicts on individual psychology and society’s moral values needs no explanation. The trauma of this abuse on children and young people is even greater. Add to it the trauma experienced by deepfake victims, dragged into this degradation without their knowledge or consent and with no connection to such content, and it becomes clearer how vital a two-stage countermeasure is. Against this most dangerous and harmful type of deepfake, it is necessary both to prevent the uncontrolled availability of such media content and to expose deepfakes’ deceptive qualities.
Deepware Adult Scanner ignites technological fight against deepfake pornography
Deepware Scanner, a multi-layer deepfake scanning and detection engine developed by Deepware AI’s engineers at deepware.ai as a world first, offers online users the ability to scan videos for free and receive a report on the likelihood of deepfake through our website. At Deepware AI, we have also ignited the fight against “deepfake pornography” by adding an Adult Scanner filter to our Deepware Scanner product, again a world first.
The Adult Scanner filter added to Deepware Scanner detects pornographic video content using advanced AI algorithms. If a video submitted via link or as a file carries pornographic content, its display is blocked during scanning and the content is flagged in the report. In this way, the system protects the scanning user from viewing obscene content in any scene of the scanned video. In addition, because the scan report marks such a video as pornographic, whether or not it is a deepfake, the video is also prevented from reaching other users through search engines.

Although Deepware Scanner blocks such a video from being displayed because it carries pornographic content, it still reports to the scanning user the probability that the video is a deepfake. As a result, a user who scans a video with Deepware Scanner is protected from viewing pornographic images and from becoming an instrument of the video’s distribution, while still being informed about the likely authenticity of the video.
Whether pornographic deepfakes circulate online or victims submit to blackmail, the price to be paid will be too high. What if adult deepfake media, so easily produced to serve dark intentions, soon gets out of control? What becomes of the world when social-engineering-based lynching campaigns aimed at abuse, harassment and blackmail extinguish common sense, justice and conscience? With this picture before us, how convincing can talk of fighting deepfakes be without first curbing the degradation of pornography?
What we’re trying to explain with the cocktail analogy is hybrid perception: a perception sweetened by adding digital synthetic reality to real-life physical reality. The result is called “Mixed Reality”, reflecting the diluted state of our perception of physical reality.
Holographic meeting
Digital synthetic reality, taking advantage of the pandemic, is entering our lives through mixed reality solutions. Its synthetic ingredients bring us together holographically in our own physical environments. Crossing physical distances and maintaining face-to-face life as before, without risking contact, may appeal to all of us. But when synthetic reality surrounds our lives and perception becomes dependent on it, what kind of world will we wake up to? How will we be protected from the digital dangers knocking at the door?
The process of virtualizing reality…
The digital transformation of reality has arrived in a short time, but gradually, in stages. Virtual Reality (VR), consisting entirely of synthetic content, was a computer simulation wrapped in 360-degree video, and so it did not succeed in replacing physical reality. Accessible through headsets such as the Oculus Rift, HTC Vive or Google Cardboard, virtual reality could not wean mankind off the five senses that are our genetic heritage.
Augmented reality (AR), on the other hand, was a transitional solution, connecting a person to digital reality without disconnecting them from the physical world. Superimposing digital content over physical reality provided little more than a fun effect; for a period, Pokémon GO madness engulfed the world. But digital content could not interact with physical objects in the real world, which left augmented reality as no more than a prosaic digital experience.
For synthetic reality to be accepted, digital experiences had to be made more palatable without tearing people out of real life. That was only possible by getting perception a little drunk, without disturbing the body, and pleasantly acclimatizing it to artificial reality.
Microsoft Mesh aims for a holographic universe
Microsoft had already promised users a mixed reality experience in an earlier step with its HoloLens 2 glasses. Microsoft Mesh, a mixed reality platform built on the Azure cloud service, was recently introduced to users. Unlike augmented reality, Microsoft’s new mixed reality technology, Microsoft Mesh, allows digital content to interact with the physical environment.
Applications built on the Microsoft Mesh platform let users collaborate in holographic meetings and work together on three-dimensional (3D) digital content wherever they are. HoloLens 2 looks rather like the face visors we began wearing during the Covid-19 coronavirus pandemic. It allows holographic synthetic reality to integrate with objects in the physical reality we live in. You can work with colleagues on, for example, a 3D car prototype or an architectural model, and make changes to the 3D model with physical touches. Microsoft Mesh aims to cover all digital devices developed to date, from smartphones to PCs to virtual and augmented reality glasses, offering the same experience on each.
Future version of Microsoft Teams …
Microsoft Mesh is in fact seen as the future of the Microsoft Teams platform, because it brings Teams’ online collaboration capabilities into 3D in holographic form.
In 2017, Microsoft introduced Microsoft Teams, a remote work and distance learning platform combining video conferencing, chat, meetings, notes, and plug-ins. Teams was designed as a competitor to Slack, the forerunner in this field. It is as if, when such platforms were designed, today’s pandemic conditions had been foreseen: they allowed teams scattered across remote locations to work together and share all kinds of content in real time, as if they were in the same office.
From the very beginning, the target was holographic teleportation
Was Microsoft Teams designed as a basic model for Microsoft Mesh? Brazilian software engineer Alex Kipman, best known as the inventor of HoloLens, Microsoft’s first-generation mixed reality glasses, makes no secret of it.
During the launch of Microsoft Mesh, Kipman paid a holographic visit to Tom Warren, an editor at The Verge, an important voice in US technology media, at his home. Warren wrote of his holographic experience: “He showed up in my living room to show me digital jellyfish and sharks. It may sound like a strange dream, but it was a meeting made possible through Microsoft’s new Mesh platform. I put on a HoloLens 2, joined a virtual meeting room, and Kipman immediately appeared next to my coffee table.” Warren described the experience as “feeling like a Microsoft Teams meeting planned in the future.” Kipman, who was named inventor of the year in the United States in 2012, replied: “This was the dream of mixed reality, the idea from the beginning. You can feel that you are in the same place as someone who shares content. You can teleport from different mixed reality devices and be with people even if you’re not physically together.”

Holographic social media coming soon…
With digital transformation accelerating at this pace, the question of what comes next naturally arises. It is not really hard to predict; it is enough to put the puzzle pieces together. Microsoft Holoportation is described as a new 3D capture technology developed by Microsoft that allows high-quality 3D human models to be reconstructed, compressed, and transmitted anywhere. The fog lifts a little when you recall that Microsoft acquired the VR social network AltspaceVR in 2017. Indeed, all these developments intersect in 2017: the year Microsoft Teams became available was also the year the insidious AI-based synthetic media type called “deepfake” was first encountered.
Microsoft Mesh will perhaps initially represent users with synthetic avatars on the AltspaceVR social network. Eventually, however, the Mesh platform will support the technology Microsoft calls “holoportation”, so people will appear on a virtual social platform as holograms that look like themselves. This experience will feel far more immersive than the video calls we now make in abundance through social media and video conferencing apps such as Facebook, WhatsApp, Zoom, Google Meet or Skype.
Do we move away from humanity along with physical reality?
Because we are not yet subject to intensive digitization, we remain relatively protected from synthetic content today. But once we can no longer resist the temptation of holographic interaction, our connection to physical reality will gradually begin to weaken.
The digital reflections of mixed reality may seem very attractive today. But they will also increasingly cloud consciousness and perception against the hyper-realistic deepfake attacks of the near future, leaving online users more vulnerable to cyber dangers such as disinformation, phishing, impersonation fraud, and reputation assassination.
Once synthetic reality enters the bloodstream, it will spread virally through society. Microsoft Mesh perhaps marks a new era for a digitized society. Our wish is that as we move away from natural reality, right and wrong, good and evil do not blur, and humanity does not stray so far from the original that it needs a Messiah…
Let’s take a look at the possible effects of cybercrime, and scenarios starring deepfake, directed against financial institutions and the financial system.
Bank bankrupting rumors
To date, deposit banks in several countries have run into problems stemming in part from social media rumors of financial weakness. Online rumors about a bank’s weak financial position, though often greatly exaggerated, have repeatedly caused real problems in the banking sector. Social media, traditional media and word of mouth provide an effective space for spreading such rumors, and once public doubts about a bank’s credibility begin to spread, social media can amplify them quickly.

Determining the sources and basis of bank rumors can be difficult. In 2014, the Bulgarian government accused opposition parties of coordinating an attack on the reputation of several banks, but did not name them. Events like this, whatever their source, offer cyber attackers a template they can reuse in the future.
A synthetic social botnet built from thousands of seized devices could also be used to provoke or intensify the rumors that drive runs on banks. Alternatively, a deepfake video posted on social media could depict a bank executive or government official discussing serious liquidity problems. An effective deepfake-focused cyber attack on banks would most likely come at a time when a country’s financial system is already under strain.

Cyber attackers can also wield the deepfake weapon alongside real images. By pairing genuine videos with false or misleading context, they can make authentic footage create a false perception. For example, footage of queues in front of branches, shot years earlier or in a different country, can be presented as crowds trying to withdraw their deposits from a bank.
Scenarios for suddenly locking markets

On April 23, 2013, a state-backed hacking group called the Syrian Electronic Army hijacked the Associated Press Twitter account and tweeted: “Breaking: two explosions in the White House and Barack Obama is injured.” This false claim triggered an instant flood of trades that has been called “the most active two minutes in stock market history,” with automated trading algorithms making up the bulk of the volume. In just three minutes, the S&P 500 lost $136 billion in value, and crude oil prices and Treasury bond yields also fell. But the shock ended as quickly as it began: markets fully recovered minutes later.

Cyber criminals with political or financial goals may well try to use deepfakes to cause the kind of shock and chaos that suddenly locks down markets. For example, a synthetic audio or video recording of Saudi and Russian oil ministers negotiating production quotas could be circulated online to disrupt oil prices and other markets, if only briefly. Such fakes can be quickly refuted, yet countries and leaders who already face distrust may find it hard to dispel the suspicions a deepfake sows. If the shock and chaos that roils markets and exchanges is resolved only after a delay, the bill for the resulting damage grows very heavy.

A convincing deepfake can undoubtedly cause greater and more lasting harm than a few minutes of Twitter account hijacking. Deepfake videos in particular benefit from the “picture superiority effect”, a psychological bias that makes visual content more believable and memorable than other types. By spreading misinformation organically through social and traditional media, deepfakes can eliminate the need to hack an influential news account to publicize a false claim.
In the seven years since the Syrian Electronic Army’s Twitter attack, no comparable market-manipulating disinformation attack has occurred, and market players and observers have become warier of breaking news. Yet another flash attack, this time deepfake-driven, remains possible at any moment. Even a short-lived collapse can let attackers profit through well-timed trades and leave lasting psychological effects.
Disinformation attacks on central banks and public financial regulators
Central banks and financial regulators around the world have been forced to tackle cyber attacks carried out as online rumors designed to manipulate markets. In 2019, the central banks of India and Myanmar each tried to quell social media rumors that certain commercial banks would soon be shut down. In 2010, false claims that the head of China’s central bank had defected spread online and spooked short-term credit markets. In 2000, US stocks were hit for several hours by false rumors that the Federal Reserve chairman had been in a car accident.

Deepfakes could be used to create fake audio or video of central bank executives privately discussing future interest rate changes, liquidity problems or exchange rate policy. For example, a “leaked” audio clip of a fictitious central bank meeting could create the perception that officials are worried about inflation and planning to raise interest rates. Deepfakes can also target central bank governors or financial regulators personally, for political ends: the head of a public institution could, for instance, be shown taking a bribe from a businessperson to shut down a corruption investigation.
The victim’s credit of public trust determines a deepfake’s impact
Deepfakes will probably have greater impact in countries where trust in financial oversight mechanisms is lower and democracies and economies are less developed. The community’s trust in the victim is critical to effectively debunking a deepfake. In times of financial crisis, deepfakes can exploit and amplify pre-existing economic fears.
Even in large, stable economies, central banks and financial authorities are often criticized for vague, sweeping or slow public communication. A flawed government response to an unexpected deepfake could extend the window in which attackers can sow chaos and profit from short-term speculative trades.
Manipulative synthetic public opinion against financial policies

Astroturfing is the practice of hiding the sponsors of a message or organization to make it appear supported by grassroots participants. Synthetic public support acting as a mask can be manufactured for a political cause, an advertising effort, or a public relations project. By withholding information about sponsors’ financial ties, the method lends statements or organizations false credibility, creating the perception of “fake” or “artificial” support rather than a “real” or “natural” base of supporters.
Regulators, including those overseeing the financial sector, will increasingly have to deal with covert attempts to manipulate policy by creating a perception of mass support. The U.S. Securities and Exchange Commission and the Consumer Financial Protection Bureau, for example, have encountered large-scale abuse of their online systems for public comment on proposed regulations.
Content generated by artificial intelligence can make synthetic mass support seem more realistic. Synthetic text-generating algorithms can produce any amount of text on any subject. Astroturfers can use this technique to generate thousands or millions of fake comments opposing or supporting a particular financial regulation. Comments of this kind look far more convincing, and are far harder to detect, than traditional fake campaigns.
Synthetic fakery for $100
A Harvard University student successfully tested this approach in 2019. Using past regulatory comments as training data, he synthesized 1,001 comments on a real proposed rule. The generated comments were of high quality and expressed a variety of arguments; people asked to review both synthetic and genuine comments were unable to distinguish between them. Notably, the research cost less than $100 and was carried out on an “older, everyday model HP laptop” by a college senior who was, in his own words, a “novice coder.”

Detection tools are being developed to help distinguish AI-generated text from human-written text; algorithms can be trained to spot the difference. But detection algorithms are not flawless, and a dedicated adversary will use every means to evade them. For example, synthetic text generators could be tuned to produce more irregular, human-like output.
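To make this cat-and-mouse dynamic concrete, here is a deliberately minimal sketch, in Python, of the kind of weak statistical signal a detector might start from: machine-generated comment floods often reuse phrasing, so an unusually low share of distinct words can be one clue. The function names and the 0.5 threshold are illustrative assumptions of ours, not any real detector’s design, and, as noted above, a determined adversary can tune a generator to evade exactly such signals.

```python
import re


def type_token_ratio(text: str) -> float:
    """Share of distinct words among all words; repetitive text scores lower."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0


def looks_templated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary is suspiciously repetitive.

    Toy heuristic only: the 0.5 threshold is an arbitrary illustrative
    choice, and real detectors rely on far richer signals (for example,
    token probabilities under a language model).
    """
    return type_token_ratio(text) < threshold
```

A repetitive comment flood such as `"great rule great rule great rule"` scores low and is flagged, while varied prose passes, which is precisely why generators tuned for lexical variety slip through heuristics this simple.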
Even the US is vulnerable to synthetic fake public opinion

Synthetic text detection tools can help only if used extensively and widely. Even in a cradle of technology like the United States, it is worrying that institutions have not yet implemented far more basic forms of comment verification. A 2019 U.S. Senate report found that none of the fourteen agencies surveyed used CAPTCHAs, or indeed any technology, to verify that those submitting public comments were real people. In the Harvard experiment, all 1,001 synthetic comments were successfully submitted to the agency; before being voluntarily withdrawn, the AI-generated comments accounted for the majority of the 1,810 comments the agency received.
Manipulative synthetic mass-support operations are seen as a threat that erodes public confidence in the rule-making process and can carry significant legal or political consequences. Synthetic tools pose great danger as a powerful new method of digital astroturfing.
None of these scenarios yet seems to represent a serious threat to the stability of the global financial system or to national markets in mature, healthy economies. Developed economies are often assumed to be resistant to disinformation campaigns, whatever the technique used, and even before deepfakes were invented there were occasional manipulative attacks that upset markets. For now, synthetic media is seen as more likely to cause material damage to targeted individuals and businesses. Emerging markets, however, face greater threats: countries with weaker economies and less trusted institutions already have to fight harder against financial disinformation, and deepfakes could exacerbate that problem. Yet developed countries, too, will be more vulnerable at moments of international financial crisis.
Cryptocurrency and deepfake determine the future of cyber fraud
In the last two articles, we discussed possible financial crime scenarios that use artificial intelligence (AI) technology against individuals and companies. Until recently, cyber financial attacks were largely organized and carried out through the banking or capital markets system. The new economy’s cryptocurrency revolution has opened new horizons for cyber financial crime.

“Digital fraud scenarios” worthy of Oscar-winning movies are now easier and more profitable thanks to Bitcoin, the most powerful representative of cryptocurrency. For the past two years, cyber fraudsters have had no trouble finding crowds who believe, every time, that Elon Musk is distributing Bitcoin through Twitter. Deepfake, the star of synthetic media, will undoubtedly play the lead role in future episodes of this already highly profitable digital fraud series. By producing deepfake audio and video, Bitcoin scammers will be able to make any identity they want, especially Elon Musk’s, even more believable. How, then, will tech geeks so ready to take the bait keep millions more from getting caught up?
Deepfake announces its voice early with voice phishing
Using phishing, the oldest and most common method of cybercrime, fraudsters set traps for victims by impersonating people or institutions the victims have long trusted. The best-known examples are fake bill-payment and wire-transfer instructions sent from the corporate email account of a company CEO to employees in the finance department. Deepfake offers criminals new phishing scenarios that perhaps even the devil would not think of.

The 2019 case in which voice synthesis was used to imitate a company CEO and transmit a fake payment instruction by phone, costing roughly a quarter of a million dollars, was almost an early warning about voice phishing (vishing). The fact that this incident entered the judicial record as the first case of deepfake cyber fraud took concerns a step further. Deepfake, in other words, made its voice heard in this area about two years ago.
Twitter is the digital scene of Bitcoin fraudsters

Over the past few years, fraudsters using conventional phishing and vishing methods have started using Twitter, one of the most powerful social media platforms, as a movie set for their own scripts. Attackers hack and hijack the verified Twitter accounts of famous and important people. Or, with less hassle, they hack into any other verified Twitter account and change its name to that of the famous or important person their scenario requires. They then send fake tweets from that account and retweet and spread them through other accounts they have hacked. The fake corporate email is thus replaced by a tweet. With fake tweets, they angle for tech geeks who think they are alert. Cryptocurrency, meanwhile, lets the digital fraud network operate faster and more easily, without getting stuck at the security thresholds of the banking system.
They would put in $1 and get $10 of Elon Musk’s Bitcoin
In late 2018, for example, cyber attackers managed to collect hundreds of thousands of dollars by impersonating the world-famous tech entrepreneur Elon Musk. The attackers, who hacked the account of the American publisher Pantheon Books along with many other verified Twitter accounts, changed the account name to Elon Musk. The fake Musk then tweeted: “I’m leaving my position at Tesla. Thank you for your support. I decided to give the biggest cryptocurrency gift in history to my followers. I’m distributing 10,000 Bitcoin (BTC) ($64 million) to the entire community. To verify your BTC account, send between 0.1 BTC and 2 BTC to the following address, and get in return between 1 BTC and 20 BTC.”

The fake Elon Musk was promising his followers a tenfold gain. More than 400 people took the bait; some sent 0.5, 0.75 and even 0.99995 BTC (about $6,000). The cyber fraudsters collected some $180,000 this way.
Interesting Elon Musk, AI, deepfake and Bitcoin coincidences
First, let’s note that the 2018 incident was not the last Bitcoin fraud carried out on Twitter under Elon Musk’s name; similar ones continued into early 2021. Before we go further, the name Elon Musk deserves a little attention, because interesting coincidences intersect on it.

Elon Musk, for example, is spearheading a kind of reverse engineering with the Neuralink project, probing the human brain’s secrets to advance artificial intelligence. Deepfake, considered the most dangerous product of artificial intelligence, was first introduced in 2017 on reddit.com by an unidentified user called “deepfakes”. Elon Musk was the subject of the first real-time deepfake attempt on Zoom, which has become one of the cornerstones of digital life accelerated by the pandemic. Musk rallied those who joined the famous GameStop operation, organized on reddit.com, which challenged financial institutions short-selling the stock. Bitcoin is the determinant of the cryptocurrency market, which is estimated to reach $1 trillion with a market volume of more than $700 billion. Tesla, the US electric car maker whose name has been used many times in Bitcoin scams to trick victims, is leading the Bitcoin market, with Elon Musk as its CEO. Tesla announced last month that it had bought $1.5 billion worth of Bitcoin, fueling BTC’s pandemic-era rise. Somehow, every digital curtain opens on Elon Musk.
Elon Musk also gave advice to the Game of Thrones star

In mid-November last year, the business and finance website Benzinga.com reported a cryptocurrency fraud attempt that again used Elon Musk’s name. The hacked Twitter account, which posed as Elon Musk to offer advice and lure viewers to a website, belonged to the award-winning Swedish programmer Daniel Stenberg (@bagder). The account had a verified blue check mark, but its name had been changed to “Elon Musk.”

In those days, Game of Thrones actress Maisie Williams asked her 2.7 million Twitter followers for advice on whether to buy Bitcoin. The funny thing is that the famous star got advice from both the real and the fake Elon Musk. Although the real Elon Musk may have confused the legendary series starring Williams with Netflix’s The Witcher, he did reply. The fake Musk account, responding to Williams, shared a fake website and a YouTube link that is no longer active.
If Bitcoin scams continue to occur so frequently, Twitter could lose the trust of its celebrity users: tweets sent from the fake Musk account, linking to a website designed to steal cryptocurrency, remained active for two days.
Twitter catches the joke, misses the scammers

Twitter’s rules state that impersonating another person to deceive its users violates its terms of service and results in account suspension. More ironic still, Elon Musk had spent 2018 tweeting warnings about “Bitcoin scams” from his own account. After the 2018 incident, when Musk joked “Would you like to buy some bitcoin?” in a tweet, Twitter suspended his real account.
The Elon Musk name earned Bitcoin scammers $580,000 in a week, $2 million in 2 months
In a report published on Finance.yahoo.com on January 15, the security research firm MalwareHunterTeam said the number of victims of Elon Musk–themed cryptocurrency scams on Twitter is increasing. The method is simple: scammers hack verified Twitter accounts, change the name to “Elon Musk,” and ask people to send cryptocurrency in exchange for a larger amount back. And people fall for it every time. According to data compiled by Bleeping Computer and MetaMask, fraudsters collected about $587,000 worth of Bitcoin this way in the second week of the year.

Twitter has been dealing with cryptocurrency giveaway fraud for a long time. Data compiled by the cybersecurity firm Adaptiv in June 2020 shows that Bitcoin fraudsters made $2 million in profits over a two-month period using Musk’s name.
As technology improves, so do the methods of cyberattack. Voice synthesis and face swapping are laying the groundwork for new and larger attacks on finance. Cyberattacks are expected to increase in many areas this year, especially in cryptocurrency markets, and deepfake is expected to strike the most convincing blow in phishing scenarios.
Millions will be fleeced with the promise of money from thin air
Danger is at the door, because its audience is ready. It is not hard to imagine: an email goes out to employees from the account of the CEO of a large company. When they click the link, employees see an announcement that the company will distribute Bitcoin to them this month. In a video on the page, the CEO personally describes the gesture: to help employees adapt to digital life and get acquainted with cryptocurrency, the company has prepared a gift. All employees have to do is send, say, 1 BTC to the given address, and every sender will get 2 BTC back. What will a vulnerable employee do, if not trust the CEO’s word? Moreover, it is a “100% profitable” offer, money made from thin air…
It’s only a matter of time before cyber attackers start using deepfakes on phishing victims. There is no way they will ignore such a big opportunity. And there is no shortage of people who believe the same tweet every time, eager to collect cryptocurrency out of thin air…
The post How Will Those Who Believe Every Time Tweet Of Elon Musk That He Distributes Bitcoin Resist When They See His Deepfake? first appeared on Deepware - Scan & Detect Deepfake Videos With a Simple tool.
Directly comparing synthetic media and its most powerful weapon, deepfake, with more widely used tools allows us to better understand possible threat scenarios that can greatly empower cyber attackers and necessarily require new measures against them. In the same way, it also helps identify scenarios that are no more dangerous than today’s threats and do not require additional precautions.
With widespread financial crimes targeting companies, let’s briefly consider the potential contribution of synthetic media and deepfake to these crimes…
Payment fraud reaches half the damage caused by cybercrime
Tricking firms into launching fraudulent payments let fraudsters steal more than $1.7 billion from companies in the US in 2019, according to the Federal Bureau of Investigation (FBI). That is almost half of the total reported loss from all cybercrime. Criminals often hack into the email account of a senior company executive, such as the chief executive officer (CEO), and then contact a finance officer with an urgent bank-transfer request. Criminals can also pose as a supplier or employee of the company with fake invoices.

Deepfakes can make such phone calls more realistic at companies where corporate email confirmation is implemented. In fact, a convincing deepfake call can even eliminate the need for email hacking or fraud in some cases. Those who have not developed awareness may find deepfake video calls even more convincing than voice calls.
The use of deepfakes to commit fraud has already been documented on a small scale. In 2019, criminals cloned the voice of a German CEO, successfully tricking a British company employee into sending a bank transfer of $243,000. A more ambitious plan could add another layer of persuasion by bringing deepfakes into live video interviews. Current technology allows an offender to replace one face with another in real time during a video call, and because video calls often have poor image quality, flaws in the deepfake can go unnoticed or be overlooked.

Stock manipulation through disinformation campaigns
The Internet offers many ways for disinformation campaigns to manipulate stock prices. Unidentified attackers often make false or misleading claims about a targeted stock through blogs, forums, social media, bot networks, or spam. These campaigns aim to artificially inflate (a “pump and dump” scheme) or depress (a “short and distort” scheme) the stock price in order to generate quick profits. Because small-company stocks can be more easily manipulated, small companies have been the most common targets of cyber attackers. However, large corporations can also fall victim to complex disinformation campaigns, which can sometimes have both political and financial motives.

Deepfakes can lower a company’s stock price by producing seemingly believable false content, perhaps fabricating specific statements by a company leader. A cyber attacker could post a deepfake video in which a targeted CEO announces his company’s bankruptcy, admits to misconduct, or makes highly offensive comments. Alternatively, deepfakes can be designed to raise a company’s stock price by inventing positive events: for example, deepfake videos could appear to show celebrities endorsing or using a product.
A well-crafted deepfake shared through social media or spam networks can be effective in manipulating small-volume stocks. Smaller companies often lack the resources and expertise to mount a quick, persuasive defense against short-and-distort schemes. Even if a deepfake is quickly debunked, perpetrators can still profit from short-term trades.
Even when a deepfake is not believed, its trail remains
Deepfakes may also represent a new vulnerability for large companies, whose stock prices have traditionally been more resistant to manipulation. Highly visible company leaders generate large volumes of media interviews, earnings calls, and other public recordings, which make it easier for attackers to produce deepfakes.

A particularly damaging scenario would involve fabricating specific statements: for example, a synthesized recording of a CEO allegedly using sexist language. It may be impossible to prove definitively that a private conversation never took place, and a CEO with prior credibility issues faces a much harder situation. Fact could start to mix with fiction and affect the market even more.
Even if a credible deepfake is refuted, it will likely have long-term negative consequences for a company’s reputation. As with other forms of misinformation, deepfakes can leave lasting psychological impressions on some viewers even after being refuted. Experiments have shown that a significant minority will believe a deepfake is real despite clear warnings that it is fake. A long-term loss of confidence can drive down revenue and stock prices over time, especially for consumer-facing companies.
Stock manipulation by bots with artificial public opinion on social media
Stock prices can also be manipulated by creating false impressions of mass sentiment. Showing a fake social reaction to a brand on social media, for example, can create a bandwagon effect in which unreal negative opinion spreads and, over time, becomes real. Social media bots are already used for this purpose: attackers create large numbers of fake identities on a platform and then coordinate mass posting that promotes or defames specific companies. Although social media platforms use many methods, including machine learning, to identify and remove accounts that violate their policies against spam and fake profiles, doing so is extremely difficult and laborious.

While no cases have yet been publicly documented in this field, deep learning can in theory be used to build AI-driven synthetic social bot networks that better evade detection and persuade more effectively. Attackers have already started using AI-generated profile photos depicting people who do not exist, which thwarts efforts to trace a photo’s reuse. Several sophisticated influence campaigns, run by media companies with dark intentions and suspected intelligence elements, are known to have used this technique. The next step could be algorithms writing the posts themselves.
Synthetic bots are smarter and more believable than traditional bots

Whereas traditional bots create duplicate or random posts, synthetic social bots can post new, personalized content. Aware of their own previous posts, they can maintain consistent personalities, writing styles, interests and biographies over time. The most convincing synthetic bots will attract organic human followings, increasing the impact of their messages and making them harder to detect.
Each bot uses unique language and storytelling consistent with its persona, so the campaign can appear to represent broad consumer sentiment and thereby move the stock price. For example, the bots might all claim to have contracted a food-borne illness at the same fast-food chain.
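The detection gap described above can be illustrated with a toy sketch. Exact-duplicate filtering, the kind of check that catches traditional copy-paste bot campaigns, finds nothing when each synthetic bot paraphrases the same claim in its own words. The function name, threshold, and sample posts below are all hypothetical, a minimal illustration rather than a real platform’s detection pipeline:

```python
import hashlib
from collections import Counter

def flag_duplicate_posts(posts, threshold=3):
    """Flag posts whose normalized text recurs at least `threshold` times,
    the way traditional copy-paste bot campaigns are typically caught."""
    digests = [hashlib.sha256(" ".join(p.lower().split()).encode()).hexdigest()
               for p in posts]
    counts = Counter(digests)
    return [p for p, d in zip(posts, digests) if counts[d] >= threshold]

# Traditional bots repeat the same message verbatim and get flagged:
copy_paste = ["$ACME is crashing, sell now!"] * 5 + ["I love my dog"]
print(len(flag_duplicate_posts(copy_paste)))  # → 5

# Synthetic bots paraphrase, so exact-duplicate detection finds nothing:
synthetic = [
    "Got food poisoning at BurgerChain last night, never again.",
    "BurgerChain made my whole family sick this weekend.",
    "Avoid BurgerChain, I spent two days ill after eating there.",
]
print(len(flag_duplicate_posts(synthetic)))  # → 0
```

The three paraphrased posts push the same narrative, yet no two hash to the same digest, which is exactly why platforms must fall back on harder signals such as coordination patterns and account metadata.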
Danger grows as stock exchange digitizes
Synthetic social bots give stock manipulators a way to abuse investors’ appetite for analyzing social media activity for trends in consumer sentiment. A growing number of fintech companies market “social sentiment” tools that analyze what social media users say about companies. In some cases, social sentiment data has been integrated with automated trading algorithms, enabling computers to trade stocks without human intervention based on apparent trends in social media activity. Wider use of social sentiment analysis, and its deeper integration with automated trading, will increase the power of synthetic social bots to manipulate stock prices.
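The pipeline just described, a social sentiment feed wired into an automated trading rule, can be sketched in a few lines. Everything here is a made-up stand-in (the keyword lists, the thresholds, the `trade_decision` helper); real sentiment feeds use trained models, but the vulnerability is the same: a coordinated bot wave shifts the aggregate score and flips the automated decision with no human in the loop:

```python
def sentiment_signal(posts, positive=("love", "great", "buy"),
                     negative=("sick", "scam", "sell")):
    """Toy sentiment score: +1 per positive keyword present in a post,
    -1 per negative keyword, averaged over all posts."""
    score = 0
    for post in posts:
        words = post.lower().split()
        score += sum(w in words for w in positive)
        score -= sum(w in words for w in negative)
    return score / max(len(posts), 1)

def trade_decision(signal, buy_at=0.2, sell_at=-0.2):
    """Naive automated trading rule driven purely by the sentiment signal."""
    if signal > buy_at:
        return "BUY"
    if signal < sell_at:
        return "SELL"
    return "HOLD"

organic = ["great quarter, love this brand", "might buy more"]
print(trade_decision(sentiment_signal(organic)))  # → BUY

# A coordinated synthetic-bot wave swamps the organic posts:
bot_wave = ["this chain made me sick", "total scam, sell now"] * 10
print(trade_decision(sentiment_signal(organic + bot_wave)))  # → SELL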

It is important to note that any of these scenarios, though largely unrealized so far, could materialize at an unexpected moment. Alternative technologies exist for today’s financial crimes, but they are mostly proprietary or commercially controlled, which provides an opportunity to limit their proliferation and restrict future abuse. The synthetic media technologies projected in these scenarios targeting companies are widely and easily available, often for free.
How synthetic media and deepfakes threaten financial institutions and the financial system is the subject of the next post. While the financial system as a whole remains so vulnerable to a massive deepfake attack, the danger synthetic media poses to companies, as to individuals, is anything but small.
The post Deepfake Adds Strength And Credibility To Billion-Dollar Financial Frauds Targeting Companies first appeared on Deepware - Scan & Detect Deepfake Videos With a Simple tool.

Test of technology against crime
Before the pandemic accelerated digital transformation, financial criminals had already begun to explore vulnerabilities in technology and ways of using technology for criminal purposes. The evolution of financial crime into IT crime reflects the fact that the opportunities technology provides make inflicting material damage easier. In the past two years, the first publicly documented cases of deepfakes used for fraud and extortion have appeared.
Disinformation is not a new form of attack for the financial world. Crimes of deception, such as fraud, forgery and market manipulation, are threats that exploit each economy’s own conditions. Attackers, meanwhile, routinely fold new technologies into their plans. So we cannot ignore how new and effective deception tools like deepfake will make financial crimes, and attacks that cause financial harm, more dangerous.
Security starts with crisis scenarios of attacks
To produce an accurate analysis, it is necessary to identify the specific ways deepfakes and other synthetic media can facilitate financial damage, and to assess their likely impact. Deceptive synthetic media can be used to inflict financial damage on a wide range of potential targets. Obvious targets include financial institutions such as banks, exchanges, clearing houses and brokerage firms, all of which rely on accurate information to transact. Financial regulators and central banks, which oversee general market conditions and fight harmful misinformation, form another category. But companies and individuals outside the financial sector and regulatory agencies will also become targets of deepfake attacks. Looking at the history of financial crime and today’s synthetic media technology, distinct threat scenarios can therefore be predicted for four target groups: individuals, companies, financial institutions and market regulators.

The individual is targeted, but the economy is damaged
Threat scenarios can aim at one of the four groups of potential victims, and some scenarios can eventually affect more than one group. Identity theft enabled by synthetic media, for instance, does not only harm the people whose identities are stolen. Companies can be badly damaged too: banks that issue credit cards to impostors, and retailers that unwittingly process sales charged to those cards, also suffer. In other words, small-scale damages, combined simultaneously, can theoretically snowball into much larger losses.
How does Deepfake add danger to “identity theft?”
The threat of “identity theft,” which existed before synthetic media, is the most common type of consumer complaint received by the U.S. Federal Trade Commission (FTC). Artificial intelligence (AI), meanwhile, enables new and more complex forms of digital impersonation. No one can guarantee that the first major financial crime in the form of a deepfake attack will not use AI-fabricated video and audio of people in financially important positions. When deepfakes are used to steal an individual’s identity, for example, a phone call in the victim’s synthesized voice can trick an executive assistant or financial adviser into initiating a fraudulent bank transfer. Deepfake audio or video can also be used to open bank accounts under false identities and facilitate money laundering.

Deepfakes can also facilitate identity theft on a larger scale. Criminals can use deepfakes in social engineering operations to gain unauthorized access to large databases of personal information. For example, an e-commerce company employee might receive a deepfake phone call that synthesizes the voice of an IT administrator and asks for a username and password. In this scenario, the deepfake is the initial phishing step for obtaining credentials; with the access gained this way, identity theft on a much larger scale follows in the second stage.
As voice cloning evolves, deepfake’s role in identity theft will increase
Audio phishing using deepfakes is technically feasible today (see Figure 1). Current technology enables realistic audio cloning that can be controlled in real time with keyboard inputs. One leading developer of commercial voice synthesis claims its technology can convincingly clone a person’s voice from just five minutes of recorded speech, while algorithms that can produce rough cloned voices from three seconds of sample audio are also known to exist.

The small amount of sample audio required means that, in theory, many people could have their voices cloned and used for identity theft or other malicious purposes. Identity thieves can clone a victim’s voice from a video on social media, call the victim by phone or through online voice apps, or secretly record the victim’s conversations with others.
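One defensive counterpart to voice cloning is speaker verification: compare an embedding of the incoming voice against an embedding enrolled for the legitimate speaker, and accept only if they are close enough. The sketch below is a minimal illustration with made-up three-dimensional vectors; real systems derive high-dimensional embeddings from a trained speaker-encoder network, and a good clone may in fact land close to the genuine embedding, which is precisely the arms race described above:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, incoming, threshold=0.8):
    """Accept the call only if the incoming voice embedding is close
    enough to the embedding enrolled for this person."""
    return cosine_similarity(enrolled, incoming) >= threshold

# Hypothetical embeddings (real ones come from a speaker-encoder model):
enrolled_ceo = [0.9, 0.1, 0.3]
same_ceo_new_call = [0.85, 0.15, 0.32]
crude_clone = [0.3, 0.8, 0.5]

print(verify_speaker(enrolled_ceo, same_ceo_new_call))  # → True
print(verify_speaker(enrolled_ceo, crude_clone))        # → False
```

The threshold trades false accepts against false rejects; against high-quality clones, such checks are typically combined with liveness tests and out-of-band confirmation rather than trusted on their own.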
Banks seek defense against deepfake in collaboration with Fintech firms
According to a University College London report published last year, fake audio and video content ranks first among 20 criminal uses of AI in terms of the harm it can cause, the profit potential it offers, and its ease of use and production. The Covid-19 pandemic also makes people more vulnerable to impersonation scams: with face-to-face contact restricted by quarantine, employees are subjected to more deepfake attacks involving fraudulent payment confirmations.
In the fintech industry, security firms that use AI in fraud-prevention technologies against both video and audio deepfakes stand out. Many cybersecurity developers offer liveness detection technology to clients in the financial sector to spot artificial representations of real customers. Liveness detection plays an important role in catching identity fraud during new-customer onboarding.

Financial institutions, especially banks, are forming new collaborations with fintech companies in the face of the growing danger. According to the Financial Times (FT), at the beginning of September the UK-based multinational investment bank and financial services conglomerate HSBC joined the users of a biometric identification system developed by Mitek and offered in partnership with Adobe. HSBC has integrated the system, which checks new customers’ identities using live images and electronic signatures, into its US retail banking operation. Users of Mitek’s biometric system include Chase, ABN Amro, Caixa Bank, Mastercard and Anna Money. The British fintech company iProov has opened a new Security Centre in Singapore, aimed at detecting and blocking deepfake videos used to impersonate customers. Rabobank, ING and Aegon are among the organizations using this technology to make sure they are dealing with real people.
Bank customers are also aware of the dangers. In an iProov survey of 2,000 consumers in the US and UK, 85% of respondents said deepfakes will make it harder to trust what they see online, and three-quarters said they will make authentication more important.
Disinformation is at the heart of fraud
Fraud was the second most common complaint received by the FTC even before synthetic media. Attackers impersonate a “public official, an endangered relative, a well-known business or technical support professional” to pressure the victim into paying. Losses caused by impostor fraud in the US in 2019 were put at $667 million.

Deepfakes can increase the realism and credibility of fraud. Scammers can clone the voice of a specific person, such as a victim’s relative or a prominent government official known to many victims. Deepfakes could become a huge opportunity for skilled scammers who do extensive online research to map family relationships and develop convincing voice imitations. In fact, the scams do not have to be completely convincing: manipulating victims’ feelings and creating a false sense of urgency papers over gaps and inconsistencies. That is why the elderly are so widely chosen as victims.
Pandemic creates favorable climate for financial crimes
In the second half of last year, smartphone maker BlackBerry’s software group warned that the pandemic was exposing more people to impersonation scams. Eric Milam, BlackBerry’s vice president of research operations, said criminals record real customers’ voices and synthesize new ones to attempt phone banking scams.

The fintech company Silent Eight has also warned that “phishing” scams used to obtain personal data can be made entirely convincing with fake audio and video. Recent phishing attempts are estimated to have a 60 to 70 percent success rate, and using AI to personalize the message, with names or references only friends or family would know, can push the success rate toward 100 percent. Matthew Leaney, Chief Financial Officer of Silent Eight, said: “If an elderly person sees a video that appears to come from his granddaughter, addressing him as she usually does, that moment is enough for him to believe. If you grew up believing what you saw, you simply trust it. The societal impact is dire.”
Cyber extortion feeds off blackmail
Cyber extortion is the age-old crime of blackmail turned into an IT crime. In a cyber extortion scheme, criminals claim to have embarrassing information about the victim and threaten to release it unless they are paid or their demands are met. The information is, by its nature, often sexual: for example, blackmailers claim to hold nude pictures or videos of the victim.
In some cases, the blackmail material is real, obtained through computer hacking. More often, the plan is a bluff. To make it more personalized and convincing, cyber extortionists sometimes cite a victim’s password or phone number in their messages, typically taken from a publicly available data dump. In 2019, U.S. residents reported $107 million in losses from cyber extortion, excluding ransomware, according to the Federal Bureau of Investigation (FBI).
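Because extortionists mine public data dumps for real passwords, one practical check is whether a credential already appears in a known breach. The real Pwned Passwords range API supports this privately via k-anonymity: the client sends only the first five hex characters of the password’s SHA-1 digest and matches the suffix locally. The sketch below computes just that local prefix/suffix split; it deliberately makes no network call, and the function name is our own:

```python
import hashlib

def breach_query_parts(password):
    """Split a password's SHA-1 digest into the 5-character prefix that
    would be sent to a breach-lookup service and the 35-character suffix
    matched locally against the returned candidate list (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = breach_query_parts("password123")
print(len(prefix), len(suffix))  # → 5 35
```

The service never learns the full hash, so even a breached lookup service cannot recover which password was checked; the client simply scans the returned suffix list for a match.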
Fintech agenda of 2021

Aarti Samani, senior vice president of product and marketing at iProov, assesses the financial sector’s prospects and the fintech agenda for 2021 in light of the digitization process:
Banking regulators in different regions, including Europe and the Far East, will allow automated biometrics in place of video calls for remote know-your-customer (KYC) processes.
Just like the audio fraud carried out by cloning the voice of a high-profile CEO in 2019, several financial crime and money laundering scandals stemming from the use of deepfakes in video calls will surface by the end of 2021.
Concrete steps by several countries, including the United States, toward state-sponsored digital identities could be on the agenda. Through effective authentication, this could help financial and government agencies reduce risks such as impersonation of bank customers and fraud in government support programs.
For the digitally inexperienced, simpler authentication methods will be required. Accordingly, three developments will occur in 2021: first, the password, which has long been the bane of many people’s online interactions, will give way to simpler methods of authentication.
Second, as many as 100 million people over the age of 70 around the world will have digital identities, and the concept of “digital surrogacy” will soon become a reality. Third, since older or less experienced first-time technology users are also the most vulnerable to online manipulation, developing online protections for them will become an important agenda item.
The deepfake arms race will intensify in 2021, and we can expect an explosion in the quality and quantity of deepfakes. Beyond entertainment and satire, we will see them used for disinformation, deep fraud and trolling. Hordes of real-seeming “fake people” will share disinformation on an enormous scale, making society believe that thousands of people hold a controversial view.
Creating a very high-quality, sophisticated deepfake will become increasingly easy. A very complex process that was once only really possible at Hollywood film studios is evolving into something any teenager can practice masterfully at home.

A balance between panic and security is essential
iProov, a fintech company specializing in authentication, has released a report on its survey of 105 cybersecurity decision makers at UK-based financial institutions. 13% of the firms surveyed had never heard the term deepfake; 31% had no plans to fight it or were unsure; 28% had implemented measures. 4% of respondents said deepfakes posed no threat to their company, while 40% said they posed a “slight threat.”
Today, synthetic media attacks still fall far short of their huge financial threat potential. So the strategic question is how much this threat will grow over time. Proponents of the common view believe action is needed now to prevent serious risks.
However, those who represent responsible circles in finance, and who try to calm possible anxiety and panic in advance, argue that the threat will not cause great harm.
Of course, time will tell who is right, but if those issuing the warnings are right, it will be too late.
The post Deepfake-Backed Financial Crimes Spread More Easily During Pandemic first appeared on Deepware - Scan & Detect Deepfake Videos With a Simple tool.