- New AI deepfake scam video of Charles Hoskinson emerges.
- Hoskinson reiterates warnings that the technology will soon become highly convincing.
- A “chain of evidence” is needed to combat deepfakes.
AI technology is advancing rapidly, bringing capabilities once confined to science fiction into reality. While current generative AI can still produce glitchy imperfections, Cardano founder Charles Hoskinson warned that the technology is progressing faster than many appreciate. He cautioned that AI-generated scam videos will soon be indistinguishable from reality.
Hoskinson recently sounded the alarm about an AI-generated scam video that featured his image. Although the video was far from perfect, with odd facial expressions and out-of-sync lip movements, Hoskinson warned that we are about two years away from AI deepfakes that will be extremely convincing.
The scam video in question featured an AI-generated Hoskinson promoting a Cardano giveaway. The video instructed viewers to scan an on-screen QR code and follow the instructions on the resultant website. To further convince viewers, the AI-generated Hoskinson stated that the giveaway was a reward to the community that has “stood by us and believed in our vision.”
“We are here because of you, and we want to ensure that you share in our success,” remarked the AI-generated Hoskinson.
This recent AI impersonation scam was not the first time bad actors have exploited Hoskinson’s likeness. In June, Hoskinson warned that scammers had released an AI-generated video of him trying to convince viewers to send ADA for a medical blockchain initiative. The overriding concern remains what can be done to combat the problem.
Combating the Scammers
Combating AI scammers is tricky, particularly as advancements in AI technology are trending toward creating perfectly fabricated videos and audio. However, Hoskinson believes emerging blockchain capabilities could combat this threat. He suggested using distributed ledgers to immutably store a “chain of evidence” from initial media capture to final upload, allowing authentication of unaltered originals.
Under this proposal, Hoskinson noted that signing generative assets onto ledgers at creation time could make them AI-proof, providing verifiable digital watermarks traceable to legitimate devices.
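The article does not specify how such a chain of evidence would be implemented, but the core idea of linking each processing step of a media file back to its original capture can be illustrated with a simple hash chain. The sketch below is purely illustrative: the `EvidenceChain` class and its record format are assumptions, not Hoskinson's design, and a real system would add device signatures and anchor the records on an actual ledger.

```python
import hashlib
import json
import time


def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()


class EvidenceChain:
    """Illustrative append-only chain of records tracing a media file
    from initial capture to final upload. Each record commits to the
    media's hash and to the previous record, so any later alteration
    breaks verification."""

    def __init__(self):
        self.records = []

    def append(self, media_bytes: bytes, step: str) -> dict:
        record = {
            "step": step,  # e.g. "capture", "edit", "upload"
            "media_hash": sha256_hex(media_bytes),
            "prev_hash": self.records[-1]["record_hash"] if self.records else None,
            "timestamp": time.time(),
        }
        # Hash the record body itself so tampering with any field is detectable.
        record["record_hash"] = sha256_hex(
            json.dumps(record, sort_keys=True).encode()
        )
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every record hash and check the back-links."""
        prev = None
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "record_hash"}
            if rec["prev_hash"] != prev:
                return False
            if sha256_hex(json.dumps(body, sort_keys=True).encode()) != rec["record_hash"]:
                return False
            prev = rec["record_hash"]
        return True
```

Anchoring each `record_hash` on a public ledger would make the history immutable: a viewer could then confirm that a clip's hash matches a record whose chain starts at a legitimate capture device, while a deepfake would have no such provenance.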
On the Flipside
- Broader ethical implications exist around generative AI and how it could be used to manipulate public opinion.
- As with any new technology, regulations and security practices tend to lag behind malicious uses.
- Society’s general mindset of “seeing is believing” needs to change to combat increasingly sophisticated frauds.
Why This Matters
As Hoskinson warns, AI advancements lower the barriers to perpetrating convincing fraud, enabling bad actors to more effectively manipulate communities. The realism of AI-fabricated media raises deep concerns about misinformation dynamics in the future.
Read about Hoskinson’s previous warning on AI-generated scam videos here:
Cardano Founder Charles Hoskinson Alerts Community of AI Scams
Find out more on the revival of Billy Markus’ “Bells” blockchain here:
Dead Dogecoin Sister Token “Bells” Resurrected