Coinbase Tests ChatGPT for Smart Contract Risk Assessment

Coinbase recently tested the effectiveness of ChatGPT in evaluating the risk levels of various tokens. To conduct the experiment, Coinbase’s employees analyzed 20 smart contracts of different tokens for potential risks and then compared their findings to ChatGPT’s results.

In 12 of the 20 cases, ChatGPT’s risk assessments matched those of the experts, indicating the potential benefits of using AI in this field. However, the remaining eight cases highlighted the limitations of the current technology.

In five instances, ChatGPT underrated the risk levels of high-risk tokens. Underestimating risk is the more dangerous failure mode, since it can lull users into treating unsafe assets as safe, whereas overestimating risk merely flags a safe asset for extra review.
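Assuming each contract received a discrete risk label from both the experts and the model, the comparison can be sketched as follows. The labels below are invented for illustration; only the 12/20 agreement rate and the five high-risk contracts rated too low mirror the reported figures:

```python
def agreement(expert_labels, model_labels):
    """Count contracts where the model's risk label matches the expert's."""
    matches = sum(e == m for e, m in zip(expert_labels, model_labels))
    return matches, len(expert_labels)

# Invented labels for 20 hypothetical contracts. The five "low" labels the
# model assigns to expert-rated "high" contracts represent the dangerous
# underrated cases the article describes.
expert = ["high"] * 8 + ["medium"] * 6 + ["low"] * 6
model = (["high"] * 3 + ["low"] * 5          # 3 matches, 5 underrated
         + ["medium"] * 4 + ["high"] * 2     # 4 matches, 2 overrated
         + ["low"] * 5 + ["medium"] * 1)     # 5 matches, 1 overrated

matches, total = agreement(expert, model)
print(f"{matches}/{total} assessments matched")  # prints "12/20 assessments matched"
```

This also shows why a raw match rate is a coarse metric: it weighs a harmless overestimate the same as a dangerous underestimate.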

Consequently, Coinbase’s analysts believe that the current version of ChatGPT cannot replace manual testing of smart contracts, as the AI model requires more contextual data to conduct its analysis accurately.

Furthermore, Coinbase’s experts found that ChatGPT focuses too much on the comments in the code of smart contracts rather than the program’s logic, reducing the tool’s accuracy.
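The comments-versus-logic failure mode is easy to illustrate. In the toy sketch below (Python rather than Solidity, purely for brevity; the class and names are hypothetical), the docstring promises an owner-only restriction that the code never enforces, so a reviewer who trusts comments over logic would misjudge the risk:

```python
class TokenVault:
    """Toy stand-in for a smart contract (hypothetical example)."""

    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

    def withdraw(self, caller, amount):
        """Withdraw funds. NOTE: only the contract owner may call this."""
        # The docstring promises an owner check, but the logic never performs
        # one: any caller can drain the vault.
        if amount <= self.balance:
            self.balance -= amount
            return amount
        return 0


vault = TokenVault(owner="alice", balance=100)
stolen = vault.withdraw(caller="mallory", amount=100)  # succeeds despite the docstring
```

An assessment keyed on the docstring would rate this function low-risk; only an analysis of the control flow reveals the missing access check.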

While ChatGPT shows potential for speeding up risk assessments, relying solely on the AI model's findings for asset security reviews remains risky.

Coinbase's representatives conclude that further engineering could improve ChatGPT's accuracy, but the tool is not yet reliable enough to replace manual testing entirely.

Unfortunately, the hype around AI in crypto has also attracted scammers. In a recent case, the developers of the Harvest Keeper project, who claimed to use AI and neural networks to optimize trading, ran away with over $1 million in user assets. The incident underscores the importance of proper due diligence and risk management in the cryptocurrency industry.