AI (Artificial Intelligence) has significantly revolutionized software engineering with several advanced AI tools like ChatGPT and GitHub Copilot, which help boost developers’ efficiency.
Besides this, two types of AI-powered coding assistant tools emerged in recent times, and here we have mentioned them:-
CODE COMPLETION Tool
CODE GENERATION Tool
Cybersecurity researchers Sanghak Oh, Kiho Lee, Seonhye Park, Doowon Kim, Hyoungshick Kim from the following universities recently identified that poisoned AI coding assistant tools open the application to hack attack:-
Department of Electrical and Computer Engineering, Sungkyunkwan University, Republic of Korea
Department of Electrical Engineering and Computer Science, University of Tennessee, USA
Poisoned AI Coding Assistant
AI coding assistants are transforming software engineering, but they are vulnerable to poisoning attacks. Attackers inject malicious code snippets into training data, leading to insecure suggestions.
This poses real-world risks, as researchers’ study with 238 participants and 30 professional developers reveals. The survey shows widespread tool adoption, but developers may underestimate poisoning risks.
In-lab studies confirm that poisoned tools can influence developers to include insecure code, highlighting the urgency for education and enhanced coding practices in the AI-powered coding landscape.
Code and model poisoning attacks (Source – Arxiv) Attackers aim to deceive developers through generic backdoor poisoning attacks on code-suggestion deep learning models. This method manipulates models to suggest malicious code without degrading overall performance and is hard to detect.
Attackers leverage access to the model or its dataset, often sourced from open repositories like GitHub, and here, the detection is challenging due to model complexity.
Mitigation strategies include:-
Improved code review
Secure coding practices
Fuzzing
Static analysis tools can help detect poisoned samples, but attackers may craft stealthy versions. After the tasks, participants had an exit interview with two sections:-
1. Demographic and security knowledge assessment, including a quiz and confidence ratings.
2. Follow-up questions explored intentions, rationale, and awareness of vulnerabilities and security threats, such as poisoning attacks in AI-powered coding assistants.
Recommendations
Here below we have mentioned all the recommendations:-
Developer’s Perspective.
Software Companies’ Perspective.
Security Researchers’ Perspective.
User Studies with AI-Powered Coding Tools.
The post Poisoned AI Coding, Assistant Tools Opens Application to Hack Attack appeared first on GBHackers on Security | #1 Globally Trusted Cyber Security News Platform .
Top News
-
U.K. Hacker Linked to Notorious Scattered Spider Group Arrested in Spain
Law enforcement authorities have allegedly arrested a key member of the notorious cybercrime group called Scattered Spider. The individual, a...
-
Lừa đảo mạo danh ‘nở rộ’ trên không gian mạng và ngày càng tinh vi
Dù hình thức không mới song lừa đảo mạo danh hiện vẫn đang khiến nhiều người dân tại Việt Nam và trên thế giới sập bẫy, bị chiếm đoạt tài sản.
-
Lừa đảo đánh cắp mã OTP tinh vi, tấn công mạng tận dụng lỗ hổng mới
Xuất hiện lừa đảo đánh cắp mã OTP tinh vi; Hacker gia tăng tốc độ tận dụng các lỗ hổng mới,... là những thông tin công nghệ trong nước nổi bật...
-
Sleepy Pickle Exploit Let Attackers Exploit ML Models And Attack End-Users
Hackers are targeting, attacking, and exploiting ML models. They want to hack into these systems to steal sensitive data, interrupt services, or...
-
Sleepy Pickle - Kỹ thuật tấn công mới nhắm vào các mô hình học máy
Sleepy Pickle là một kỹ thuật tấn công mới lạ và bí mật nhắm vào chính mô hình ML (Machine Learning) thay vì hệ thống cơ bản. {...