Manga Sridhar Akella, Program Manager (Global Cybersecurity Services and Operations) – Yash Technologies
One advantage for defenders, as Jeffrey points out, is that AI solutions often depend on large volumes of data and costly compute resources, which may be a barrier to entry for certain types of attackers.
Akella, however, believes that AI brings substantial risks, including AI-related cybercrime. He also holds that AI can be employed both to defend and to attack cyber infrastructure, as well as to expand the attack surface that hackers can target. “The rapid adoption of AI and ML by global enterprises could lead to a rise in a new breed of smarter attacks. AI initiatives present a range of potential vulnerabilities, including malicious corruption or manipulation of the training data, implementation, and configuration. AI systems are authorized to make deductions and decisions automatically, without human intervention. As more machine-learning or AI systems are connected to one another, the risks rise sharply.”
“Attackers also employ a covert technique, invisible to the naked eye, called perturbation: a misplaced pixel or a white-noise pattern that can convince a bot that an object in an image is something else,” adds Akella. “Chatbot-related cybercrimes are another avenue: AI-based chatbots could automate ransomware.”
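The perturbation attack Akella describes can be illustrated with a minimal sketch. The classifier, weights, and epsilon budget below are all hypothetical stand-ins: real attacks target deep image models, but the mechanics are the same — nudge the input a tiny amount along the gradient of the loss (the sign-of-gradient approach is known as the fast gradient sign method, FGSM).

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # hypothetical classifier weights
x = rng.normal(size=16)   # a "clean" input, e.g. a flattened image patch

def predict(v):
    # Logistic score: values near 1 mean the model is confident
    # the input belongs to the positive class.
    return 1.0 / (1.0 + np.exp(-(w @ v)))

# For a linear model, the gradient of the score w.r.t. the input is
# proportional to w, so sign(w) gives the perturbation direction.
eps = 0.25                       # per-pixel budget: small, hard to see
x_adv = x - eps * np.sign(w)     # push the score toward the other class

print(predict(x), predict(x_adv))  # adversarial score is strictly lower
```

Each component of the input moves by at most `eps`, which for image data would be a faint, noise-like pattern, yet the model's confidence shifts measurably; on deep networks, the same idea can flip the predicted label outright.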