
Rising to a New Challenge
AI presents entirely new challenges to cybersecurity. New research from the Department of Electrical and Computer Engineering could help protect AI vision systems from hacks that distort what they see.
October 9, 2025 | Staff
To successfully defend against an attack, you first have to know your weaknesses.
That’s why cybersecurity teams worldwide often employ so-called “white hats” — hackers hired to try their hardest to break through a system’s defenses.
When it comes to AI, though, the cybersecurity field has some catching up to do. Researchers at NC State University helped illustrate that with findings published in December 2023 that showed AI tools were more vulnerable to targeted attacks than previously thought.
In what are known as “adversarial attacks,” bad actors manipulate the data being fed into an AI system in order to confuse it, effectively forcing it to make bad decisions.
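To make the idea concrete, consider the classic “fast gradient sign method,” an earlier and widely studied attack that is not the NC State technique: it nudges every pixel a tiny amount in whichever direction most increases the model’s error, a change far too small for a person to notice. A minimal sketch in PyTorch, with an illustrative epsilon value, might look like this:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.01):
        """Fast gradient sign method: shift every pixel by +/- epsilon in the
        direction that most increases the classification loss. A small epsilon
        keeps the change imperceptible to people, yet it can flip the model's
        prediction."""
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        return (x + epsilon * x.grad.sign()).detach()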
Given how commonly AI computer vision systems are used these days, such as in autonomous vehicles and medical imaging, it’s no exaggeration to say the consequences could be life-or-death. Maybe a self-driving car thinks a red light is green or fails to recognize a stop sign. Or perhaps an altered X-ray image leads to a misdiagnosis.
“That means it is very important for these AI systems to be secure. Identifying vulnerabilities is an important step in making these systems secure, since you must identify a vulnerability in order to defend against it,” says Tianfu Wu, an associate professor of electrical and computer engineering at NC State.
So, together with two graduate students, Wu set out to demonstrate a new way of attacking AI computer vision systems. In a research paper presented at this year’s International Conference on Machine Learning, Wu, Thomas Paniagua, a recent NC State Ph.D. graduate, and Chinmay Savadikar, a current Ph.D. student, showed that their new technique could manipulate all of the most widely used AI computer vision systems.
Wu and Paniagua were co-corresponding authors of the paper, titled “Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian,” which Savadikar co-authored.
The researchers tested the technique against four of the most widely used AI vision models: ResNet-50, DenseNet-121, ViT-B and DeiT-B. It proved effective at manipulating all four.
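All four architectures are publicly available, so this kind of robustness testing can be reproduced outside the lab. As a rough illustration (not part of the study itself), pretrained versions can be loaded from the torchvision and timm libraries; the checkpoint names below are those libraries’ defaults, not necessarily the exact weights the researchers evaluated:

    import timm
    import torchvision.models as tv

    # Pretrained versions of the four architectures named in the study.
    # Checkpoint choices are illustrative defaults, not the paper's exact weights.
    victims = {
        "ResNet-50": tv.resnet50(weights=tv.ResNet50_Weights.DEFAULT),
        "DenseNet-121": tv.densenet121(weights=tv.DenseNet121_Weights.DEFAULT),
        "ViT-B": timm.create_model("vit_base_patch16_224", pretrained=True),
        "DeiT-B": timm.create_model("deit_base_patch16_224", pretrained=True),
    }

    for name, model in victims.items():
        model.eval()  # attacks are run against frozen, inference-mode models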
Wu and his team dubbed their technique RisingAttacK, a series of operations that allows users to manipulate what an AI vision system “sees” while making the fewest changes necessary to an image.
After identifying all the visual features in an image, the program runs an operation to determine which features are most important to achieve the attack’s goal.
“For example, if the goal of the attack is to stop the AI from identifying a car, what features in the image are most important for the AI to be able to identify a car in the image?” Wu explains. “The end result is that two images may look identical to human eyes, and we might clearly see a car in both images. But due to RisingAttacK, the AI would see a car in the first image but would not see a car in the second image.”
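The paper’s title hints at the machinery underneath: the perturbation is built up iteratively from the right singular vectors of an “adversarial Jacobian,” the matrix describing how sensitive the model’s outputs are to each pixel of the input. The sketch below is a loose, simplified illustration of that idea in PyTorch, not the team’s released implementation (which is linked at the end of this article); with a single target logit the Jacobian has just one row, so its leading right singular vector reduces to the normalized gradient:

    import torch
    import torchvision.models as tv

    # Any differentiable classifier will do; ResNet-50 is one of the models tested.
    model = tv.resnet50(weights=tv.ResNet50_Weights.DEFAULT).eval()

    def most_sensitive_direction(x, target_class):
        """Input-space direction the target logit is most sensitive to. For a
        single scalar output the Jacobian is one row, so its leading right
        singular vector is simply the normalized gradient of that logit."""
        x = x.clone().requires_grad_(True)
        logit = model(x)[0, target_class]
        grad, = torch.autograd.grad(logit, x)
        return grad / (grad.norm() + 1e-12)

    def suppress_class(x, target_class, step=1e-2, iters=10):
        """Iteratively nudge the image against its most sensitive direction so
        the model stops 'seeing' the target class, keeping the total change
        small. (Projection back to a valid pixel range is omitted for brevity.)"""
        x_adv = x.clone()
        for _ in range(iters):
            d = most_sensitive_direction(x_adv, target_class)
            x_adv = (x_adv - step * d).detach()
        return x_adv

    # Illustrative usage: x is a preprocessed 1x3x224x224 image tensor and
    # target_class is the ImageNet index of the object to make "invisible."
    # x_adv = suppress_class(x, target_class=car_class_index)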
So what’s next now that Wu and his team have exposed exactly how AI vision systems can be attacked?
“Moving forward, the goal is to develop techniques that can successfully defend against such attacks,” Wu says.
The research team has made RisingAttacK publicly available so that the research community can use it to test neural networks for vulnerabilities: https://github.com/ivmcl/ordered-topk-attack.
This article is based on a news release from NC State University.