Amanda Norvelle | Thursday, October 17, 2024
Trust is vital to the widespread acceptance of AI across industries, but one barrier to increasing that trust is that the algorithms powering AI are vulnerable to attacks. And people know it.
David Woodruff, a professor in Carnegie Mellon University's Computer Science Department, and Samson Zhou, an assistant professor in the Department of Computer Science and Engineering at Texas A&M University, hope to change that. The pair received a National Science Foundation grant to strengthen scalable algorithms designed to handle and analyze large volumes of data.
"It's definitely a long-term goal to give people an algorithm that comes with a guarantee behind it," Woodruff said. "We'd like to be able to say, 'We promise you that this algorithm is robust against adversaries,' meaning that no matter how many queries you make to this algorithm, it's still going to give you the correct answer."
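Woodruff's guarantee describes the adversarial streaming model: an attacker who sees each answer gets to choose its next input accordingly. The toy Python sketch below, which is purely illustrative and not the researchers' construction, shows why that matters for randomized sketching. A small random ±1 linear sketch estimates the squared length of a long vector accurately when the stream of updates is fixed in advance, but an adaptive querier that keeps only the updates the sketch barely notices can drive the true value far above what the sketch reports. All class and function names here are hypothetical.

```python
import random


class L2Sketch:
    """Toy linear sketch: maintain y = Ax for a random +/-1 matrix A and
    estimate the squared l2 norm of x as ||y||^2 / k.  Unbiased for streams
    fixed in advance, but breakable by an adaptive querier.  Illustrative
    only -- not the algorithm from the article."""

    def __init__(self, n, k, seed):
        rng = random.Random(seed)
        self.n, self.k = n, k
        self.A = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(k)]
        self.y = [0.0] * k            # the sketch: k numbers instead of n
        self.x = [0.0] * n            # kept only so we can report the truth

    def update(self, i, delta):
        """Apply a +/- update to coordinate i of the underlying vector."""
        self.x[i] += delta
        for r in range(self.k):
            self.y[r] += self.A[r][i] * delta

    def estimate(self):
        """The answer a querier sees."""
        return sum(v * v for v in self.y) / self.k

    def true_norm_sq(self):
        """Ground truth, hidden from the sketch's point of view."""
        return sum(v * v for v in self.x)


def probe_growth(sk, i):
    """Query-only probe: apply a +1 update, read the public estimate,
    then undo the update.  Uses nothing but the sketch's interface."""
    before = sk.estimate()
    sk.update(i, 1.0)
    after = sk.estimate()
    sk.update(i, -1.0)
    return after - before


def run_oblivious(n=150, k=60, updates=300, seed=1):
    """Stream chosen in advance: the estimate tracks the true norm."""
    sk = L2Sketch(n, k, seed)
    rng = random.Random(seed + 1)     # stream randomness, independent of A
    for _ in range(updates):
        sk.update(rng.randrange(n), 1.0)
    return sk


def run_adaptive(n=150, k=60, rounds=300, probes=25, seed=1):
    """Adaptive adversary: each round it probes several coordinates,
    keeps the update the sketch notices least, and repeats.  The true
    norm climbs every round while the reported estimate stays small."""
    sk = L2Sketch(n, k, seed)
    rng = random.Random(seed + 2)
    for _ in range(rounds):
        candidates = [rng.randrange(n) for _ in range(probes)]
        sk.update(min(candidates, key=lambda i: probe_growth(sk, i)), 1.0)
    return sk
```

Running both with the same sketch shows the gap: the oblivious stream's estimate lands within a small constant factor of the truth, while the adaptive adversary leaves the sketch reporting a small value even though the true norm has grown by hundreds. Defending against exactly this kind of adaptive probing, with provable guarantees and without giving up efficiency, is the flavor of problem the grant targets.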
In creating an algorithm that will be robust against attack, Zhou and Woodruff plan to develop new connections between mathematics and theoretical computer science. They will also look to the field of cryptography (data encryption) for ideas. Through their research, they hope to understand how to strengthen algorithms against attack while maintaining efficiency.
The pair know it will be difficult to prove both that an algorithm is robust against an unbounded variety of attacks and that it will reliably return an accurate answer.
"Sometimes it's not possible to design algorithms to guarantee adversarial robustness," Zhou said. "Sometimes there is no way to promote adversarial robustness if you don't have enough space. In that case, we should stop trying to design algorithms that meet these guarantees and instead look for other ways around these problems."
Learn more about their work in the full story on the Texas A&M website.
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu