CyLab Researchers Develop Taxonomy for AI Privacy Risks

Michael Cunningham
Friday, April 19, 2024

AI practitioners, such as the developers of the technologies behind smart speakers, can use a new taxonomy from HCII researchers as a guide to help protect privacy when they're creating products and services.

Privacy is paramount when developing ethical artificial intelligence technologies. But as advances in AI outpace regulation, the responsibility for reducing privacy risks in goods and services that incorporate these technologies falls primarily on developers.

Tangibly defining AI-driven privacy risks so developers can address them early in the research and development process is vital. While a privacy taxonomy — a structured way of categorizing privacy risks — with a well-established, research-driven foundation already exists, groundbreaking advances in AI are likely to bring with them unprecedented privacy risks.

New research from the School of Computer Science's Human-Computer Interaction Institute (HCII) hopes to mitigate these risks.

"Practitioners need more guidance on how to protect privacy when they're creating AI products and services," said HCII Assistant Professor Sauvik Das. "There's a lot of hype about what risks AI does or doesn't pose and what it can or can't do. But there's not a definitive resource on how modern advances in AI change privacy risks in some meaningful way, if at all."

In their paper, "Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks," Das and a team of researchers seek to build the foundation for this definitive resource.

The research team constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. The team aimed to codify how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or did not meaningfully alter known risks.

Das and the team used previous work in this area as a baseline taxonomy of traditional privacy risks predating modern advances in AI. They then cross-referenced the documented AI privacy incidents against that baseline to see whether, and how, they fit within it. The team identified 12 high-level privacy risks that AI technologies either created or exacerbated.

"Our hope is that this taxonomy gives practitioners a clear roadmap of the types of privacy risks that AI, specifically, can entail," Das said.

Das and the team will present their findings at the Association for Computing Machinery's (ACM's) Conference on Human Factors in Computing Systems (CHI 2024) in May in Honolulu.

Read more about their work on the CMU CyLab Security and Privacy Institute website.

For More Information

Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu