The "Test of Time" award has been presented at ICML since 2010 and recognizes those papers that have achieved the greatest impact on the scientific community in the ten years since they were presented at ICML. This year, the award went to Prof. Dr. Pavel Laskov and his co-authors for their paper "Poisoning Attacks against Support Vector Machines," which was ranked as the most significant of a total of 244 papers presented at ICML 2012. The authors join researchers from the University of Amsterdam, ETH Zurich, Harvard University, Amazon Research, INRIA, Facebook Research, Google Brain and DeepMind who have received the award in the past five years.
What was the award-winning article about?
By the early 2000s, machine learning algorithms had already established themselves as the main tool for data analysis in various Internet technologies. They therefore also play an important role in modern security technologies, e.g., to detect new threats or to expose ever-changing phishing sites. As early as 2006, however, it was suggested that attackers could overcome learning-based detection techniques by manipulating their data, even though the first such attacks were directed against very simple learning algorithms. "Can attacks using manipulated data also overcome mainstream algorithms, which are significantly more complex?" wondered Laskov, Biggio and Nelson, who were working together at the University of Tübingen in Germany in 2011. In 2013, the same authors also conceived the first attack that could bypass an already trained model. In 2014, the same phenomenon was independently uncovered in another paper by researchers from Google, New York University and the University of Montreal, and interpreted as "intriguing properties" of neural networks.
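To make the threat concrete, the following is a minimal sketch of what a data-poisoning attack looks like, written in Python with scikit-learn on synthetic data. It is not the gradient-based attack of the award-winning paper: it merely flips the labels of a few training points near the decision boundary before the SVM is retrained, and all data and parameters are illustrative.

```python
# A minimal, illustrative sketch of a data-poisoning attack (hypothetical
# setup using scikit-learn on synthetic data). Unlike the award-winning
# paper, which crafts poisoning points by gradient ascent on the validation
# error, this toy version simply flips the labels of a few training points
# near the decision boundary and retrains the SVM on the manipulated data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data standing in for, e.g., benign vs. malicious samples.
X, y = make_classification(n_samples=400, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Baseline: SVM trained on clean data.
clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
print(f"clean accuracy:    {clf.score(X_test, y_test):.3f}")

# Poisoning: flip the labels of the 10% of training points closest to the
# decision boundary, where a wrong label distorts the model the most.
margins = np.abs(clf.decision_function(X_train))
flip_idx = np.argsort(margins)[: len(y_train) // 10]
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

# Retraining on the manipulated data typically degrades test accuracy.
poisoned = SVC(kernel="linear", C=1.0).fit(X_train, y_poisoned)
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```

The attack described in the award-winning paper goes further: instead of flipping existing labels, it optimizes the placement of the injected points, which degrades the model far more effectively with the same number of manipulated samples.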
Since then, more than 5000 articles examining the security of artificial intelligence have appeared in journals and conferences. Ongoing work by Pavel Laskov and his colleagues at the Hilti Chair of Data and Application Security continues to address such issues; their most recent article, on the security of AI components in 5G network infrastructures, appeared in the IEEE Journal on Network and Service Management and was highlighted in news stories by TechTarget, an influential IT marketing service, as a potential new threat to 5G technologies.
What does this research mean for Liechtenstein?
The award-winning results of Pavel Laskov's paper belong primarily to basic research. They demonstrate that the University of Liechtenstein has the expertise to operate at the highest international level in teaching and further research projects in the fields of cybersecurity and data science. Moreover, the practical relevance for the security of Artificial Intelligence is not long in coming. In the European Commission's proposal for a legal framework for Artificial Intelligence, resilience against poisoning and manipulation attacks is mentioned as an essential requirement for the so-called "high-risk applications" of Artificial Intelligence; the implementation deadline is expected to be as early as 2024.