Maestro: Multi-Level Attack and Defense Simulation Environment for Artificial Intelligence Education and Research
Summary
Artificial intelligence (AI) techniques, particularly machine learning (ML), are increasingly integrated into safety- and security-critical applications such as autonomous vehicles and malware detection. However, research has shown that AI techniques can be vulnerable to cyber-attacks such as adversarial perturbation and data poisoning, which can lead to catastrophic outcomes when the decisions made by AI systems are manipulated. This project aims to promote robust AI through synergistic efforts in AI, cybersecurity, and education. 1) A new platform named Maestro will be developed, providing a unified environment to simulate and evaluate attacks and defenses on AI. 2) Maestro will be integrated into undergraduate and graduate courses at the University of California, Irvine and made publicly available to researchers and educators. 3) Maestro will be leveraged to conduct new research on robust AI, including in application domains that are currently underserved, such as malware detection.
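
To give a concrete sense of the adversarial-perturbation attacks mentioned above, the sketch below shows a minimal fast gradient sign method (FGSM)-style perturbation in PyTorch. This is an illustrative example only, not code from the Maestro platform; the toy model, tensor shapes, and function name `fgsm_perturb` are assumptions made for the sketch.

```python
# Minimal FGSM-style adversarial perturbation sketch (illustrative, not Maestro code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of `x` that increases the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Step in the direction of the loss gradient's sign, then keep pixels valid.
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()


if __name__ == "__main__":
    # Toy stand-in classifier and random "images", purely for illustration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)       # 4 RGB images with values in [0, 1]
    y = torch.randint(0, 10, (4,))     # arbitrary labels
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())     # perturbation is bounded by eps
```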
People
- Zhou Li. PI on this project, project leader and professor (UCI EECS).
- Sergio Gago-Masague. Co-PI on this project, project leader and professor (UCI CS).
- Sameer Singh. Co-PI on this project, project leader and professor (UCI CS).
- Junlin Wang. Student Researcher (UCI CS).
- Jiacen Xu. Student Researcher (UCI EECS).
- Hamza Errahmouni Barkam. Student Researcher (UCI CS).
- Margarita Geleta. Student Researcher (UCI CS).
- Manikanta Loya. Student Researcher (UCI CS).
- Ishana Patel. Student Researcher (UCI CS).
Project Timeline
Task | Projected Year | Status |
---|---|---|
Create and open a project web site. | Year 1 | done |
Set up a GitHub repo for Maestro and release part of the code. | Year 1 | done |
Create the user interface of Maestro. | Year 1 | done |
Implement the attacks against text data. | Year 1 | done |
Implement the attacks against image data. | Year 1 | done |
Implement the attacks against cyber-security applications. | Year 1 | done |
Implement the defenses. | Year 1 | done |
Create the course syllabus for the project-based course CS 175 at UCI. | Year 2 | done |
Teach CS 175 at UCI. | Year 2 | done |
Collect and evaluate student feedback on CS 175. | Year 2 | done |
Attend NSF SaTC PI meeting. | Year 2 | done |
Submit papers about Maestro (platform, education, etc.). | Year 2 | done |
Update the Maestro repo with well-organized instructions. | Year 3 | done |
Attend NSF EDU PI meeting. | Year 3 | done |
Publications
- [DSN23] Jiacen Xu, Zhe Zhou, Boyuan Feng, Yufei Ding and Zhou Li.
On Adversarial Robustness of Point Cloud Semantic Segmentation.
In Proceedings of the 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks, June, 2023.
- [SIGCSE23] Margarita Geleta, Jiacen Xu, Manikanta Loya, Junlin Wang, Sameer Singh, Zhou Li and Sergio Gago-Masague.
Design Factors of Maestro: A Serious Game for Robust AI Education (poster).
In Proceedings of the Technical Symposium on Computer Science Education, Toronto, Canada, March, 2023.
- [EAAI23] Margarita Geleta, Jiacen Xu, Manikanta Loya, Junlin Wang, Sameer Singh, Zhou Li and Sergio Gago-Masague.
Maestro: A Gamified Platform for Teaching AI Robustness.
In Proceedings of the 13th AAAI Symposium on Educational Advances in Artificial Intelligence, February, 2023.
- [ACSAC22] Qifan Zhang, Junjie Shen, Mingtian Tan, Zhe Zhou, Zhou Li, Qi Alfred Chen and Haipeng Zhang.
Play the Imitation Game: Model Extraction Attack against Autonomous Driving Localization.
In Proceedings of the 38th Annual Computer Security Applications Conference, December, 2022.
- [ACSAC21] Mingtian Tan, Zhe Zhou and Zhou Li.
The Many-faced God: Attacking Face Verification System with Embedding and Image Recovery.
In Proceedings of the 37th Annual Computer Security Applications Conference, online, December, 2021.
- [TIFS21] Yicheng Zhang, Rozhin Yasaei, Hao Chen, Zhou Li and Mohammad Abdullah Al Faruque.
Stealing Neural Network Structure through Remote FPGA Side-channel Analysis.
In IEEE Transactions on Information Forensics and Security, 2021.
- [NAACL21] Eric Wallace, Tony Zhao, Shi Feng and Sameer Singh.
Concealed Data Poisoning Attacks on NLP Models.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021.
Outreach
- Course taught at UCI: CS 175 (Winter 2022 and Spring 2022).
- Poster and highlight slides presented at the 2022 SaTC PI meeting.