Projects

1. Robust AI Guided by the Immune System

Driven by rapid advances in neural networks (NNs), artificial intelligence has achieved remarkable success in many fields. However, small perturbations invisible to humans can be deliberately added to inputs to cause NNs to make incorrect predictions. Moreover, attackers can tailor different perturbation strategies to bypass existing NNs’ learning methods and defenses. An open question, therefore, is how to make NNs robust to such adversarial perturbations. Humans have a highly evolved immune system that can defend against many threats, including ones never encountered before. Inspired by this powerful defense, the project aims to infuse key immune-system principles into NNs and thereby reduce the substantial gap between existing machine-centric robust learning frameworks and robust immune models. The project draws on techniques such as population-based optimization, robust training, and knowledge distillation; an illustrative example of the threat model appears below.
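To make the threat concrete, the sketch below shows one standard way such perturbations are crafted, the fast gradient sign method (FGSM). It is a minimal illustrative baseline only, not the immune-inspired defense this project develops; the model, input, and label names are hypothetical PyTorch placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=8 / 255):
        """Return x plus a small, human-imperceptible perturbation that raises the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that most increases the loss, bounded by epsilon.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Robust (adversarial) training, one of the techniques listed above, typically feeds such perturbed examples back into training so the network learns to classify them correctly.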

2. Robust AI with Theoretical Guarantee

The project aims to (1) develop mathematical tools that support reliable implementations of AI algorithms and (2) develop AI algorithms capable of making decisions and performing tasks in the face of uncertainty and perturbations. It seeks to provide theoretical guarantees for the robustness of these algorithms, ensuring that they maintain their performance and accuracy even when faced with unexpected or malicious inputs in an unreliable environment. The work combines advanced techniques from mathematics, statistics, and computer science to make AI systems more reliable, secure, and trustworthy. The ultimate goal is AI systems that are robust and reliable in real-world scenarios, enabling their safe and ethical deployment across a wide range of industries and applications; one concrete flavor of such a guarantee is sketched below.
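As one example of what a theoretical guarantee can look like, the sketch below implements prediction with randomized smoothing (in the style of Cohen et al., 2019), where a majority vote over Gaussian-noised copies of an input yields a classifier whose robustness radius can be certified from the vote margin. This is a hedged illustration of the general idea, not the project's specific algorithm; all names are placeholders, and x is assumed to be a single input with a batch dimension.

    import torch

    @torch.no_grad()
    def smoothed_predict(model, x, num_classes, sigma=0.25, n_samples=100):
        """Majority-vote prediction of the Gaussian-smoothed classifier."""
        votes = torch.zeros(num_classes)
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)      # random Gaussian perturbation
            pred = model(noisy).argmax(dim=-1).item()    # base classifier's prediction
            votes[pred] += 1
        return int(votes.argmax())                       # class with the most votes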

3. Backdoor Detection and Mitigation

Backdoor attacks are a category of attacks on deep learning models in which an attacker inserts a hidden trigger pattern into the training data, causing the model to misclassify inputs that contain the pattern. Such attacks can be difficult to detect and mitigate. This project aims to detect and mitigate backdoors across the different learning phases (data processing, training, and inference); a toy example of the poisoning step is sketched below.
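For intuition, the sketch below shows the simplest form of backdoor data poisoning: a small trigger patch is stamped onto a fraction of training images and their labels are flipped to an attacker-chosen target class. Detection and mitigation then amount to finding and breaking this coupling between trigger and target label. The function and its parameters are illustrative assumptions (images in NCHW format with values in [0, 1]), not an actual attack used in the project.

    import torch

    def poison_batch(images, labels, target_class=0, poison_rate=0.1, patch_size=3):
        """Stamp a white square trigger on a random fraction of images and relabel them."""
        images, labels = images.clone(), labels.clone()
        n_poison = int(poison_rate * images.shape[0])
        idx = torch.randperm(images.shape[0])[:n_poison]
        images[idx, :, -patch_size:, -patch_size:] = 1.0  # trigger in the bottom-right corner
        labels[idx] = target_class                        # flip labels to the target class
        return images, labels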

4. Privacy Protection in AI

A major concern is the privacy exposure created by the data-intensive nature of AI, which can have destructive consequences, e.g., attackers locking you out of your own smart-lock-equipped home. There are two fundamental challenges in preserving privacy in AI: (1) privacy protection often stands at odds with AI’s data requirements; (2) privacy must be considered along the entire end-to-end pipeline, from data ingestion to model applications. The project will develop methods that preserve privacy throughout the AI workflow without compromising state-of-the-art performance; one representative building block is sketched below.
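One widely used family of methods here is differential privacy. The sketch below shows the core step behind DP-SGD-style training: clip each example's gradient to bound its individual influence, then add calibrated Gaussian noise before averaging. It is a minimal sketch under assumed names (per_example_grads as flattened per-example gradient tensors); privacy accounting and the rest of the training pipeline are omitted.

    import torch

    def private_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
        """Clip each example's gradient and add Gaussian noise (DP-SGD-style aggregation)."""
        clipped = []
        for g in per_example_grads:                        # one flattened gradient per example
            scale = min(1.0, clip_norm / (g.norm().item() + 1e-12))
            clipped.append(g * scale)                      # bound each example's contribution
        total = torch.stack(clipped).sum(dim=0)
        noise = noise_multiplier * clip_norm * torch.randn_like(total)
        return (total + noise) / len(clipped)              # noisy average used for the update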

5. Data-Driven Methods in Power Systems with Physics Constraints

The project aims to develop advanced machine learning and data-driven algorithms that optimize power system operations while respecting physical constraints. It seeks to leverage large amounts of data from various sources, including phasor measurement units (PMUs), SCADA systems, and weather forecasts, to predict and optimize power system behavior and improve energy efficiency. The project also incorporates physics-based models and constraints to ensure that the optimized solutions are safe, reliable, and stable. The ultimate goal is to develop scalable and efficient data-driven methods that can help transform the power sector toward a more sustainable and resilient future; a minimal sketch of one physics-constrained formulation follows.
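A common way to realize this combination is to add a physics-violation penalty to the training loss. The sketch below assumes a linearized (DC) power-flow balance B·θ = p and penalizes its residual alongside the ordinary data-fitting term; the network, matrix, and weighting names are illustrative placeholders rather than the project's actual formulation.

    import torch

    def physics_constrained_loss(net, features, targets, B, injections, lambda_phys=1.0):
        """Data-fitting loss plus a penalty on the DC power-flow mismatch of predicted angles."""
        theta_pred = net(features)                   # predicted bus voltage angles, shape [batch, n_bus]
        data_loss = torch.mean((theta_pred - targets) ** 2)
        mismatch = theta_pred @ B.T - injections     # residual of B @ theta = p for each sample
        physics_loss = torch.mean(mismatch ** 2)
        return data_loss + lambda_phys * physics_loss

Tuning lambda_phys trades off fidelity to the measured data against strictness of the physical constraint.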