Hello! I am an assistant professor in the Department of Computer Science at the University of Arizona. From September 2017 to June 2019, I was a postdoctoral researcher in the machine learning group at Microsoft Research NYC. I received my PhD in Computer Science from UCSD, where I was lucky to have Prof. Kamalika Chaudhuri as my advisor. Before that, I was an undergraduate student at Peking University, studying learning theory with Prof. Liwei Wang.

You can reach me by email at chichengz at cs dot arizona dot edu.

To prospective PhD students: you are welcome to apply to our CS, Applied Math, or Statistics PhD programs. I am mostly looking for self-motivated students with a solid math background and/or theoretical research experience. Please feel free to send me an email if you think there is a match between our research interests.

## Research

My research interests lie in the theory and applications of machine learning. I primarily work on interactive learning (e.g., active learning, contextual bandits, and reinforcement learning), where learning algorithms are involved in the data collection process. Specifically, I am interested in:

• designing and analyzing interactive learning algorithms with data-efficiency, computational-efficiency, and robustness guarantees, and

• identifying new and natural interaction mechanisms that learning algorithms can benefit from.

I am also interested in topics in unsupervised learning, as well as quantifying and utilizing confidence and uncertainty in machine learning.

## Publications

### Conferences

• Active Fairness Auditing.

ICML 2022 (long talk).
• Thompson Sampling for Robust Transfer in Multi-Task Bandits.

ICML 2022.
• Margin-distancing for safe model explanation.

AISTATS 2022.
• Provably Efficient Multi-Task Reinforcement Learning with Model Transfer.

NeurIPS 2021.
• Improved Algorithms for Efficient Active Learning Halfspaces with Massart and Tsybakov noise.

COLT 2021.
• Multitask Bandit Learning through Heterogeneous Feedback Aggregation.

AISTATS 2021.
• Active Online Learning with Hidden Shifting Domains.

AISTATS 2021.
• Attribute-Efficient Learning of Halfspaces with Malicious Noise: Near-Optimal Label Complexity and Noise Tolerance.

ALT 2021.
• Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality.

NeurIPS 2020.
• Efficient Contextual Bandits with Continuous Actions.

NeurIPS 2020.
• Efficient Active Learning of Sparse Halfspaces with Arbitrary Bounded Noise.

NeurIPS 2020 (oral presentation).
• Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds.

ICLR 2020 (talk).
• Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting.

COLT 2019.
• Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback.

ICML 2019.
• Bandit Multiclass Linear Classification: Efficient Algorithms for the Separable Case.

ICML 2019.
• Efficient Active Learning of Sparse Halfspaces.

COLT 2018.
• Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces.

NeurIPS 2017.
• Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret.

ICML 2017.
• Search Improves Label for Active Learning.

NeurIPS 2016.
• The Extended Littlestone's Dimension for Learning with Mistakes and Abstentions.

COLT 2016.
• Active Learning from Weak and Strong Labelers.

NeurIPS 2015.
• Spectral Learning of Large Structured HMMs for Comparative Epigenomics.

NeurIPS 2015.
• Beyond Disagreement-based Agnostic Active Learning.

NeurIPS 2014 (spotlight presentation).

### Workshop Contributions

• A Potential-based Framework for Online Learning with Mistakes and Abstentions.

NeurIPS 2016 Workshop on Reliable Machine Learning in the Wild.
• Search Improves Label for Active Learning.

ICML Workshop on Data Efficient Machine Learning 2016.
• Active Learning from Weak and Strong Labelers.

ICML Active Learning Workshop 2015.
• Improved Algorithms for Confidence-Rated Prediction with Error Guarantees.

NeurIPS 2013 Workshop on Learning Faster from Easy Data.

## Teaching

• CSC 588: Machine Learning Theory. Spring 2022.
• CSC 696H: Topics in Reinforcement Learning Theory. Fall 2021.
• CSC 588: Machine Learning Theory. Spring 2021.
• CSC 665 Section 2: Machine Learning Theory. Fall 2019.

## Thesis

• Active Learning and Confidence-rated Prediction.

PhD Thesis, UCSD, 2017.

## Tutorials

• Tutorial on Statistical Foundations of Interactive Learning.

ISIT 2017.