About Me
Hello! I am an assistant professor in the Department of Computer Science at the University of Arizona. From September 2017 to June 2019, I was a postdoctoral researcher in the machine learning group at Microsoft Research NYC. I received my PhD in Computer Science from UCSD, where I was lucky to have Prof. Kamalika Chaudhuri as my advisor.
You can reach me by email at chichengz at cs dot arizona dot edu.
To prospective PhD students: you are welcome to apply to our CS, Applied Math, or Statistics PhD programs. I am mostly looking for self-motivated students with a solid math background and/or theoretical research experience. Please feel free to send me an email if you think there is a match between our research interests.
Research
My research interests lie in the theory and applications of machine learning. I primarily work on interactive learning (e.g., active learning and contextual bandits), where learning algorithms take part in the data collection process.
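To make "interactive" concrete, here is a minimal sketch of pool-based active learning with uncertainty sampling, one of the simplest settings in which a learner steers its own data collection. It uses synthetic data and scikit-learn's LogisticRegression; the data, query rule, and label budget are all illustrative assumptions, not code from any of the papers below.

```python
# A toy sketch of pool-based active learning with uncertainty sampling.
# Everything here (data, model, budget) is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic unlabeled pool; labels are hidden until queried.
X_pool = rng.normal(size=(1000, 2))
y_pool = (X_pool @ np.array([1.0, -0.5]) > 0).astype(int)

# Seed with 5 labeled examples from each class, then query adaptively.
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
for _ in range(20):  # label budget: 20 queries
    clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    probs = clf.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(probs - 0.5)   # points near the boundary are most uncertain
    uncertainty[labeled] = -np.inf       # never re-query an already labeled point
    labeled.append(int(np.argmax(uncertainty)))  # query the most uncertain point

print(f"labeled {len(labeled)} of {len(X_pool)} points")
```

The point of the sketch is the loop structure: the learner's current hypothesis determines which label is requested next, in contrast to passive learning where the labeled sample is fixed in advance.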
Publications
Preprints
- Improved Algorithms for Efficient Active Learning Halfspaces with Massart and Tsybakov noise. [arXiv]
Conferences
- Multitask Bandit Learning through Heterogeneous Feedback Aggregation. AISTATS 2021. [arXiv]
- Active Online Learning with Hidden Shifting Domains. AISTATS 2021. [arXiv]
- Attribute-Efficient Learning of Halfspaces with Malicious Noise: Near-Optimal Label Complexity and Noise Tolerance. ALT 2021. [arXiv]
- Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality. NeurIPS 2020. [arXiv]
- Efficient Contextual Bandits with Continuous Actions. NeurIPS 2020. [arXiv]
- Efficient Active Learning of Sparse Halfspaces with Arbitrary Bounded Noise. NeurIPS 2020 (oral presentation). [arXiv]
- Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds. ICLR 2020 (talk). [arXiv] [code]
- Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting. COLT 2019. [arXiv]
- Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback. ICML 2019. [arXiv] [code]
- Bandit Multiclass Linear Classification: Efficient Algorithms for the Separable Case. ICML 2019. [arXiv]
- Efficient Active Learning of Sparse Halfspaces. COLT 2018. [arXiv]
- Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces. NeurIPS 2017. [arXiv] [poster]
- Efficient Online Bandit Multiclass Learning with \( \tilde O(\sqrt{T}) \) Regret. ICML 2017. [arXiv]
- Search Improves Label for Active Learning. NeurIPS 2016. [arXiv]
- The Extended Littlestone's Dimension for Learning with Mistakes and Abstentions. COLT 2016. [arXiv]
- Active Learning from Weak and Strong Labelers. NeurIPS 2015. [arXiv]
- Spectral Learning of Large Structured HMMs for Comparative Epigenomics. NeurIPS 2015. [arXiv] [code]
- Beyond Disagreement-based Agnostic Active Learning. NeurIPS 2014 (spotlight presentation). [pdf] [arXiv]
Workshop Contributions
- A Potential-based Framework for Online Learning with Mistakes and Abstentions. NeurIPS 2016 Workshop on Reliable Machine Learning in the Wild. [pdf]
- Search Improves Label for Active Learning. ICML Workshop on Data Efficient Machine Learning 2016. [pdf]
- Active Learning from Weak and Strong Labelers. ICML Active Learning Workshop 2015. [pdf]
- Improved Algorithms for Confidence-Rated Prediction with Error Guarantees. NeurIPS 2013 Workshop on Learning Faster from Easy Data. [pdf]
Teaching
- CSC 588: Machine Learning Theory. Spring 2021. [Class website]
- CSC 665 Section 2: Machine Learning Theory. Fall 2019. [Class website]
Thesis
- Active Learning and Confidence-rated Prediction. PhD Thesis, UCSD, 2017. [Local version (with typos fixed)] [UC eScholarship]