Hello! I am a postdoctoral researcher at Microsoft Research NYC. I received my PhD in Computer Science from UCSD, where I was fortunate to be advised by Prof. Kamalika Chaudhuri. Before that, I was an undergraduate student in the Department of Machine Intelligence, School of EECS, Peking University, China, where I had a great time studying machine learning theory with Prof. Liwei Wang. In Summer 2015, I was a research intern at Yahoo Labs NYC, mentored by Dr. Alina Beygelzimer. In Summer 2016, I did a second internship at Yahoo Research NYC, working with Dr. Alina Beygelzimer and Dr. Francesco Orabona.

My (slightly outdated) CV can be found here.

## Research

My research interests lie in both the theory and applications of machine learning. I primarily work on interactive learning (e.g., active learning and contextual bandits), where learning algorithms are involved in the data collection process. Specifically, I am interested in:

• designing and analyzing interactive learning algorithms with guarantees on data efficiency, computational efficiency, and robustness, and

• identifying new interaction models from which learning algorithms can benefit.

I am also interested in topics in unsupervised learning, and in quantifying and utilizing confidence in machine learning.

## Publications

### Preprints

• Bandit Multiclass Linear Classification: Efficient Algorithms for the Separable Case.

• Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting.

• Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback.

• Spectral Learning of Binomial HMMs for DNA Methylation Data.

### Conferences

• Efficient Active Learning of Sparse Halfspaces. COLT 2018.

• Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces. NIPS 2017.

• Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret. ICML 2017.

• Search Improves Label for Active Learning. NIPS 2016.

• The Extended Littlestone's Dimension for Learning with Mistakes and Abstentions. COLT 2016.

• Active Learning from Weak and Strong Labelers. NIPS 2015.

• Spectral Learning of Large Structured HMMs for Comparative Epigenomics. NIPS 2015.

• Beyond Disagreement-based Agnostic Active Learning. NIPS 2014.

### Workshop Contributions

• A Potential-based Framework for Online Learning with Mistakes and Abstentions. NIPS 2016 Workshop on Reliable Machine Learning in the Wild.

• Search Improves Label for Active Learning. ICML 2016 Workshop on Data-Efficient Machine Learning.

• Active Learning from Weak and Strong Labelers. ICML 2015 Active Learning Workshop.

• Improved Algorithms for Confidence-Rated Prediction with Error Guarantees. NIPS 2013 Workshop on Learning Faster from Easy Data.

## Thesis

• Active Learning and Confidence-rated Prediction. PhD Thesis, UCSD, 2017.

See also my research exam report on the same topic.