Publications

Preprints

  • Ensemble-based Interactive Imitation Learning.
    Yichen Li and Chicheng Zhang.
    [arXiv]
  • Efficient Low-Rank Matrix Estimation, Experimental Design, and Arm-Set-Dependent Low-Rank Bandits.
    Kyoungseok Jang, Chicheng Zhang, and Kwang-Sung Jun.
    [arXiv]

Conferences

  • The Human-AI Substitution game: active learning from a strategic labeler.
    Tom Yan and Chicheng Zhang.
    ICLR 2024 (to appear).
  • Efficient Active Learning Halfspaces with Tsybakov Noise: A Non-convex Optimization Approach.
    Yinan Li and Chicheng Zhang.
    AISTATS 2024 (to appear).
    [arXiv]
  • Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards.
    Hao Qin, Kwang-Sung Jun, and Chicheng Zhang.
    NeurIPS 2023.
    [arXiv]
  • Fair coexistence of heterogeneous networks: A novel probabilistic multi-armed bandit approach.
    Zhiwu Guo, Chicheng Zhang, Ming Li, and Marwan Krunz.
    WiOpt 2023.
    [link]
  • Hierarchical Unimodal Bandits.
    Tianchi Zhao, Chicheng Zhang, and Ming Li.
    ECML-PKDD 2022.
    [link]
  • On Efficient Online Imitation Learning via Classification.
    Yichen Li and Chicheng Zhang.
    NeurIPS 2022.
    [arXiv] [talk slides]
  • PopArt: Efficient Sparse Regression and Experimental Design for Optimal Sparse Linear Bandits.
    Kyoungseok Jang, Chicheng Zhang, and Kwang-Sung Jun.
    NeurIPS 2022.
    [arXiv] [talk slides]
  • Active Fairness Auditing.
    Tom Yan and Chicheng Zhang.
    ICML 2022 (outstanding paper runner-up).
    [arXiv] [slides]
  • Margin-distancing for safe model explanation.
    Tom Yan and Chicheng Zhang.
    AISTATS 2022.
    [arXiv]
  • Provably Efficient Multi-Task Reinforcement Learning with Model Transfer.
    Chicheng Zhang and Zhi Wang.
    NeurIPS 2021.
    [arXiv] [poster] [slides]
  • Improved Algorithms for Efficient Active Learning Halfspaces with Massart and Tsybakov Noise.
    Chicheng Zhang and Yinan Li.
    COLT 2021.
    [arXiv] [slides] [poster]
  • Multitask Bandit Learning through Heterogeneous Feedback Aggregation.
    Zhi Wang, Chicheng Zhang, Manish Kumar Singh, Laurel D. Riek, and Kamalika Chaudhuri.
    AISTATS 2021.
    [arXiv] [spotlight talk (by Zhi)]
  • Attribute-Efficient Learning of Halfspaces with Malicious Noise: Near-Optimal Label Complexity and Noise Tolerance.
    Jie Shen and Chicheng Zhang.
    ALT 2021.
    [arXiv]
  • Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality.
    Kwang-Sung Jun and Chicheng Zhang.
    NeurIPS 2020.
    [arXiv]
  • Efficient Contextual Bandits with Continuous Actions.
    Maryam Majzoubi, Chicheng Zhang, Rajan Chari, Akshay Krishnamurthy, John Langford, and Aleksandrs Slivkins.
    NeurIPS 2020.
    [arXiv] [poster] [long talk]
  • Efficient Active Learning of Sparse Halfspaces with Arbitrary Bounded Noise.
    Chicheng Zhang, Jie Shen, and Pranjal Awasthi.
    NeurIPS 2020 (oral presentation).
    [arXiv] [talk]
  • Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds.
    Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford and Alekh Agarwal.
    ICLR 2020 (talk).
    [arXiv] [code]
  • Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting.
    Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins and Chicheng Zhang.
    COLT 2019.
    [arXiv]
  • Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback.
    Chicheng Zhang, Alekh Agarwal, Hal Daumé III, John Langford, and Sahand N Negahban.
    ICML 2019.
    [arXiv] [code]
  • Bandit Multiclass Linear Classification: Efficient Algorithms for the Separable Case.
    Alina Beygelzimer, Dávid Pál, Balázs Szörényi, Devanathan Thiruvenkatachari, Chen-Yu Wei, and Chicheng Zhang.
    ICML 2019.
    [arXiv]
  • Efficient Active Learning of Sparse Halfspaces.
    Chicheng Zhang.
    COLT 2018.
    [arXiv]
  • Revisiting Perceptron: Efficient and Label-Optimal Learning of Halfspaces.
    Songbai Yan and Chicheng Zhang.
    NeurIPS 2017.
    [arXiv] [poster]
  • Efficient Online Bandit Multiclass Learning with \( \tilde O(\sqrt{T}) \) Regret.
    Alina Beygelzimer, Francesco Orabona, and Chicheng Zhang.
    ICML 2017.
    [arXiv]
  • Search Improves Label for Active Learning.
    Alina Beygelzimer, Daniel Hsu, John Langford, and Chicheng Zhang.
    NeurIPS 2016.
    [arXiv]
  • The Extended Littlestone's Dimension for Learning with Mistakes and Abstentions.
    Chicheng Zhang and Kamalika Chaudhuri.
    COLT 2016.
    [arXiv]
  • Active Learning from Weak and Strong Labelers.
    Chicheng Zhang and Kamalika Chaudhuri.
    NeurIPS 2015.
    [arXiv]
  • Spectral Learning of Large Structured HMMs for Comparative Epigenomics.
    Chicheng Zhang, Jimin Song, Kevin C. Chen, and Kamalika Chaudhuri.
    NeurIPS 2015.
    [arXiv] [code]
  • Beyond Disagreement-based Agnostic Active Learning.
    Chicheng Zhang and Kamalika Chaudhuri.
    NeurIPS 2014 (spotlight presentation).
    [pdf] [arXiv]

Workshop Contributions

  • A Potential-based Framework for Online Learning with Mistakes and Abstentions.
    Chicheng Zhang and Kamalika Chaudhuri.
    NeurIPS 2016 workshop on Reliable Machine Learning in the Wild.
    [pdf]
  • Search Improves Label for Active Learning.
    Alina Beygelzimer, Daniel Hsu, John Langford and Chicheng Zhang.
    ICML Workshop on Data Efficient Machine Learning 2016.
    [pdf]
  • Active Learning from Weak and Strong Labelers.
    Chicheng Zhang and Kamalika Chaudhuri.
    ICML Active Learning Workshop 2015.
    [pdf]
  • Improved Algorithms for Confidence-Rated Prediction with Error Guarantees.
    Kamalika Chaudhuri and Chicheng Zhang.
    NeurIPS 2013 Workshop on Learning Faster from Easy Data.
    [pdf]

Thesis