Shiyu Chang

Assistant Professor, Ph.D.

UC Santa Barbara

chang87 [AT] ucsb.edu

Bio

Shiyu Chang is an Assistant Professor at UC Santa Barbara, where his research centers on machine learning with applications in natural language processing and computer vision.

Before joining UC Santa Barbara, Shiyu was a research scientist at the MIT-IBM Watson AI Lab, where he worked closely with Prof. Regina Barzilay and Prof. Tommi Jaakkola. He earned both his B.S. and Ph.D. from the University of Illinois at Urbana-Champaign, where his Ph.D. advisor was Prof. Thomas S. Huang.

Students

Guanyu Yao ( 2024 - )

Jingbo Yang ( 2024 - )

Li An ( 2024 - )

Xinyi Gao ( 2024 - )

Jiabao Ji ( 2022 - )

Yujian Liu ( 2022 - )

Qiucheng Wu ( 2021 - )

Bairu Hou ( 2021 - )

Guanhua Zhang ( 2021 - 2023 ): now a Ph.D. student at the Max Planck Institute for Intelligent Systems

Jiajun Wang ( 2023 )

Chris Riney ( 2023 - )

Edwin Yee ( 2022 - )

Yifan Ke ( 2023 )

Liliana Nguyen ( 2022 - 2023 )

Shamita Gurusu ( 2022 - 2023 )

Hugo Lin ( 2022 - 2023 )

Wesley Truong ( 2022 - 2023 )

Zhiyuan Ren ( 2021 - 2022 )

Selected Publications

Full publications on Google Scholar.
☆ indicates authors with equal contribution. ‡ indicates my students or interns.

Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference

Jiabao Ji, Yujian Liu, Yang Zhang, Gaowen Liu, Ramana Rao Kompella, Sijia Liu, Shiyu Chang

NeurIPS'24: Advances in Neural Information Processing Systems

Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective

Yujian Liu, Yang Zhang, Tommi S. Jaakkola, Shiyu Chang

EMNLP'24: Conference on Empirical Methods in Natural Language Processing

Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing

Jiabao Ji☆ ‡, Bairu Hou☆ ‡, Alexander Robey, George J. Pappas, Hamed Hassani, Yang Zhang, Eric Wong, Shiyu Chang

ArXiv Preprint

Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling

Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, Yang Zhang

ICML'24: International Conference on Machine Learning

Advancing the Robustness of Large Language Models through Self-Denoised Smoothing

Jiabao Ji☆ ‡, Bairu Hou☆ ‡, Zhen Zhang, Guanhua Zhang☆ ‡, Wenqi Fan, Qing Li, Yang Zhang, Gaowen Liu, Sijia Liu, Shiyu Chang

NAACL'24: Annual Conference of the North American Chapter of the Association for Computational Linguistics

Correcting Diffusion Generation through Resampling

Yujian Liu, Yang Zhang, Tommi S. Jaakkola, Shiyu Chang

CVPR'24: IEEE Conference on Computer Vision and Pattern Recognition

Improving Diffusion Models for Scene Text Editing with Dual Encoders

Jiabao Ji☆ ‡, Guanhua Zhang☆ ‡, Zhaowen Wang, Bairu Hou, Zhifei Zhang, Brian Price, Shiyu Chang

TMLR'24: Transactions on Machine Learning Research

Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis

Qiucheng Wu☆ ‡, Yujian Liu☆ ‡, Handong Zhao, Trung Bui, Zhe Lin, Yang Zhang, Shiyu Chang

ICCV'23: International Conference on Computer Vision

Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models

Guanhua Zhang☆ ‡, Jiabao Ji☆ ‡, Yang Zhang, Mo Yu, Tommi S. Jaakkola, Shiyu Chang

ICML'23: International Conference on Machine Learning

PromptBoosting: Black-Box Text Classification with Ten Forward Passes

Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, Yang Zhang

ICML'23: International Conference on Machine Learning

Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models

Qiucheng Wu, Yujian Liu, Handong Zhao, Ajinkya Kale, Trung Bui, Tong Yu, Zhe Lin, Yang Zhang, Shiyu Chang

CVPR'23: IEEE Conference on Computer Vision and Pattern Recognition

TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization

Bairu Hou, Jinghan Jia, Yihua Zhang, Guanhua Zhang☆ ‡, Yang Zhang, Sijia Liu, Shiyu Chang

ICLR'23: International Conference on Learning Representations

Fairness Reprogramming

Guanhua Zhang☆ ‡, Yihua Zhang, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, Shiyu Chang

NeurIPS'22: Advances in Neural Information Processing Systems

Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization

Yihua Zhang, Guanhua Zhang☆ ‡, Prashant Khanduri, Mingyi Hong, Shiyu Chang, Sijia Liu

ICML'22: International Conference on Machine Learning

Learning Stable Classifiers by Transferring Unstable Features

Yujia Bao, Shiyu Chang, Regina Barzilay

ICML'22: International Conference on Machine Learning

ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers

Kaizhi Qian, Yang Zhang, Heting Gao, Junru Ni, Cheng-I Lai, David Cox, Mark A. Hasegawa-Johnson, Shiyu Chang

ICML'22: International Conference on Machine Learning

Data-Efficient Double-Win Lottery Tickets from Robust Pre-training

Tianlong Chen, Zhenyu Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang

ICML'22: International Conference on Machine Learning

Linearity Grafting: How Neuron Pruning Helps Certifiable Robustness

Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang

ICML'22: International Conference on Machine Learning

DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings

Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljačić, Shang-Wen Li, Wen-tau Yih, Yoon Kim, James Glass

NAACL'22: Annual Conference of the North American Chapter of the Association for Computational Linguistics

Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free

Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang

CVPR'22: IEEE Conference on Computer Vision and Pattern Recognition

Query and Extract: Refining Event Extraction as Type-oriented Binary Decoding

Sijia Wang, Mo Yu, Shiyu Chang, Lichao Sun, Lifu Huang

ACL-Findings'22: Findings of the Annual Meeting of the Association for Computational Linguistics

How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective

Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jinfeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu

ICLR'22: International Conference on Learning Representations

Adversarial Support Alignment

Shangyuan Tong☆ ‡, Timur Garipov, Yang Zhang, Shiyu Chang, Tommi S. Jaakkola

ICLR'22: International Conference on Learning Representations

Optimizer Amalgamation

Tianshu Huang, Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang

ICLR'22: International Conference on Learning Representations

Understanding Interlocking Dynamics of Cooperative Rationalization

Mo Yu, Yang Zhang, Shiyu Chang, Tommi S. Jaakkola

NeurIPS'21: Advances in Neural Information Processing Systems

TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up

Yifan Jiang, Shiyu Chang, Zhangyang Wang

NeurIPS'21: Advances in Neural Information Processing Systems

PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition

Cheng-I Lai, Yang Zhang, Alexander Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David Cox, James Glass

NeurIPS'21: Advances in Neural Information Processing Systems

Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers

Yujia Bao, Shiyu Chang, Regina Barzilay

ICML'21: International Conference on Machine Learning

The Lottery Ticket Hypothesis for Pre-trained BERT Networks

Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin

NeurIPS'20: Advances in Neural Information Processing Systems

Invariant Rationalization

Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola

ICML'20: International Conference on Machine Learning

Unsupervised Speech Decomposition via Triple Information Bottleneck

Kaizhi Qian, Yang Zhang, Shiyu Chang, David Cox, Mark A. Hasegawa-Johnson

ICML'20: International Conference on Machine Learning

Few-shot Text Classification with Distributional Signatures

Yujia Bao☆ ‡, Menghua Wu☆ ‡, Shiyu Chang, Regina Barzilay

ICLR'20: International Conference on Learning Representations

A Game Theoretic Approach to Class-wise Selective Rationalization

Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola

NeurIPS'19: Advances in Neural Information Processing Systems

A Stratified Approach to Robustness for Randomly Smoothed Classifiers

Guang-He Lee, Yang Yuan, Shiyu Chang, Tommi S. Jaakkola

NeurIPS'19: Advances in Neural Information Processing Systems

Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control

Mo Yu, Shiyu Chang, Yang Zhang, Tommi S. Jaakkola

EMNLP'19: Conference on Empirical Methods in Natural Language Processing

AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss

Kaizhi Qian, Yang Zhang, Shiyu Chang, Xuesong Yang, Mark A. Hasegawa-Johnson

ICML'19: International Conference on Machine Learning

Deriving Machine Attention from Human Rationales

Yujia Bao, Shiyu Chang, Mo Yu, Regina Barzilay

EMNLP'18: Conference on Empirical Methods in Natural Language Processing

Image Super-Resolution via Dual-State Recurrent Networks

Wei Han☆ ‡, Shiyu Chang, Ding Liu, Mo Yu, Michael Witbrock, Thomas S. Huang

CVPR'18: IEEE Conference on Computer Vision and Pattern Recognition

Dilated Recurrent Neural Networks

Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark A. Hasegawa-Johnson, Thomas S. Huang

NIPS'17: Advances in Neural Information Processing Systems

Streaming Recommender Systems

Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A. Hasegawa-Johnson, Thomas S. Huang

WWW'17: ACM International World Wide Web Conference

Positive-Unlabeled Learning in Streaming Networks

Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A. Hasegawa-Johnson, Thomas S. Huang

KDD'16: ACM SIGKDD Conference on Knowledge Discovery and Data Mining

Heterogeneous Network Embedding via Deep Architectures

Shiyu Chang, Wei Han, Jiliang Tang, Guo-Jun Qi, Charu C. Aggarwal, Thomas S. Huang

KDD'15: ACM SIGKDD Conference on Knowledge Discovery and Data Mining

Factorized Similarity Learning in Networks

Shiyu Chang, Guo-Jun Qi, Charu C. Aggarwal, Jiayu Zhou, Meng Wang, Thomas S. Huang

ICDM'14: IEEE International Conference on Data Mining

Learning Locally-Adaptive Decision Functions for Person Verification

Zhen Li, Shiyu Chang, Feng Liang, Thomas S. Huang, Liangliang Cao, John R. Smith

CVPR'13: IEEE Conference on Computer Vision and Pattern Recognition

Teaching

Misc

- Some words keep me moving forward:

"A job well done is its own reward. You take pride in the things you do, not for others to see, not for the respect, or glory, or any other rewards it might bring. You take pride in what you do, because you're doing your best. If you believe in something, you stick with it. When things get difficult, you try harder."

This website was built with Jekyll, based on a template by Martin Saveski. Some layouts are inspired by the designs of David Alvarez-Melis and Jiayu Zhou.