- 🔭 I'm currently pursuing my Ph.D. in the Department of Computer Science at the University of Texas at Dallas, supervised by Prof. Feng Chen. I was a research intern at Meta AI and at the Bosch Center for AI (BCAI), and I am an incoming Research Scientist Intern at Meta AI for Summer 2024. During my internship at Meta AI, I contributed to a multimodal recurring transfer learning project; at BCAI, I focused on sparsity in multimodal foundation models (vision-language models).
- 🌱 My research is in the field of Deep Learning, with a focus on low-resource learning (meta-, few-shot, semi-, and self-supervised learning), uncertainty estimation, robustness, and efficiency in traditional CNNs and vision-language models. My goal is to make AI systems safe, robust, reliable, and trustworthy. I'm also very interested in Generative AI (large language models (LLMs) and diffusion models) and am exploring these directions.
Robust Meta-Learning
- PLATINUM: Semi-Supervised Model Agnostic Meta-Learning using Submodular Mutual Information.
Changbin Li, Suraj Kothawade, Feng Chen, Rishabh Iyer.
International Conference on Machine Learning (ICML), 2022.
- A Nested Bi-Level Optimization for Robust Few Shot Learning.
Krishnateja Killamsetty*, Changbin Li*, Chen Zhao, Rishabh Iyer, Feng Chen. (* Equal Contribution)
AAAI Conference on Artificial Intelligence (AAAI), 2022.
Uncertainty Quantification
- Hyper Evidential Deep Learning to Quantify Composite Classification Uncertainty.
Changbin Li, Kangshuo Li, Yuzhe Ou, Lance M. Kaplan, Audun Jøsang, Jin-Hee Cho, Dong Hyun Jeong, Feng Chen.
International Conference on Learning Representations (ICLR), 2024.
Current Work:
- Currently investigating uncertainty quantification in vision-language models and LLMs to improve their decision-making robustness and safety.
- 💬 I like to write blog posts from time to time, including Paper Notes and Study Notes. Recently, I have been summarizing my previous notes on Generative AI.
- 👯 I am an incoming Research Scientist Intern at Meta AI this summer, and I am seeking full-time roles starting at the end of 2024.
- 📫 Feel free to contact me via email at [email protected].
CS PhD in AI/ML | Ex-Research Intern @ Meta AI & Bosch Center for AI
UT Dallas, Richardson, TX
- LinkedIn: in/changbin-li
- Google Scholar: https://scholar.google.com/citations?user=dtWUP50AAAAJ&hl=en
- OpenReview: https://openreview.net/profile?id=~Changbin_Li1
Popular repositories
- ups (Python, forked from nayeemrizve/ups): "In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning" by Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, Mubarak Shah (ICLR 2021)
- Brynhildr (JavaScript, forked from jslu0418/Brynhildr): A repository for the team project of CS6360