I am a research scientist at IBM Research, Cambridge and the MIT-IBM Watson AI Lab. I develop statistical models to understand and explain images, text, and real-world healthcare data, and evaluate their robustness to modeling and data perturbations.
I hold a Ph.D. in Computer Science from Brown University, where I was advised by Erik Sudderth. Before Brown, I spent a few years in beautiful Boulder earning a master’s degree from the University of Colorado, where I was advised by Jane Mulligan. Going further back, I attended the University of Mumbai (Bombay) (KJSCE) as an undergrad. I also spent a year as a postdoctoral scientist at the now defunct Disney Research, Cambridge.
Recent Highlights
- ICML 2024 paper on calibrating large language models. The method requires only a single forward pass through the LLM and can learn to calibrate without labeled data. Here is a nice MIT News article providing a high-level gist of the work.
- New NeurIPS 2022 paper on improving the fairness of pre-trained classifiers by detecting and dropping training instances that contribute to unfairness. Here is a short video and a blog post describing this work.
- Are Gaussian process (GP) predictions sensitive to the choice of the kernel? Sometimes! We show how to check the sensitivity and robustness of GP-based analyses in this AISTATS paper.
- Excited about our comprehensive toolbox for uncertainty quantification. Read more about it here.
- NeurIPS paper on fast and accurate approximations to cross-validation and jackknife for models with spatial and temporal structure.
- New work documenting our progress on building statistical models of the progression of Parkinson’s disease appeared at MLHC 2020. Our work was highlighted in several popular media outlets: DigitalTrends, VentureBeat, and TechRepublic.
- A comprehensive overview of learning Bayesian neural networks with horseshoe priors will appear in JMLR. Code is available here.