Research
My research interests broadly encompass graph representation learning and geometric deep learning. I am fascinated by questions such as:
How important is the graph structure for Graph Neural Networks?
- How critical is the input graph for the downstream task? Can we determine a priori whether the input graph contains sufficient information to solve it? The input graph plays a dual role: it provides the data for the GNN model and also serves as the computational structure over which message passing happens. This duality raises a further question: how do we know whether we have an optimal computational structure for the learning task? Is the graph structure always relevant?
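The dual role described above can be made concrete with a minimal sketch: a single mean-aggregation message-passing step in plain NumPy (a simplified, hypothetical layer, not tied to any particular GNN library). The same node features are fed through two different graphs, and the outputs differ because the graph dictates where messages flow.

```python
import numpy as np

def message_pass(A, X):
    """One mean-aggregation message-passing step over adjacency A."""
    # Add self-loops so each node keeps its own features.
    A_hat = A + np.eye(A.shape[0])
    # Row-normalize: each node averages over its closed neighborhood.
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return D_inv @ A_hat @ X

X = np.array([[1.0], [0.0], [0.0]])      # identical node features ...
path = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], float)      # ... on a path graph,
triangle = np.ones((3, 3)) - np.eye(3)   # ... versus a triangle.

# Same data, different computational structure -> different outputs.
print(message_pass(path, X).ravel())      # [0.5, 0.333..., 0.0]
print(message_pass(triangle, X).ravel())  # [0.333..., 0.333..., 0.333...]
```

Here the graph is both the data (which nodes carry which features) and the computation (which averages get taken), which is exactly why asking whether the given graph is the right computational structure is a meaningful question.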
Generalization in Graph Neural Networks.
- What does memorization mean in the context of GNNs? Do GNNs memorize at all? How can we distinguish memorization from overfitting to the graph structure?
Applications to biomedical and molecular data modeling.
- Designing deep neural networks for biomedical applications such as modeling protein structures and chemical molecules, and accelerating drug-target interaction prediction.
Talks
- Jan 28, 2025 - Had a fun time talking about our recently accepted NeurIPS paper with neptune.ai. The goal was to explain our research using simpler analogies for a broader audience. The video of the talk is available here.
- From June 12-14, 2024, I was at the Helmholtz AI Conference in Düsseldorf, where I presented our recent work on How to mitigate both over-squashing and over-smoothing in GNNs?. The slides are available here.