Research Scientist
AryaXAI.com
Working at the intersection of explainable AI (XAI), AI alignment, and safety in high-stakes domains: interpreting black-box models, evaluating the reliability of XAI methods, and developing in-house foundation models for tabular data, particularly for fraud detection and other mission-critical applications. Contributed to the development of the DLBacktrace explainability method and co-created a benchmarking library for evaluating state-of-the-art XAI techniques. Explored model-agnostic AI alignment and optimization using post-hoc explainability methods, with experimental work across architectures including CNNs (VGG-19), BERT, and Llama models, spanning diverse modalities and tasks. Led and mentored interns, took an active role in recruitment, and drove proof-of-concept (POC) initiatives to advance internal algorithmic capabilities. Represented the R&D team in client engagements and at industry events, including the 5th MLOps Conference, presenting and demonstrating AryaXAI’s integrated AI solutions. AryaXAI Alignment Labs, AryaXAI.com (Arya.ai [Lithasa Technology Private Limited], an Aurionpro Company).
Researcher
MILA Quebec AI Institute
Research Intern
Bosch Corporate Research (Robert Bosch Research and Technology Center India)

GDG Cloud Kolkata
Machine Learning Kolkata (formerly TFUG)
