Projects & Experience

Download CV (PDF)

Research Projects

Internalizing Safety via Weight-Level Activation Steering

A model-editing technique that permanently embeds transient safety steering vectors into Transformer weights.

Transforms temporary activation interventions into persistent architectural alignment without inference-time compute overhead.

Achieved robust safety behavior comparable to standard runtime steering while preserving general coherence.

Internalizes safety concepts directly into the target architecture's residual stream projections.
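The core idea can be sketched in a few lines. This is an illustrative toy, not the project's actual implementation: the function name and the bias-folding shortcut are assumptions. If a runtime hook adds alpha·v to the residual stream after a projection writes to it, the same shift can be folded into that projection's bias, making the behavior persistent with no inference-time hook.

```python
import numpy as np

def bake_in_steering(b_out: np.ndarray, v: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Fold a transient steering vector into a projection bias (illustrative).

    A runtime steering hook computes h <- h + alpha * v after the layer
    writes to the residual stream. Adding alpha * v to the output-projection
    bias reproduces that shift permanently, so the hook (and its
    inference-time overhead) can be removed.
    """
    return b_out + alpha * v

# Toy check: steered runtime output == output with the baked-in bias.
rng = np.random.default_rng(0)
d = 8
W, b = rng.normal(size=(d, d)), rng.normal(size=d)
v, x = rng.normal(size=d), rng.normal(size=d)

runtime = x @ W.T + b + 0.7 * v                   # hook applied at inference
baked = x @ W.T + bake_in_steering(b, v, 0.7)     # edited weights, no hook
assert np.allclose(runtime, baked)
```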

Black-Scholes Modelling with Kolmogorov-Arnold Networks (KANs)

Applying KANs to financial derivatives pricing, exploring whether learnable activation functions improve on classical Black-Scholes approximations.

Working to improve on the 72% accuracy achieved on the synthetic European options dataset by refining the KAN architecture.

Tracked and optimized model performance: train loss 2.84, test loss 4.57.

Tuned regularization (±2.05) to improve model stability and generalization.
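For context, the closed-form baseline a KAN surrogate is benchmarked against is the classical Black-Scholes European call price (standard notation: spot S, strike K, maturity T, rate r, volatility sigma); a minimal stdlib-only sketch:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Classical Black-Scholes European call price (the analytic baseline)."""
    d1 = (log(S / K) + (r + sigma**2 / 2.0) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Textbook at-the-money example: S=K=100, T=1y, r=5%, sigma=20% -> ~10.45
price = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
```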

Education

University College London (UCL) (upcoming)  — Master of Science in Machine Learning
Aug 2026 – Aug 2027
Sathyabama Institute of Science and Technology  — Bachelor of Engineering in Computer Science (AI and ML)
Sep 2022 – May 2026

Experience

Machine Learning Intern, Glance

Feb 2026 – Present

Built and optimized catalog retrieval pipelines for the Glance app, improving content discovery across millions of items.

Researched and reimplemented SOTA embedding techniques to develop internal catalog-focused embeddings tailored to Glance's content domain.

Achieved a 2× improvement in retrieval metrics, significantly enhancing catalog search relevance and ranking quality.
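At their core, embedding-based catalog pipelines like this reduce to nearest-neighbor search over item vectors. A minimal sketch with toy data (not Glance's production system; names are illustrative):

```python
import numpy as np

def top_k(query: np.ndarray, catalog: np.ndarray, k: int = 3) -> np.ndarray:
    """Indices of the k catalog embeddings most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    return np.argsort(-(c @ q))[:k]  # sort by descending cosine similarity

# Toy catalog of three item embeddings; the query is closest to item 2, then 0.
catalog = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
hits = top_k(np.array([1.0, 0.2]), catalog, k=2)  # -> [2, 0]
```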

Machine Learning Contributor (Contract), Shipd by Datacurve (YC W24)

Feb 2026 – Present

Awarded monetary prizes for top-tier performance in high-stakes predictive modeling and algorithmic problem-solving challenges.

Developed and optimized ML models under strict competitive constraints in a fast-paced environment.

Research Fellow, AI Safety Camp (AISC)

Jan 2026 – Mar 2026

Working with Ihor Kendiukhov to evaluate scalability and security guarantees of novel control protocol classes for advanced AI systems.

Extending Greenblatt et al.'s framework by designing hierarchical and parallel control topologies to optimize safety-usefulness trade-offs.

Conducting scaling experiments to verify generalization of control guarantees across diverse model capabilities.

Independent Interpretability Researcher, Mentored by David Africa (UK AI Security Institute)

Nov 2025 – Feb 2026

Conducted independent research on mechanistic interpretability with direct technical guidance from a Research Scientist at the UK AI Security Institute.

Developed a framework to permanently embed steering vectors into model weights, enabling persistent safety behaviors without inference-time overhead.

Research Intern, International Institute of Information Technology, Hyderabad

May 2025 – Oct 2025

Researched Universal Semantic Representations and developed techniques for generating natural language from abstract syntactic-semantic structures.

Built Controlled Image-to-Text Generation systems for scientific images ensuring accurate, context-aware, domain-specific descriptions.

Conducted workshops on prompt engineering for linguistics researchers.

Machine Learning Intern, Co-build.tech

Dec 2024 – Jan 2025

Enhanced query performance by 25% by optimizing document embedding strategies for semantic similarity retrieval.

Automated mapping of legal clause dependencies, improving analysis efficiency by 30%.

Talks & Seminars

Controlled Image Generation  · Language Technologies Research Center (LTRC), IIIT Hyderabad
2025
Prompt Engineering for Linguistic Research  · Language Technologies Research Center (LTRC), IIIT Hyderabad
2025
Safety and Alignment of LLMs  · Sathyabama Institute of Science and Technology
2025
Impact and Application of Generative AI  · Google Developer Summit
2024
Introduction to Machine Learning and Scopes  · Sathyabama Institute of Science and Technology
2024

Grants

Lambda Labs Research Grant  ($1,000, scalable to $5,000)

Awarded for research on interpretability and routing dynamics of Mixture-of-Experts (MoE) models during inference. Work involves mathematical analysis of expert load balancing, probing memorization vs. generalization through expert usage patterns, and building inference-only diagnostic tools.
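One of the simplest inference-only diagnostics of this kind can be sketched directly (an illustration under assumed top-1 routing, not the funded project's code): expert load is read off the router logits as the fraction of tokens each expert receives.

```python
import numpy as np

def expert_load(router_logits: np.ndarray) -> np.ndarray:
    """Fraction of tokens whose top-1 routing choice is each expert.

    router_logits has shape (num_tokens, num_experts); a uniform result
    indicates balanced load, while a skewed one flags expert collapse.
    """
    choices = router_logits.argmax(axis=-1)          # top-1 expert per token
    n_experts = router_logits.shape[-1]
    return np.bincount(choices, minlength=n_experts) / len(choices)

# Toy batch of 4 tokens over 3 experts: expert 0 gets half the tokens.
logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 3.0, 0.0],
                   [5.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
load = expert_load(logits)  # -> [0.5, 0.25, 0.25]
```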

Service

Reviewer — NeurIPS 2025, AI-MATH Workshop

Reviewer — AAAI 2026, Workshop on Shaping Responsible Synthetic Data in the Era of Foundation Models