Kamesh R

ML Intern, Glance  ·  MSc Machine Learning, UCL (Aug 2026)

About

I'm a machine learning researcher from Chennai. Currently an ML intern at Glance working on catalog retrieval at scale — in August I'm moving to London to start an MSc in Machine Learning at UCL.

My research is on AI safety and mechanistic interpretability. I want to understand what's actually happening inside language models and use that to build systems that stay safe by default. Recent work: permanently embedding steering vectors into model weights, scaling experiments for AI control protocols (AI Safety Camp), and reasoning benchmarks that test models across 100+ languages and cultures.

I'm Tamil, from Chennai, and I follow Liverpool FC obsessively.


Publications

Probing Reasoning Flaws and Safety Hierarchies with Chain-of-Thought Difference Amplification

Kamesh R · NeurIPS 2025, LLM-Evals Workshop

Global PIQA: Evaluating Physical Commonsense Reasoning Across 100+ Languages and Cultures

Kamesh R · Preprint

Indian Grammatical Tradition-Inspired Universal Semantic Representation Bank (USR Bank 1.0)

Kamesh R et al. · AACL-IJCNLP 2025, BHASHA Workshop

PerplexMATH: Steering LLMs Toward Mathematical Reasoning

Jerome Francis, Kamesh R, Serena Pei · ICML 2025, NewInML Workshop (Poster)


Experience

Machine Learning Intern, Glance · Feb 2026 – Present

Built and optimized catalog retrieval pipelines for the Glance app, improving content discovery across millions of items.

Researched and reimplemented state-of-the-art embedding techniques to develop internal, catalog-focused embeddings tailored to Glance's content domain.

Achieved a 2× improvement in retrieval metrics with these embeddings, significantly improving catalog search relevance and ranking quality.

Machine Learning Contributor (Contract), Shipd by Datacurve (YC W24) · Feb 2026 – Present

Awarded cash prizes for top-tier performance in competitive predictive-modeling and algorithmic problem-solving challenges.

Developed and optimized ML models under strict competitive constraints in a fast-paced environment.

Research Fellow, AI Safety Camp (AISC) · Jan 2026 – Mar 2026

Working with Ihor Kendiukhov to evaluate scalability and security guarantees of novel control protocol classes for advanced AI systems.

Extending Greenblatt et al.'s framework by designing hierarchical and parallel control topologies to optimize safety-usefulness trade-offs.

Conducting scaling experiments to verify generalization of control guarantees across diverse model capabilities.

Independent Interpretability Researcher, Mentored by David Africa (UK AI Security Institute) · Nov 2025 – Feb 2026

Conducted independent research on mechanistic interpretability with direct technical guidance from a Research Scientist at the UK AI Security Institute.

Developed a framework to permanently embed steering vectors into model weights, enabling persistent safety behaviors without inference-time overhead.

Research Intern, International Institute of Information Technology, Hyderabad · May 2025 – Oct 2025

Researched Universal Semantic Representations and developed techniques for generating natural language from abstract syntactic-semantic structures.

Built Controlled Image-to-Text Generation systems for scientific images ensuring accurate, context-aware, domain-specific descriptions.

Conducted workshops on prompt engineering for linguistics researchers.

Machine Learning Intern, Co-build.tech · Dec 2024 – Jan 2025

Enhanced query performance by 25% by optimizing document embedding strategies for semantic similarity retrieval.

Automated mapping of legal clause dependencies, improving analysis efficiency by 30%.


Education

University College London (UCL)  — Master of Science in Machine Learning (incoming)
Aug 2026 – Aug 2027
Sathyabama Institute of Science and Technology  — Bachelor of Engineering in Computer Science (AI and ML)
Sep 2022 – May 2026

Research Projects

Internalizing Safety via Weight-Level Activation Steering

A novel model editing technique to permanently embed transient safety steering vectors into Transformer model weights.

Transforms temporary activation interventions into persistent architectural alignment without inference-time compute overhead.

Mechanistic Interpretability · AI Safety · PyTorch · Transformers
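The core identity behind weight-level steering can be shown in a few lines: adding a steering vector to the output-projection bias of a block is mathematically equivalent to adding that vector to the residual stream with an inference-time hook. A minimal NumPy sketch of this equivalence, with toy dimensions and a random vector purely for illustration (not the project's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32

# Toy MLP block: mlp(x) = W_out @ gelu(W_in @ x) + b_out
W_in = rng.normal(size=(d_ff, d_model)) * 0.1
W_out = rng.normal(size=(d_model, d_ff)) * 0.1
b_out = np.zeros(d_model)

def gelu(x):
    # tanh approximation of GELU, as used in GPT-style models
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x, b):
    return W_out @ gelu(W_in @ x) + b

v = rng.normal(size=d_model)  # hypothetical steering direction
x = rng.normal(size=d_model)  # one residual-stream activation

steered_at_inference = mlp(x, b_out) + v   # hook-style intervention
steered_in_weights = mlp(x, b_out + v)     # vector folded into the weights

assert np.allclose(steered_at_inference, steered_in_weights)
```

Because the bias is added identically for every token, folding the vector in this way removes the hook (and its overhead) while preserving the intervention exactly at that layer.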

Black-Scholes Modelling with Kolmogorov Arnold Networks

Applying KANs to financial derivatives pricing, exploring whether learnable activation functions improve on classical Black-Scholes approximations.

Working to improve on the 72% pricing accuracy achieved on a synthetic European options dataset by refining the KAN architecture.

KAN · Financial ML · PyTorch · Quantitative Finance
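For context, the classical baseline a KAN pricer is measured against is the closed-form Black-Scholes formula. A self-contained reference implementation of the standard European call price (textbook formula, not the project's code):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European call.

    S: spot, K: strike, r: risk-free rate, sigma: volatility, T: years to expiry.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: S=100, K=100, r=5%, sigma=20%, T=1y -> ~10.45
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))
```

Generating (S, K, r, sigma, T) samples and their closed-form prices with a pricer like this is the usual way to build the synthetic supervision targets a KAN learns to approximate.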

Grants

Lambda Labs Research Grant ($1,000, scalable to $5,000)

Awarded for research on interpretability and routing dynamics of Mixture-of-Experts (MoE) models during inference. Work involves mathematical analysis of expert load balancing, probing memorization vs. generalization through expert usage patterns, and inference-only diagnostic tools.
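One such inference-only diagnostic can be computed directly from router outputs: take a batch of router logits, count each expert's share of top-k assignments, and measure the entropy of that load distribution (uniform load gives the maximum, ln of the expert count). A toy NumPy sketch; the shapes, top-2 routing, and random logits are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, n_experts, k = 1000, 8, 2

# Router logits for each token over all experts (stand-in for real model outputs)
logits = rng.normal(size=(n_tokens, n_experts))

# Top-k expert assignment per token
topk = np.argsort(logits, axis=-1)[:, -k:]

# Load share: fraction of all top-k slots routed to each expert
counts = np.bincount(topk.ravel(), minlength=n_experts)
load = counts / counts.sum()

# Entropy of the load distribution; ln(n_experts) means perfectly balanced
entropy = -np.sum(load * np.log(load + 1e-12))
print(load.round(3), round(float(entropy), 3), round(float(np.log(n_experts)), 3))
```

Tracking how this entropy shifts between held-out and memorized inputs is one way to probe memorization vs. generalization without touching the model's weights.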


Talks & Seminars

Controlled Image Generation  · Language Technologies Research Center (LTRC), IIIT Hyderabad
2025
Prompt Engineering for Linguistic Research  · Language Technologies Research Center (LTRC), IIIT Hyderabad
2025
Safety and Alignment of LLMs  · Sathyabama Institute of Science and Technology
2025
Impact and Application of Generative AI  · Google Developer Summit
2024
Introduction to Machine Learning and Scopes  · Sathyabama Institute of Science and Technology
2024

Academic Service

Reviewer — NeurIPS 2025, AI-MATH Workshop

Reviewer — AAAI 2026, Workshop on Shaping Responsible Synthetic Data in the Era of Foundation Models


Skills

Languages & Tools: Python, C++, SQL, Linux, WandB, Modal
ML Frameworks: PyTorch, JAX, Transformers (HF), DeepSpeed, vLLM, TRL, Unsloth
Libraries: NumPy, SciPy, Pandas, Scikit-learn, Matplotlib, Seaborn, Plotly, Datasets (HF)