Kamesh R
Sathyabama Institute of Science and Technology

Hey there! I'm Kamesh R (காமேஷ்) from Chennai, India, an AI researcher interested in reasoning systems, AI for science, and quantization techniques for efficient, scalable machine learning models. My work focuses on making AI systems reason more reliably and run lighter without losing performance.

I'm passionate about exploring the fundamentals of machine learning theory and how reasoning capabilities can be enhanced in modern AI systems. I also enjoy working on open-source projects and collaborating with peers on research that pushes boundaries.

Beyond research, I'm proudly rooted in my culture. I have a deep love for Tamil (தமிழ்), my mother tongue, and enjoy studying its history, literature, and linguistic richness.

And when I'm not coding or writing papers, you'll probably find me cheering for Liverpool FC in the Premier League. ❤️

Experience

IIIT-Hyderabad
Research Intern, May 2025 - Present
Stealth Startup
AI Engineer Intern, Jan 2025 - Apr 2025
CobuildX.ai (formerly OptimisticAI.com)
AI Engineer Intern, Dec 2024 - Jan 2025
Sathyabama Institute of Science and Technology
Undergraduate Research Assistant, Sep 2024 - Dec 2024

Education

Sathyabama Institute of Science and Technology
Department of Computer Science and Engineering
Undergraduate Student, Sep 2022 - Present

Selected Publications
Think Beyond Size: Adaptive Prompting for More Effective Reasoning

Kamesh R

International Conference on Learning Representations (ICLR)

Pretrained large language models (LLMs) are increasingly utilized across a wide range of natural language processing (NLP) tasks due to their impressive capabilities as few-shot learners. Recent techniques, such as chain-of-thought (CoT) prompting, have significantly advanced multi-step reasoning by introducing step-by-step decomposition, achieving state-of-the-art results on complex reasoning benchmarks. However, these approaches often rely on static prompting templates that do not adapt to task complexity or errors during the reasoning process. In this work, we introduce Adaptive Prompting, a dynamic and iterative framework designed to enhance reasoning by incorporating real-time adjustments to prompt structures and validation. Our results demonstrate that Adaptive Prompting significantly improves performance on diverse reasoning benchmarks, including arithmetic reasoning (GSM8K, MultiArith), logical reasoning, and commonsense tasks, achieving substantial accuracy gains compared to static prompting baselines. By integrating guided prompts, intermediate validation, and self-corrective steps, our approach enables smaller models to achieve competitive performance with larger counterparts, such as GPT-4, while maintaining computational efficiency. The framework achieves this without requiring fine-tuning or task-specific training data, highlighting the untapped potential of iterative reasoning methods.
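
To make the idea in the abstract concrete, here is a minimal, hypothetical sketch of what an adaptive prompting loop could look like. It is not the paper's implementation: llm_generate and validate_step are assumed stand-ins for a model API call and an intermediate-step checker, and the prompt wording is illustrative.

# Hypothetical sketch of an adaptive prompting loop in the spirit of the
# abstract above; not the paper's actual implementation.

def llm_generate(prompt: str) -> str:
    # Stand-in: replace with a call to any LLM API.
    raise NotImplementedError

def validate_step(step: str) -> bool:
    # Stand-in: replace with a domain check, e.g. re-evaluating arithmetic.
    raise NotImplementedError

def adaptive_prompting(question: str, max_rounds: int = 3) -> str:
    # Start from a guided step-by-step prompt, then iterate: generate,
    # validate each intermediate step, and fold any failures back into
    # the prompt as self-corrective feedback.
    prompt = f"Solve step by step:\n{question}"
    answer = ""
    for _ in range(max_rounds):
        answer = llm_generate(prompt)
        steps = [s for s in answer.splitlines() if s.strip()]
        failures = [s for s in steps if not validate_step(s)]
        if not failures:
            break  # every intermediate step passed validation
        prompt = (
            f"Solve step by step:\n{question}\n"
            "Your previous attempt contained errors in these steps; "
            "correct them:\n" + "\n".join(failures)
        )
    return answer

The sketch only shows the validate-and-retry core; the framework described in the abstract additionally adapts the prompt structure to task complexity.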
