Dr. Aleksei Staroverov | Robotics | Best Researcher Award

Dr. Aleksei Staroverov | Artificial Intelligence Research Institute | Russia

Dr. Aleksei Staroverov is a distinguished researcher in the field of artificial intelligence and robotics, currently serving as a Senior Research Scientist at the Artificial Intelligence Research Institute (AIRI). He has made significant contributions to the development of Vision-Language-Action (VLA) models, reinforcement learning frameworks, and embodied AI systems, focusing on bridging the gap between simulated environments and real-world robotic applications. With a strong academic and professional background, he has consistently advanced state-of-the-art methodologies, mentored research teams, driven high-impact publications, and pushed forward innovations in multimodal AI and autonomous robotics. His work demonstrates exceptional expertise in AI-driven robotic navigation, manipulation, and simulation-based learning, positioning him as a leading figure in his research domain.

Professional Profile

GOOGLE SCHOLAR

SCOPUS

Summary of Suitability

Dr. Aleksei Staroverov is a highly accomplished researcher specializing in Artificial Intelligence, Robotics, Reinforcement Learning (RL), and Vision-Language-Action (VLA) models. His academic background, professional achievements, and impactful research contributions position him as a strong candidate for the Best Researcher Award.

Education

Dr. Aleksei Staroverov earned his Doctor of Philosophy (Ph.D.) in Artificial Intelligence from the Moscow Institute of Physics and Technology (MIPT), where he specialized in advanced reinforcement learning techniques, robotic simulation frameworks, and multimodal AI model development. His doctoral research focused on developing adaptive VLA models capable of integrating visual, linguistic, and action-driven data for real-world robotics applications. He also holds a Specialist degree in High-Energy Propulsion Systems from Bauman Moscow State Technical University, where he gained deep expertise in high-performance computational modeling and control systems. Complementing his academic qualifications, he has successfully completed certifications in Deep Learning from DeepLearning.AI and Machine Learning from Stanford University, solidifying his foundation in cutting-edge AI methodologies.

Experience

Currently a Senior Research Scientist at AIRI, Dr. Aleksei Staroverov spearheads the development of advanced Vision-Language-Action models for embodied AI, focusing on reinforcement learning-driven fine-tuning strategies for robotic navigation and manipulation. He leads research initiatives, validates novel technical approaches, and guides cross-functional teams working on simulation-to-reality transfer in robotics. Prior to this, he served as a Research Scientist at VLA Research, where his work centered on adapting multimodal transformer models for reinforcement learning-based control systems, implementing algorithms in simulated environments, and transferring them to real-world robotic platforms. Earlier in his career, he worked as a Junior Researcher at the Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences (FRC CSC RAS), contributing to the design of hierarchical reinforcement learning algorithms for solving complex navigation problems. Across these roles, he has consistently driven innovation, mentored young researchers, and contributed to high-impact advancements in the field of robotics and artificial intelligence.

Research Interests

Dr. Aleksei Staroverov's research interests lie primarily at the intersection of reinforcement learning, multimodal transformer models, and embodied AI for robotics. He focuses on building advanced Vision-Language-Action frameworks capable of understanding complex real-world environments, enabling autonomous agents to perform intricate tasks with high adaptability. His work emphasizes simulation-to-real transfer, model fine-tuning, and adaptive policy learning in dynamic environments. Additionally, he explores hybrid AI architectures, integrating visual perception, natural language understanding, and motion planning to develop robust robotic systems capable of reasoning and executing context-aware actions in diverse environments.

Awards

Dr. Aleksei Staroverov has achieved remarkable recognition in the AI and robotics community, securing top honors in prestigious international competitions. He led his teams to victory at the Habitat Navigation Challenge in the ObjectNav phase and the NeurIPS MineRL competition, demonstrating exceptional expertise in developing cutting-edge algorithms for robotic navigation and reinforcement learning. These accolades highlight his capability to deliver state-of-the-art solutions to complex AI-driven robotics challenges and validate his leadership in advancing embodied intelligence research.

Publication Top Notes

Real-time object navigation with deep neural networks and hierarchical reinforcement learning
Year: 2020
Citations: 59

Hierarchical deep q-network from imperfect demonstrations in Minecraft
Year: 2021
Citations: 37

Forgetful experience replay in hierarchical reinforcement learning from expert demonstrations
Year: 2021
Citations: 32

Skill fusion in hybrid robotic framework for visual object goal navigation
Year: 2023
Citations: 14

Hierarchical landmark policy optimization for visual indoor navigation
Year: 2022
Citations: 12

Conclusion

Dr. Aleksei Staroverov's contributions to artificial intelligence, embodied robotics, and reinforcement learning demonstrate his exceptional capabilities as a researcher and innovator. His interdisciplinary expertise, impactful publications, and leadership in advancing Vision-Language-Action models position him as a driving force in bridging simulation-based AI with real-world applications. Through his pioneering research, award-winning solutions, and collaborative initiatives, he continues to push the boundaries of autonomous robotics, contributing significantly to the progress of intelligent systems research and establishing himself as a highly deserving candidate for recognition.

Zihan Deng | Artificial Intelligence | Best Researcher Award

Harbin Institute of Technology, China

Zihan Deng is an accomplished early-career researcher in the field of imaging technology and computational tomography, with a strong foundation in deep learning and artificial intelligence. With a robust academic background and an array of interdisciplinary experiences, Deng has made significant contributions through high-impact publications, competitive grants, and patents. His expertise lies at the intersection of optical instrumentation and medical image analysis, and he continues to actively engage in scientific exploration with promising results.

Profile

ORCID

Education

Deng completed his undergraduate studies in Computer Science and Technology at Harbin Engineering University (2019–2023), ranking in the top 5% of his class. His academic curriculum included rigorous coursework in mathematics and computer science, scoring consistently above 90 in core subjects. He was subsequently recommended for direct admission into the graduate program at Harbin Institute of Technology, where he is currently pursuing his Master’s degree at the Institute of Ultra-Precision Optical Instrument Engineering under the mentorship of Professor Junning Cui and Academician Jiubin Tan. His research spans CT reconstruction, deep learning-based image enhancement, and X-ray detection technologies.

Experience

Deng has accumulated diverse experience through internships and collaborative projects. He served in leadership roles within student organizations and academic competitions, including receiving awards in national-level modeling and software contests. He undertook summer research at Tsinghua University’s IDG/McGovern Brain Research Institute and was later selected to join Germany’s PTB “Chief Engineer Class” as a visiting scholar. Professionally, he interned with Chengdu Shuzhilian Technology and Guangzhou CVTE, where he contributed to image processing and video enhancement projects. He has also played key roles in multimillion-yuan research collaborations with institutions like CGN Research Institute and GF High-End Semiconductor Imaging Systems.

Research Interests

Deng’s research interests revolve around imaging technology, deep learning, and CT reconstruction methods. He focuses on developing advanced algorithms for sparse-angle computed tomography, artifact reduction, and multi-view image correction using neural networks. His work integrates domain-specific knowledge from instrumentation science with state-of-the-art machine learning frameworks to improve image quality in both medical diagnostics and industrial inspection. He also investigates beam hardening correction and reconstruction under large field-of-view (FOV) conditions, addressing challenges in high-precision imaging systems.

Awards

Over the course of his academic journey, Deng has received 11 scholarships and numerous accolades. These include five first-class and two second-class academic scholarships from Harbin Engineering University, the prestigious Xiaomi Scholarship, and the Outstanding Youth League Member Award. His undergraduate thesis on sparse-angle CT reconstruction was selected as an Excellent Graduation Project (top 2%). He has also won national-level awards in competitions such as the Mathematical Modeling Contest and the English Proficiency Championship.

Publications

Deng has authored or co-authored several influential papers in prestigious journals and conferences. His representative publications include:

  1. Deng Z., Wang Z., et al. (2024). “COO-DuDo: Computation Overhead Optimization Methods for Dual-Domain Sparse-View CT Reconstruction”, Expert Systems with Applications (JCR Q1, IF=7.5, in press) – cited in advanced CT algorithm research.

  2. Deng Z., Wang Z., Lin L., Wang S., Cui J. (2024). “Research on the Effectiveness of Multi-View Slice Correction Technology Based on Deep Learning in High-Pitch Spiral Scanning Reconstruction”, Journal of X-Ray Science and Technology (JCR Q2, IF=3.0) – applied in spiral CT systems.

  3. Wang Z.#, Deng Z.#, Liu F., et al. (2023). “OSNet & MNetO for Linear Computed Tomography in Multi-Scenarios”, IEEE Transactions on Instrumentation and Measurement (JCR Q1, IF=5.6; # denotes equal contribution) – widely cited in instrumentation imaging.

  4. Deng Z., Deng K., Wang Z., et al. “Small Class Discussion-Based Teaching in Instrumentation Education”, The International Journal of Education – cited in engineering education reform discussions.

  5. Li Z., Li K., Deng Z., et al. (2024). “Assessment of Sheetlet Thickness in Human Left Ventricular Free Wall Using X-ray Phase-Contrast Microtomography”, Medical Image Analysis (JCR Q1, IF=10.9, accepted) – applied in cardiovascular research.

  6. Deng Z., Wang Z., Lin L., et al. (2025). “Computation Overhead Optimization Dual-Domain Network for Sparse-View CT Reconstruction”, ICASSP 2025 (CCF-B Conference) – in review, expected to support efficient CT image pipelines.

  7. Deng Z., Wang Z., Lin L., Wang S. “Hel-MUNet: Mamba-Unet with Helical Encoding for Clinical High Pitch Helical CT Reconstruction”, MICCAI 2025 (under review) – aligned with cutting-edge clinical imaging methods.

Conclusion

Zihan Deng exemplifies the next generation of research professionals driving innovation in imaging and artificial intelligence. Through a blend of strong theoretical foundation, hands-on project experience, and impactful publications, he has demonstrated exceptional capability in solving complex technical problems. With continued guidance under leading scholars and global exposure, Deng is well-positioned to become a prominent figure in the advancement of smart medical imaging and intelligent instrumentation.