
Expert Research Scientist on Polyfunctional Robot Perception & AI

Job ID
505284
Posted on
06-May-2026
Organization
Foundational Technologies
Business Area
Research & Development
Company
Siemens Ltd., China
Experience Level
Experienced
Job Type
Full-time
Work Arrangement
On-site only
Contract Type
Fixed-term contract
Location
  • Beijing - Beijing Municipality - China
  • Shanghai - Shanghai Municipality - China
  • Suzhou - Jiangsu Province - China
The Opportunity: Expert Research Scientist on Polyfunctional Robot Perception & AI
Are you a trailblazing researcher with a deep passion for enabling robots to truly "see," "understand," and "reason" about the world around them? Do you thrive on pushing the boundaries of perception, machine learning, and artificial intelligence to empower complex robotic systems with unprecedented autonomy and adaptability in dynamic industrial environments?
We are seeking an outstanding and highly innovative Expert Research Scientist on Polyfunctional Robot Perception & AI to join our core research team. In this pivotal role, you will be instrumental in the research, design, development, and deployment of advanced perception and AI algorithms that allow our next-generation polyfunctional robots to perceive their surroundings, interpret complex scenes, make intelligent decisions, and learn from experience. This is about giving our robots the cognitive capabilities to excel in challenging, unstructured industrial settings.
________________________________________
What You Will Do:
Lead Perception & AI Research: Drive cutting-edge research in robot perception and AI, focusing on developing novel algorithms and methodologies for polyfunctional robots operating in industrial environments.
Sensor Data Fusion: Design and implement advanced sensor fusion techniques (e.g., Kalman filters, particle filters, deep learning-based fusion) to combine data from various modalities (e.g., LiDAR, cameras, depth sensors, force/torque sensors) for robust state estimation and environmental understanding.
Object Detection, Recognition & Tracking: Develop and optimize algorithms for real-time object detection, recognition, pose estimation, and tracking of known and unknown objects, including deformable objects and those in cluttered scenes.
Scene Understanding & Semantic Mapping: Research and develop methods for semantic scene understanding, 3D reconstruction, and dynamic environment mapping, enabling robots to build rich, actionable representations of their surroundings.
AI-driven Task Planning & Decision Making: Develop AI algorithms for high-level task planning, decision-making under uncertainty, and intelligent behavior generation, leveraging perception outputs for adaptive robot autonomy.
Machine Learning for Robotics: Apply and advance machine learning techniques (e.g., deep learning, reinforcement learning, imitation learning) for various perception and AI challenges, including learning from demonstration and continuous self-improvement.
Performance Evaluation & Benchmarking: Rigorously evaluate perception and AI algorithms through simulation and real-world experiments, analyzing accuracy, robustness, and computational efficiency.
Collaboration & Documentation:
  • Work closely with Control Engineers to provide perception data for intelligent control strategies.
  • Collaborate with Embedded Systems Engineers to ensure efficient, real-time execution of perception and AI algorithms.
  • Partner with Mechatronics System Engineers to understand sensor capabilities and physical constraints.
  • Interface with System Integration Engineers for seamless integration, testing, and validation of perception and AI modules.
  • Document research, algorithms, code, and results.
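To give a flavor of the sensor-fusion work described above, here is a minimal 1-D Kalman filter sketch: a constant-velocity motion model corrected by noisy position measurements. The scenario, function name, and noise parameters are invented for illustration and are not the team's actual fusion stack.

```python
# Illustrative sketch: 1-D Kalman filter with a constant-velocity model.
# State is [position, velocity]; each step fuses the model prediction
# with a noisy position measurement z.

def kalman_1d(measurements, dt=0.1, q=0.01, r=0.5):
    """Track [position, velocity] from noisy position readings."""
    x = [0.0, 0.0]                      # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # estimate covariance
    for z in measurements:
        # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with measurement z of position (H = [1, 0])
        S = P[0][0] + r                     # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]      # Kalman gain
        y = z - x[0]                        # innovation (residual)
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

est = kalman_1d([5.0] * 200)  # settles near position ~5.0, velocity ~0
```

Production fusion stacks extend this same predict/update structure to multiple modalities (LiDAR, cameras, IMU) and higher-dimensional states, or replace the linear model with extended/unscented variants or learned components.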
________________________________________
What You Bring:
Education: Master's or Ph.D. in Robotics, Computer Science, Artificial Intelligence, Electrical Engineering, or a closely related field with a strong focus on robot perception, computer vision, or machine learning.
Research & Development Experience: Proven track record of significant research, design, and development contributions in robot perception, Large Language Models (LLMs) or Large Multimodal Models (LMMs), Reinforcement Learning (RL), Embodied AI / Embodied Agents, or Generative AI for complex behaviors or world models. 
Core Technical Expertise:
  • Computer Vision: Expert-level proficiency in computer vision algorithms, including pose estimation, object detection, 3D perception, point cloud processing, and multi-view geometry.
  • Sensor Fusion: Deep experience with advanced sensor fusion techniques, particularly combining data from disparate sources like IMU, vision, and joint states for robust state estimation and environmental modeling.
  • Machine Learning & Deep Learning: Deep theoretical and practical experience with various ML/DL architectures (e.g., CNNs, RNNs, Transformers) and frameworks (e.g., TensorFlow, PyTorch) applied to robotic tasks.
  • Task Planning: Solid understanding of and experience with basic task planning methodologies, including classical AI planning, state-space search, or other decision-making frameworks for robotics.
  • Robotics Software: Expert-level proficiency in C++ and Python, with extensive experience in robotics middleware (e.g., ROS/ROS 2).
  • Simulation-Based Learning & Testing: Experience leveraging simulation environments (e.g., Gazebo, Isaac Sim) for data generation, training, and testing of perception and AI algorithms, including techniques like simulation-to-real (sim2real) transfer.
  • Algorithm Optimization: Ability to optimize perception and AI algorithms for real-time performance on constrained hardware (e.g., GPUs, FPGAs, embedded processors).
  • Mathematical Foundations: Solid understanding of linear algebra, probability theory, statistics, and optimization techniques relevant to perception and AI.
Problem-Solving: Exceptional analytical, experimental, and problem-solving skills, with the ability to tackle complex, unstructured perception and AI challenges in real-world scenarios.
Hands-on Skills: Demonstrated ability to implement, test, and deploy perception and AI algorithms on physical robotic platforms, including data collection, annotation, model training, and validation.
Communication: Excellent written and verbal communication skills, capable of presenting complex technical concepts clearly, publishing research findings, and collaborating effectively in a multidisciplinary team environment.
Passion: A deep passion for advancing the intelligence and perceptual capabilities of polyfunctional robots, and a desire to see your research make a tangible impact.
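As a small illustration of the classical AI planning and state-space search methods named in the requirements above, here is a toy breadth-first-search planner. The pick-and-place domain, state encoding, and all names (`plan_bfs`, `goto_A`, etc.) are hypothetical examples, not part of any actual system.

```python
# Illustrative sketch: a toy state-space search planner. Breadth-first
# search over (state, action) transitions returns a shortest plan.
from collections import deque

def plan_bfs(start, goal, actions):
    """Shortest action sequence turning `start` into `goal`.

    `actions` maps an action name to a (precondition, effect) pair of
    functions over hashable states."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, (pre, eff) in actions.items():
            if pre(state):
                nxt = eff(state)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None  # goal unreachable

# Hypothetical domain: state = (robot_location, part_location),
# where the part is at station "A"/"B" or "held" by the robot.
actions = {
    "goto_A": (lambda s: s[0] != "A", lambda s: ("A", s[1])),
    "goto_B": (lambda s: s[0] != "B", lambda s: ("B", s[1])),
    "pick":   (lambda s: s[1] == s[0], lambda s: (s[0], "held")),
    "place":  (lambda s: s[1] == "held", lambda s: (s[0], s[0])),
}
plan = plan_bfs(("A", "A"), ("B", "B"), actions)  # pick, goto_B, place
```

Real task planners swap the exhaustive search for heuristic search (e.g., A*), PDDL-style domain descriptions, or learned policies, but the state-transition framing stays the same.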
________________________________________
Bonus Points If You Have:
Familiarity with functional safety considerations for AI-driven systems in industrial applications.
Knowledge of explainable AI (XAI) or robust AI techniques for safety-critical applications.
Experience with semantic SLAM or dynamic environment mapping.
________________________________________
Why Join Us?
Impact: Be at the forefront of defining how polyfunctional robots perceive, understand, and interact with complex industrial environments, directly shaping their autonomy and capabilities.
Innovation: Work on truly novel research problems at the intersection of perception, AI, and advanced robotics, pushing the boundaries of what intelligent machines can achieve.
Resources: Access to state-of-the-art computing infrastructure, advanced robotic platforms, extensive sensor suites, and a dedicated team of experts.
Culture: A collaborative, intellectually stimulating, and supportive environment where your groundbreaking ideas are valued and encouraged.
Growth: Opportunities for professional development, continuous learning, and career advancement in a rapidly evolving and high-impact field.
________________________________________
Ready to empower polyfunctional robots with intelligence and perception?
If you are a pioneering research scientist eager to make a profound impact on the cognitive capabilities of advanced robotic systems, we encourage you to apply! Please submit your CV, a cover letter outlining your research interests and relevant experience, and links to your portfolio, publications, or GitHub profile.