Deep Learning Approaches to Spacecraft Pose Estimation
Investigation and development of deep learning methodologies for spacecraft pose estimation using the SPEED dataset to improve accuracy and computational efficiency for autonomous proximity operations.
Requirements
- M.Sc. in Machine Learning, Data Science, Computer Science, Mathematics, Telecommunications, or similar
- Good knowledge of Python
- Software development skills
- Basic concepts of image processing
- Basic concepts of data science, including data analysis, data processing, and deep learning
Description
Spacecraft pose estimation is essential for autonomous proximity operations in space, where vision systems face extreme lighting, reflective materials, and computational constraints. Traditional approaches using handcrafted features struggle with these challenges. This thesis explores how deep learning can address these limitations by leveraging the SPEED dataset: https://purl.stanford.edu/dz692fn7184
The purpose of this thesis is to investigate, develop, and evaluate deep learning methodologies for spacecraft pose estimation using the SPEED (Spacecraft Pose Estimation Dataset) dataset. This research aims to improve the accuracy and computational efficiency of determining the six-degree-of-freedom pose (position and orientation) of a spacecraft during proximity operations, which is critical for autonomous docking, on-orbit servicing, and space debris removal missions.
The research will begin by examining current pose estimation techniques and their limitations while exploring relevant computer vision advances. The SPEED dataset will be analyzed and enhanced through data augmentation that simulates additional space-specific conditions, to improve model robustness in operational scenarios.
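As a concrete illustration, the sketch below shows how such augmentations might be assembled, assuming a PyTorch/torchvision workflow (an assumption of this example, not a requirement of the thesis). The transform choices and parameter ranges are purely illustrative and would need tuning against the actual SPEED imagery.

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Illustrative sensor-noise augmentation; the noise level is a guess."""
    def __init__(self, std=0.02):
        self.std = std

    def __call__(self, img):
        # img is a tensor in [0, 1]; add zero-mean noise and clamp back to range.
        return torch.clamp(img + torch.randn_like(img) * self.std, 0.0, 1.0)

# Hypothetical pipeline approximating space-specific conditions:
# harsh illumination changes, slight blur, and sensor noise.
space_augmentations = transforms.Compose([
    transforms.ColorJitter(brightness=0.6, contrast=0.6),      # extreme lighting
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # defocus/motion blur
    transforms.ToTensor(),
    AddGaussianNoise(std=0.02),                                 # sensor noise
])
```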
The core research involves developing specialized neural networks for spacecraft pose estimation, including direct regression networks, PoseNet-style architectures, and transformer-based models. Given onboard computing limitations, the work will emphasize model efficiency through techniques like network pruning, quantization, and knowledge distillation.
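A minimal sketch of the direct-regression variant, assuming a PyTorch implementation with a ResNet-18 backbone, is shown below. The class name PoseRegressionNet and the head sizes are hypothetical choices for illustration, not the architecture the thesis will necessarily adopt.

```python
import torch
import torch.nn as nn
from torchvision import models

class PoseRegressionNet(nn.Module):
    """Direct-regression pose network: shared CNN features feed two heads,
    one for translation (3 values) and one for orientation as a unit quaternion (4 values)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # keep the 512-dim global feature vector
        self.backbone = backbone
        self.position_head = nn.Linear(512, 3)
        self.orientation_head = nn.Linear(512, 4)

    def forward(self, x):
        features = self.backbone(x)
        position = self.position_head(features)
        quaternion = self.orientation_head(features)
        # Normalize so the orientation output lies on the unit-quaternion manifold.
        quaternion = quaternion / quaternion.norm(dim=-1, keepdim=True).clamp(min=1e-8)
        return position, quaternion
```

The same backbone-plus-heads pattern also lends itself to the efficiency techniques mentioned above, since pruning or quantizing the backbone leaves the small regression heads untouched.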
Training will address the unique challenges of pose representation through specialized loss functions for orientation spaces and techniques to handle rotation discontinuities. Evaluation will compare the approaches against traditional methods across multiple metrics.
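One common way to handle the double-cover discontinuity of quaternions (q and -q encode the same rotation) is a geodesic loss on the absolute dot product. The sketch below, again assuming PyTorch, is one illustrative formulation rather than the specific loss the thesis will adopt.

```python
import torch

def quaternion_geodesic_loss(q_pred, q_true, eps=1e-7):
    """Mean geodesic angle (radians) between predicted and ground-truth rotations.
    Taking |q_pred . q_true| makes the loss invariant to the q / -q ambiguity."""
    q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True).clamp(min=eps)
    q_true = q_true / q_true.norm(dim=-1, keepdim=True).clamp(min=eps)
    dot = torch.sum(q_pred * q_true, dim=-1).abs().clamp(-1 + eps, 1 - eps)
    return (2.0 * torch.acos(dot)).mean()
```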
The final phase will develop an optimized end-to-end pipeline with pre-processing, inference, and temporal filtering for stable pose estimates. This research aims to advance autonomous spacecraft operations for satellite servicing, docking, and debris removal, bridging the gap between deep learning advances and operational space system constraints.
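As an illustration of the temporal-filtering stage, the sketch below combines exponential smoothing of the translation with a hemisphere-corrected normalized blend of consecutive quaternions. The class name PoseSmoother and the smoothing factor are hypothetical; a Kalman-style filter would be a natural alternative for the final pipeline.

```python
import numpy as np

class PoseSmoother:
    """Simple recursive smoother for per-frame pose estimates."""
    def __init__(self, alpha=0.7):
        self.alpha = alpha        # weight given to the newest measurement
        self.position = None
        self.quaternion = None

    def update(self, position, quaternion):
        position = np.asarray(position, dtype=float)
        quaternion = np.asarray(quaternion, dtype=float)
        quaternion /= np.linalg.norm(quaternion)
        if self.position is None:
            self.position, self.quaternion = position, quaternion
        else:
            self.position = self.alpha * position + (1 - self.alpha) * self.position
            # Flip sign if needed so both quaternions lie on the same hemisphere.
            if np.dot(self.quaternion, quaternion) < 0:
                quaternion = -quaternion
            blended = self.alpha * quaternion + (1 - self.alpha) * self.quaternion
            self.quaternion = blended / np.linalg.norm(blended)
        return self.position, self.quaternion
```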